Sun 29 May 2016
Learning Analytics: expectations and predictions
I am always wary when it comes to hyping a new technology. As the recent LAK16 global conference hinted, Learning Analytics may just have reached the peak of the Gartner hype cycle.
Learning Analytics certainly promises new insights into learning and a new basis for learner self-reflection and support services. But it is dangerous to expect it to produce “truth about learning”! A forthcoming paper I recently reviewed covers the promising influence LA has on the Learning Sciences and rightly demands that more learning theories be put at the basis of LA. Yet, as Paul Kirschner expressed in his keynote presentation, there are many types of learning, and LA research and development often simplifies and generalises them.
To correctly ground our expectations in some sort of reality, we only need to look at areas where data analysis and prediction have long been used to “tell the truth” about the future so that appropriate measures can be taken: politics, economics, and the weather forecast. Free of human unpredictability, weather forecasting has become the most accurate of the data-heavy sciences, yet even there the long-term predictions still carry a strong element of randomness and guesswork. Do we want to risk students’ futures on 75% probabilities?
Even where accuracy is higher, the question of algorithmic accountability remains. Who will be held responsible, and how can anyone make a claim against a failed prediction? This risk is less present in the commercial world, where an inaccurate shopping suggestion in a targeted advertisement can simply be ignored, but in education careers are at stake. From a managerial perspective, while a 75–80% accuracy in predicting highly specific drop-out scenarios is scientifically fabulous, there is a cost-benefit issue attached. Simply proposing that system alerts direct teachers’ attention to particular students, whom student support services then need to call up (a call students may welcome about as much as a phone call from the bank selling new services), doesn’t cut it. As a cheaper alternative, I sarcastically suggested using a random algorithm to pick a student to receive special attention that week.
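Tongue in cheek, that “cheaper alternative” amounts to little more than the sketch below — no student model, no probabilities, just a uniform draw. The roster names are of course invented:

```python
import random

# Hypothetical roster; in practice this would come from the student records system.
students = ["student_a", "student_b", "student_c", "student_d"]

def pick_student_of_the_week(roster):
    """Select one student at random to receive special attention this week.

    No predictive model, no 75-80% accuracy claims -- just a uniform choice.
    """
    return random.choice(roster)

print(pick_student_of_the_week(students))
```

The serious point behind the joke: if follow-up capacity is the real bottleneck, a costly predictive pipeline and a coin toss may end up triggering a similar number of support calls.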
It is also worth contemplating to what extent predictions about the success of learners may become self-fulfilling prophecies. Learning Analytics predictions are typically based on a number of assumptions that form the “student model”. One big assumption is that of a stable teaching/learning environment. If everything runs linearly and “on rails”, then it is relatively easy to say that the learning train departing from station A will eventually reach station B. However, it is nowadays well recognised that learning is situated, and that human teachers didactically and psychologically shape the adaptivity of the learning environment. It would, in my mind, require much higher levels of intelligence for algorithms to match the support human teachers provide, but if they did, what would become of our teachers? What would be the role of human teachers if LA and AI take over decision making? What qualities would they need to possess, or would they simply be obsolete?
We cannot neglect the human social factor in other ways either: quantifying people inevitably installs a ranking system. A leaderboard based on LA data could, on the one hand, be a motivating tool for some students (as is the case in serious and other games), but it could also lead to apathy in others once they realise they will never get to the top. The trouble is that analytics meta-tags people, and these labels are very difficult to change. They may also exercise a reverse influence on the learner, in that such labels become sticky parts of their personality or digital identity.
As so often with innovative approaches, hypes and new technologies, the benefit of Learning Analytics may not lie in what the analytics actually do or how accurate they are, but in a somewhat unexpected “side effect”. I see part of the promise of Learning Analytics in starting a discussion on how we make decisions.