This presentation by Richard Palmer at the JISC Digifest is more a provocation than a promotion of learning analytics! There is so much bias in this piece that it's hard to decide where to begin taking it apart. It is full of empty generalisations and predictions, so I start by issuing a WARNING – this read might damage your views on human teachers.

The argument that computers don't come to work hung over, stressed, or loaded with bias has been used many times, especially during a period of technological evolution dominated by the illusion that computers are infallible and perfectly neutral. We have known for a long time that neither is the case. Quite apart from this idealised view, computer systems and networks pose some of the greatest security risks to our societies (from petty criminals to cyber warfare). Why then should we, or would we, entrust them with education?

The vision presented is that computers can improve attainment, progression, and the educational experience of students, thus leading to better grades and retention rates. This is naive at best! Computer-led education, too, is human-managed – by humans who are not pedagogues but programmers, statisticians, and profit makers, mostly ignorant of human psychology and development. The implication is that human teachers wouldn't know how to improve attainment, progression, and experience, but that's wrong, for it is the human collective (a.k.a. the education system) that sets these quality indicators of what it means to be successful. They are determined by micro-to-macro economic thinking: the labour market, politics, parents' ambitions, value systems, social needs, and so forth. Not so long ago, humanistic education and educational selection were seen as the quality benchmark; now it is market-driven education (employability, entrepreneurship, civic compliance) and massiveness that characterise it. Can machines set these values? Should they?

Another dystopia is the promotion of services built on the idea of being “all watched over by machines of loving grace”, where systems surveil your every move and then pester you with messages asking “can I help you…?” This is followed by the thought that machines would judge the work effort and “intelligence” of students and then bombard them with support spam if they perform below their algorithmic prediction. The mention of “objective criteria” makes me laugh in this context, in a world where fake news and post-truth knowledge dominate the headlines every day. The age of objectivism is long over.

One criticism of humans expressed by Palmer is that they are sluggish to change and cling to the ways things have been done in the past. I disagree with this prejudice. Human history has always been polarised between progress and conservation. We tend to cling to the “known” because familiar routines help us be efficient in terms of brain power and energy consumption. Innovation is always connected to risk assessment, but it isn't fair to claim that we have never evolved beyond the status quo. After all, we invented machines to change production processes (like the looms Palmer mentions).

Comparing the complexity of guiding a young student to that of driving a car actually answers itself – a car is just another machine with simple mechanical responses, while a student embedded in society is a complex system that has more to do with chaos theory than with algorithms. True, computers don't turn up hung over or stressed, but they do crash frequently, and network failures bring entire workplaces to a standstill. What's better? Well, can you communicate with a crashed computer? Turning it around, with a hung-over student I can still communicate on some level – even if it is just to buy him time to recover! Can a computer do the same, and understand why he needs it?

Here's a good thing about humans that's not in the presentation: we can think flexibly and context-specifically, which lets us accommodate special needs and wishes. Compare this to the experience with a wifi-driven ordering system in a restaurant (try asking for rice instead of fries) or with a self-service till at the supermarket – will it tell you that you can get two for one if its algorithm is geared towards maximising the company's profits? Will it send you to the shop opposite because they have a better offer? Such things happen every day between humans. They are not pre-programmed events; they are social interactions.

Yes, technologies will improve. Yes, there is a danger that machines will replace human teachers (at least in certain functions), but it is preposterous to assume that this will lead to a better world!
