This presentation by Richard Palmer at the JISC Digifest is a provocation, not a promotion, for learning analytics! There is so much bias in this piece that it's hard to decide where to begin taking it apart. It's full of empty generalisations and predictions, so I start by issuing a WARNING – this read might damage your views on human teachers.

The argument that computers don't come to work hungover, stressed, or loaded with bias has been used many times, especially during a period of technological evolution led by the illusion that computers are infallible and perfectly neutral. We have long known that neither is the case. Quite apart from this idealised view, computer systems and networks pose the greatest of all security risks to our societies (from petty criminals to cyber warfare). Why then should we or would we entrust them with education?

The vision presented is that computers can improve attainment, progression and the educational experience of students, thus leading to better grades and retention rates. This is naive at best! Computer-led education, too, is human-managed – by humans who are not pedagogues but algorithm programmers, statisticians and profit makers, mostly ignorant of human psychology and development. The implication is that human teachers wouldn't know how to improve attainment, progression and experience, but that's wrong, for it is the human collective (a.k.a. the education system) that sets these quality indicators of what it means to be successful. They are determined by micro-to-macro economic thinking about the labour market, politics, parents' ambitions, value systems, social needs, and so forth. Not so long ago, humanistic education and educational selection were seen as the quality benchmark; now it is market-driven education (employability, entrepreneurship, civic compliance) and massiveness that characterise it. Can machines set these values? Should they?

Another dystopian element is the promotion of services built on the idea of being “all watched over by machines of loving grace”, where systems survey your every move and then pester you with “can I help you…?” messages. This is followed by the thought that machines would judge the work efforts and “intelligence” of students, and then pester them with support spam if they perform below their algorithmic prediction. The mention of “objective criteria” makes me laugh in this context, in a world where fake news and post-truth knowledge dominate the headlines every day. The age of objectivism is long over.

One criticism of humans expressed by Palmer is that they are sluggish to change and cling to the ways things have been done in the past. I disagree with such a prejudice. Human history has always been polarised between progress and conservation. We tend to cling to the “known” because familiar routines help us be efficient in terms of brain power and energy consumption. Innovation is always connected to risk assessment, but it isn't fair to claim that there has been no evolution beyond the natural. And, after all, we invented machines to change production processes (like the looms he mentions).

Comparing the guidance of a young student with driving a car in complexity actually answers itself – a car is just another machine with simple mechanical responses. A student embedded in society is a complex system that has more to do with chaos theory than with algorithms. True, computers don't turn up hungover or stressed, but they do crash frequently, and network failures bring entire workplaces to a standstill. What's better? Well, can you communicate with a crashed computer? Turning it around, with a hungover student I can still communicate on some level – even if it is just to buy him time to recover! Can a computer do the same, and understand why he does it?

Here's a good thing about humans that's not in the piece: we can think flexibly and context-specifically, which means we can accommodate special needs and wishes. Compare this to the experience of a wifi-governed ordering system in a restaurant (when asking for rice instead of fries) or a computer till at the supermarket – will it tell you that you can get two for one, if the algorithm is geared to maximising the company's profits? Will it send you to the shop opposite because they have a better offer? Such things happen every day between humans. They are not programmed events; they are social interactions.

Yes, technologies will improve. Yes, there is a danger that machines will replace human teachers (at least in certain functionalities), but it is preposterous to anticipate that this will lead to a better world!


It's extremely difficult to characterise different generations and generational shifts in societal attitudes. Not everyone of the pre-1960s parent generation was a frugal family person, nor is everyone of the 2000s an entertainment junkie. Yet there are certain signature traits in behaviours, attitudes and values that make up the mainstream and stand for an era.

This interesting piece of writing looks at the rise in superstitious beliefs and the anything-goes society. The author argues that with the dramatic changes in 1960s society (the sexual revolution, hippies, etc.) a new mentality formed in the US that allowed unscientific knowledge to become part of local and national narratives. In particular, Americans became open to conspiracy theories of all sorts, which still surround us today. What's worse, these have now fully entered politics and government, shaking the foundations of our democracy.

With Star Trek and the like an integral part of my (European) youth, it never occurred to me that most space sci-fi ideas emerged in the US, including UFO sightings, aliens, and utopian and dystopian cartoons and documentaries. This is well documented in Hollywood and TV productions from Battlestar Galactica to E.T. Europe at that time (the 1960s and 1970s) was still working through its post-WWII trauma and settled for glorifying the past (empires). Very few, if any, of its film productions portrayed the futuristic, the otherworldly, or the supernatural.

Whether or not the author's implication is true – that Americans today are prone to believe whatever they want, however implausible – the internet has certainly contributed to the spread of misinformation to the credulous. Recent electoral events indicate a firm rejection of facts and science, to the extent that researchers feel the urge to protest against being sidelined.

Equally worrying is the decline in the reliability of information through the use of “social media” in journalism – I mean the references in traditional, well-established papers to social media “news” on the basis of its virality. This way, a single tweet can transform the worldview of many people – and are we really able to detect all the “fake news”, propaganda and hoaxes? Losing our grip on reality through a variety of (technological) means, e.g. fake news, opinion manipulation via Facebook, VR, the dark web, entertainment, and so forth, jeopardises our consensus-based democracy. It seems that different mob-created “worlds” emerge for us to pick and choose from, or to create our own – including delusions of caliphates and other tribes.


It sounds absurd that in the “Knowledge Society” we should struggle to defend fact-based science and research.

Here we are, after all that talk about evidence-based learning, learning analytics, machine learning, big data and mining techniques. All those enormously hyped expectations that this would lead us to a better understanding of learning and learners and, hence, to better and more customised learning opportunities. It was hailed as opening new insights into learning behaviours and beyond. Outside learning, Big Data is now exploited in many ways to maximise customer experiences and to serve space science, active citizenship, the economy, schools and universities, health, the environment, and so forth. It's the new blueprint for an information- and fact-based world – one that is sometimes too scary to contemplate.

Yet here we stand, faced with post-truth politics where facts are worth nothing, rendering science and research useless. Fake news and ignorance of science and its methodical approaches are killing the search for facts and figures that drove much of the past decades' movement towards the Knowledge Society. We have heard much about the change in management culture away from anecdotal decision-making to fact-based leadership. Well, it looks like the rumour mills are back in business.

Resistance comes from scientists and research professionals. The community is coming together and marching! There is a whole raft of activities to send out the message. But who will listen?! History tells us that it's no fair fight when science and facts meet rumours, propaganda, beliefs and outright denial. The demotion of knowledge is endangering and flattening our world… until it becomes a disc again, where we fall off either side!


Happy were the days when righteous Hollywood heroes persisted in their search for the truth despite all obstacles and deterrents.

The reality is, of course, very different. We live in the post-truth era, where (true) information is declining in value and belief is everything!

The bizarre thing is that we have more data than ever before, yet fewer facts can be deduced from it with certainty. Why is this? Let's start with the economy, where data has long been used to find the best solution to anything, including investment. Economic data, despite its abundance, holds whatever answer one is searching for. Algorithms dictate the value of stocks, prices and futures. They change in milliseconds and interact with one another – they are connected. People cannot keep up with the speed of machines and have no way of understanding what triggers events in machine agents. People then resort to plausibility, trust and beliefs (such as brand value). Companies can no longer be valued in real worth or assets, only in virtual billions that could be halved the next day.

News is another information channel that has become unreliable as a source in the search for facts and “truth”. News media are profit-making organisations that feed on popularity, not on truthfulness. What's popular doesn't have to be true. Any scandal is better than the truth. The news media love Trump because he provides them with popular stories. Trump loves the media because they give him publicity, and he doesn't have to admit to anything they write about him, no matter how scandalous. Post-truth holds no proof. Even if the intelligence services were to publish the Russian link to the election hack, it could easily be dismissed as fake news or a dark plot against him. Secret services are not employed to serve the truth, but to serve the enacted politics of the government (and their own interests). The electorate knows this. People only believe what they want to hear, to confirm their existing beliefs!

Even in criminal courts, finding an objective truth isn't as easy as one would expect with all the forensic tools now available. Why else would we see this proliferation of criminal and terrorist activities with comparatively few or no convictions for lack of proof? The risk for criminals or extremists of getting caught and convicted is minimal – which itself sounds contradictory in the age of Big Data, ubiquitous surveillance and integrated systems.

Far from finding an objective truth, even subjective truths are an endangered species. In a post-truth society, an almost religious belief in perceived realities has developed, finding confirmation and amplification wherever data is abundant and sources are well connected. Proof can no longer be provided.


There may finally be some awakening to the tsunami of graduates that has hit the labour market in recent years. The article quite reasonably sums up what is wrong with too many people going to university. Over the past twenty-five years or so, a policy of widening access has led to what the author calls “conveyor belt” education, i.e. the automatic advance from one level of school to the next with an almost unreflected progression into higher education. This situation has been accelerated by the pressure of ranking countries according to their university participation rates.

The assumption (promoted by the likes of the OECD's “Education at a Glance“) is that higher education brings better jobs, better earnings and more job security. This may still be true for some countries and some professions, but, nationally, it has had various negative impacts, including and in addition to those mentioned in the article:

We could say the OECD argument still holds true simply because graduates occupy the jobs of non-graduates and are thus, relatively speaking, “more secure” – whether a university degree is actually needed for the job or not. At the same time, they push other school leavers into inferior positions, and this downward push-chain continues, leaving the most vulnerable and least educated without jobs or in very precarious situations, while turning academics into cheap labour due to excessive supply. This leads to low-cost human resources and deteriorating employment conditions (zero-hours contracts, etc.).

Another downside of the all-time-high participation is that the quality of a university education drops to accommodate the higher number of graduates, while at the same time the cost of university education rises and is passed on to students (and parents), who are often caught in a debt trap. It also prolongs the (unproductive) time before young people earn a living. In some countries (notably Italy and Spain), the age to which young people still live at home has risen dramatically in the wake of this.

From a sector and vocational point of view, a university education adds little productivity or economic value to many businesses, since higher education is not, and does not aim to be, a vocational education for professional craft businesses or the service sector. Too often, students attend a course that has little to do with their later employment, while, conversely, most jobs have not developed into something requiring more sophistication or intellectual capital (e.g. bank clerks).

No wonder, then, that universities find themselves in an identity crisis, tasked with catering for the entire (upper-level) labour market. The political push towards university participation has, to my mind, neglected other (possible) forms of education that the economy requires. Rather than turning universities into employability factories, away from their mission as think-tanks and innovators advancing human knowledge, they would benefit from a narrower portfolio and target group. Instead of a generalist education that virtually everyone passes and whose learning outcomes strongly resemble those of the old grammar schools, we need more real diversity – i.e. quality education opportunities to fulfil a specialist role in the economy, offered by different types of institutions for a variety of learners. In Austria and Germany, the “Fachhochschulen” (comparable to FE colleges) fill this role quite successfully and are flexible enough to adapt to labour market needs.

Careers development for vocational professions should no longer be neglected; new (higher) qualification designs outside of university are conceivable, which could also foster more self-esteem in non-university learners, as well as presenting a lifelong learning pathway for high achievers in non-academic professions.

There have been persistent calls for transparency and algorithmic accountability in learning analytics. Quite recently, there was a discussion at an LASI event in Denmark on that topic.

There are good arguments for more transparency in developing and delivering learning analytics products. Presumably, teachers can derive better-informed interventions from visualisations of learning data when they understand what goes in, how it is weighted and processed, and what comes out.
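To make the point concrete, here is a toy sketch (invented for this post, not taken from any real product) of what such a transparent analytic could look like: a risk score whose input signals and weights are fully open to inspection, so a teacher can trace exactly what goes in and how it is weighted. All feature names and weight values are assumptions for illustration.

```python
# Toy, invented example of a "transparent" learning analytics score:
# both the input signals and their weights are open to inspection,
# so a teacher can see exactly what drives a student's flag.

WEIGHTS = {                    # invented weights, openly documented
    "logins_per_week": -0.4,   # more logins lower the risk score
    "missed_deadlines": 0.9,   # missed deadlines raise it
    "forum_posts": -0.2,       # forum activity lowers it slightly
}

def risk_score(student: dict) -> float:
    """Weighted sum of observable activity signals; higher = more at risk."""
    return sum(WEIGHTS[k] * student.get(k, 0) for k in WEIGHTS)

student = {"logins_per_week": 2, "missed_deadlines": 3, "forum_posts": 1}
print(risk_score(student))  # -> 1.7; every contributing term is traceable
```

Whether vendors would ever expose their models at this level of detail is, of course, exactly what the debate is about.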

However, the discussion also moved very much in the direction of “personalised learning analytics”, with questions like “at what point of comprehension educators are happy to trust products”, or whether trust might be achieved if “the analytics system demonstrates to your satisfaction that it is attending to the same signals that you value”. It goes on to challenge vendors (and researchers) that “information should be available and understandable to different kinds of learning scientists and learning professionals”. Ulla Lunde Ringtved asks: “do we need a kind of product declaration and standardization rules to secure user knowledge about their systems?”

I think this goes too far, without much hope of success and without much value to end users. After all, we are talking about “products”, i.e. ready-made things. Vendors would not and could not deliver out-of-the-box, build-your-own, tweak-the-data, customise-the-algorithmic-process learning analytics tools. And data consumers would not want them! Teachers and students are surrounded by black boxes of all kinds, including Google, Blackboard and other VLEs, Facebook, etc. There is evidence that lack of transparency has no correlation with trust. In our lives, we don't understand most of the tools we use: the digital camera, the electronic alarm clock, and so forth. And we don't have to – as long as they work!


Here is an interesting summary of the challenges of organisational IT architectures. While in previous (now almost prehistoric) architectures the so-called Managed Learning Environment (MLE) was built with the intent of an all-integrated, single sign-on systems architecture, nowadays Shadow IT services are booming. Many learning services are run in the Cloud, including the very powerful Microsoft Office 365 or mail servers. On the one hand, this external hosting is handy, as it saves a lot of internal manpower and improves the security of individual services (spam control, virus checks, etc.). On the other hand, as the article rightly points out, it surrenders control over these services and often bypasses the IT professionals. It can also lead to an accumulation of costs across various departments where centrally managed (Cloud) services could be cheaper.


Ha! Finally, a study that confirms what was general knowledge anyhow – among non-decision-makers! Student evaluations of their teachers have no correlation with their learning. Who would have guessed?! Filling in a couple of questions at the end of term indicates, if anything, popularity at most, not quality or progress. Male teachers seem to fare better overall, confirming a gender bias.

I particularly like this part:

“The entire notion that we could measure professors’ teaching effectiveness by simple ways such as asking students to answer a few questions about their perceptions of their course experiences, instructors’ knowledge and the like seems unrealistic given well-established findings from cognitive sciences such as strong associations between learning and individual differences including prior knowledge, intelligence, motivation and interest. Individual differences in knowledge and intelligence are likely to influence how much students learn in the same course taught by the same professor.”

and this:

“Currently, there is massive production of unnecessary, misleading, and conflicted systematic reviews and meta-analyses. … these instruments often serve mostly as easily produced publishable units or marketing tools.”

I would add that it's a miserable waste of valuable staff and student time, and that it creates an anxiety that undermines learning.


There is much uncertainty about ethics and privacy in learning analytics, which hampers wider adoption. In a recent article for LAK16, Hendrik Drachsler and I tried to show ways in which trusted learning analytics can be established in compliance with existing legislation and the European Union's forthcoming General Data Protection Regulation (GDPR), which will come into force in 2018. In short, four things need to be established:

  • Transparency about the purpose: Make it clear to users what the purpose of data collection is and who will be able to access the data. Let users know that data collection is limited to what is needed to fulfil the intended purpose effectively.
  • Informed consent: Get users to agree to data collection and processing by telling them what data you are collecting and how long it will be stored, and provide reassurance that none of the data will be opened up for re-purposing or use by third parties. Under the GDPR, approval can be revoked, and the data of individual users must then be deleted from the store – this is called “the right to be forgotten”.
  • Anonymise: Replace any identifiable personal information so that the individual cannot be re-identified. In collective settings, data can be aggregated to generate abstract metadata models (see the sketch after this list).
  • Data security: Store data, ideally encrypted, in a (physically) safe server environment. Monitor regularly who has access to the data.
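As a concrete illustration of the anonymisation point, here is a minimal Python sketch, with invented field names and data, of how direct identifiers might be replaced by salted one-way hashes and how records can then be aggregated so that individuals are no longer retrievable. It is a sketch of the idea under those assumptions, not a prescribed implementation.

```python
import hashlib
import secrets
from collections import defaultdict

# Invented event records; all field names and values are illustrative.
events = [
    {"user_id": "alice@example.org", "course": "maths101", "score": 72},
    {"user_id": "bob@example.org", "course": "maths101", "score": 58},
    {"user_id": "alice@example.org", "course": "hist202", "score": 81},
]

# A random, secret salt so the hashes cannot be reversed by simply
# hashing a list of known e-mail addresses.
SALT = secrets.token_hex(16)

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + identifier).encode()).hexdigest()[:12]

# Step 1: strip direct identifiers from every record.
anonymised = [{**e, "user_id": pseudonymise(e["user_id"])} for e in events]

# Step 2: aggregate to course level, so no individual is retrievable.
totals = defaultdict(lambda: {"n": 0, "sum": 0})
for e in anonymised:
    totals[e["course"]]["n"] += 1
    totals[e["course"]]["sum"] += e["score"]

for course, t in totals.items():
    print(f"{course}: mean score {t['sum'] / t['n']:.1f} over {t['n']} students")
```

Note that salted hashing is, strictly speaking, pseudonymisation: under the GDPR the result may still count as personal data, which is why the aggregation step matters wherever true anonymity is required.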

Personalisation is often hailed as a remedy for the “one-size-fits-all” teaching approach. The idea of personalised learning is tightly connected to technology, because it is generally accepted that human resources are limited and cannot scale to a one-on-one teaching ratio. Of course, the semantics of technology-enabled personalisation differ completely from human-to-human personal interaction. In technical terms, it translates into behavioural adaptation to facilitate human-computer interaction (such as adhering to technical interoperability standards) or computer-driven decision-making (as in “smart” or “intelligent” tools). While this perhaps has its merits in terms of learning efficiency, it is a galaxy apart from human personalisation, which is based on things like boundary negotiations, respect, or interpersonal “chemistry”. It remains to be seen how the idea of “personalisation” can develop without sacrificing human flexibility and societal congruence. Here are four oft-encountered myths around personalisation:

(1) Personalisation is scalable

It is difficult to believe that technology can somehow serve the individual better than a human teacher. Yes, it can serve more people at the same time, but this doesn't necessarily suit all people on a personal level. A case in point is MOOCs: large (massive) participation numbers, served by technology dishing out educational resources. Do the learners feel personally catered for? Probably not, as the high drop-out rates suggest, or the recent introduction of “flesh-and-blood teachers” by MIT. MOOCs may be scalable, but apart from allowing time/space/pace flexibility they are not a good example of personalisation. More generally, we can question whether industrialised personalisation – the mass production of individual learning – will ever work.

(2) Personalisation makes better learners

Learning isn't driven by intrinsic virtues alone. One of the key learning theories, Vygotsky's zone of proximal development, argues strongly that humans can excel with the help of others. It's pushing the boundaries that makes them better learners. Personalisation in the sense of letting everyone learn what they would naturally and intrinsically learn has been tried in schooling experiments for quite some time, with rather poor results. Some good things, like serendipitous learning, only happen if there are external stimuli. Corporate knowledge and services, too, could not be upheld if learning were completely individualised. Furthermore, personalised learning doesn't normally include “learning to learn” components.

Putting the individual in the foreground may be a nice line to present in technology-enhanced learning, but it often misses the socialisation aspects of learning that are required for forming a coherently educated democratic society. Human interaction with computer agents will not lead to better citizens, since it neglects this aspect of socialisation (not to be confused with social, as in “social networks”). Socialisation involves the development of competences such as tolerance, respect, politeness, agreement, group behaviour, team spirit, etc. Computer agents, on the other hand, are driven by mediocrity, algorithms and rules that are non-negotiable. You cannot argue with an “intelligent” machine about how to come to a suitable compromise.

(3) Personalisation makes society better and more equal

Personalising the experience of individual learners does not make learning more relevant to them. As we see in many instances, like personalised search engines, it leads to more isolation instead of more congruence with others. This leads away from the commons and the common good. It is comparable to mass-producing Randian heroes of selfish desire, hence I cannot see a benefit for society or for equal opportunities.

(4) Abolishing marks makes learning more personal

Learning without pressure and comparison is a noble idea, but it contradicts human nature. We are social animals and live by interacting with, and counteracting, other parts of our environment. Game theory tells us that competing with others, against time, or even with ourselves is hard-coded into the oldest parts of our brains. We humans need position. We need to know how we compare to others – and others, too, need to know how we compare. Taking school grades away will not make learning more personal in the sense of being more self-directed and left to one's own devices. External pressure is sometimes needed to grow into a challenge.

Even if technical support for personal learning needs did work, we have to ask where it might lead us. Our societies are based on commonly agreed educational standards, such as levels or qualifications reached, or the grading system. This is not to defend these structures, but if we abolish or change them, something else has to take their place. Society needs a standardised educational currency to distinguish expertise from pretence. Competence levels and badges are alternative approaches, welcome in their concept, reach and effect, but they are yet another standardised educational structure.
