Personalisation is often hailed as a remedy for the “one-size-fits-all” teaching approach. The idea of personalised learning is tightly connected to technology, because it is generally accepted that human resources are limited and cannot scale to a one-to-one teaching ratio. Of course, the semantics of technology-enabled personalisation differ completely from human-to-human personal interaction. In technical terms, it translates into behaviour adaptation to facilitate human-computer interaction (such as adhering to technical interoperability standards) or computer-driven decision-making (as in “smart” or “intelligent” tools). While this perhaps has its merits in terms of learning efficiency, it is a galaxy apart from human personalisation, which is based on things like boundary negotiations, respect, or interpersonal “chemistry”. It remains to be seen how the idea of “personalisation” can develop without sacrificing human flexibility and societal congruence. Here are four oft-encountered myths around personalisation:

(1) Personalisation is scalable

It is difficult to believe that technology can somehow serve the individual better than a human teacher. Yes, it can serve more people at the same time, but this doesn’t necessarily suit every person on a personal level. A case in point are MOOCs: large (massive) participation numbers, served by technology dishing out educational resources. Do the learners feel personally attended to? Probably not, as the high drop-out rates, or MIT’s recent introduction of “flesh-and-blood teachers”, would suggest. MOOCs may be scalable, but apart from allowing time/space/pace flexibility they are not a good example of personalisation. More generally, we can question whether industrialised personalisation, the mass production of individual learning, will ever work.

(2) Personalisation makes better learners

Learning isn’t driven by intrinsic virtues alone. One of the key learning theories, Vygotsky’s zone of proximal development, argues strongly that humans excel with the help of others. It is pushing the boundaries that makes them better learners. Personalisation in the sense of letting everyone learn what they would naturally and intrinsically learn has been tried in schooling experiments for quite some time, with rather poor results. Some good things, like serendipitous learning, only happen when there are external stimuli. Corporate knowledge and services, too, could not be upheld if learning were completely individualised. Furthermore, personalised learning doesn’t normally include “learning to learn” components.

Putting the individual in the foreground may be a nice line to present in technology-enhanced learning, but it often misses the socialisation aspects of learning that are required for forming a coherently educated democratic society. Human interaction with computer agents will not produce better citizens, since it neglects that aspect of socialisation (not to be confused with social, as in “social networks”). Socialisation involves the development of competences such as tolerance, respect, politeness, agreement, group behaviour, team spirit, etc. Computer agents, on the other hand, are driven by mediocrity: algorithms and rules that are non-negotiable. You cannot argue with an “intelligent” machine about how to come to a suitable compromise.

(3) Personalisation makes society better and more equal

Personalising the experience of individual learners does not make learning more relevant to them. As we see in many instances, like personalised search engines, it leads to more isolation rather than more congruence with others. This leads away from the commons and the common good. It is comparable to mass-producing Randian heroes of selfish desire, so I cannot see a benefit for society or for equal opportunities.

(4) Abolishing marks makes learning more personal

Learning without pressure and comparison is a noble idea, but it contradicts human nature. We are social animals and live by interacting and counteracting with other parts of our environment. Game theory tells us that competing with others, against time, or even with ourselves is genetically hard-coded into the oldest parts of our brains. We humans need position. We need to know how we compare to others, and others, too, need to know how we compare. Taking school grades away will not make learning more personal in the sense of being more self-directed and left to one’s own devices. External pressure is sometimes needed to grow into a challenge.

Even if technical support for personal learning needs did work, we have to ask where this might lead us. Our societies are based on commonly agreed educational standards, such as levels or qualifications reached, or the grading system. This is not to defend these structures, but if we abolish or change them, something else would have to take their place. Society needs a standardised educational currency to distinguish expertise from pretence. Competence levels and badges are alternative approaches, welcome in their concept, reach, and effect, but they are yet another educational standard structure.


This is an interesting thought: Tore Hoel and Weiqin Chen, in their paper for the International Conference on Computers in Education (ICCE 2016), suggest that the forthcoming European data protection regulation (GDPR), which is to be legally implemented in all member states by 2018, may actually drive pedagogy!

As unlikely as this may sound, I think they have a point. The core of the GDPR is about data minimisation and use limitation. This restricts data collection to specified purposes and prevents re-purposing. It puts a bar on the random collection of users’ digital footprints and on sharing (selling) them for other, not clearly declared, purposes. This restriction to minimisation and specific use will in turn (perhaps) lead to more focus on the core selling point, i.e. the pedagogic application of analytics.
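
To make the purpose-limitation idea concrete, here is a minimal sketch, with entirely invented purposes and field names (not taken from any real LA system), of what a collection gate restricted to declared purposes could look like:

```python
# Hypothetical sketch of GDPR-style purpose limitation and data minimisation.
# All purposes and field names are invented for illustration.
DECLARED = {
    "progress_feedback": {"user_id", "quiz_score"},
    "course_improvement": {"course_id", "completion_rate"},
}

class PurposeError(Exception):
    pass

def collect(datapoint, purpose):
    """Accept a data point only for a purpose declared up front,
    keeping only the fields needed for that purpose (minimisation)."""
    if purpose not in DECLARED:
        raise PurposeError(f"collection for undeclared purpose: {purpose!r}")
    kept = {k: v for k, v in datapoint.items() if k in DECLARED[purpose]}
    return {"purpose": purpose, "data": kept}

def reuse(record, new_purpose):
    """Refuse re-purposing: stored data may only serve its original purpose."""
    if record["purpose"] != new_purpose:
        raise PurposeError(
            f"re-purposing blocked: collected for {record['purpose']!r}, "
            f"not {new_purpose!r}")
    return record

# The IP address is dropped at collection time; ad targeting was never
# declared, so reuse would fail loudly.
rec = collect({"user_id": 7, "quiz_score": 0.9, "ip": "10.0.0.1"},
              "progress_feedback")
# reuse(rec, "ad_targeting")  # would raise PurposeError
```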

I have previously articulated my concerns that most institutions intending to use LA applications will have to rely on third parties, where, at present, it isn’t obvious that they comply with the Privacy by Design and by Default principles as demanded. In addition to making their case to educational customers about protecting the data of learners and teachers, these providers are now under more pressure to offer tools and services that actually improve learning, not just revenue from advertising or data sharing. So, yes, I am optimistic that Tore and Weiqin are right in saying that this presents “an opportunity to make the design of data protection features in LA systems more driven by pedagogical considerations”!


I am not sure whether the worrying developments in HE play into the hands of those advocating disruptive change or the idea of abolishing the HE system altogether. As you can read below, I am not one of them, as I believe education to be in the common public interest and a matter for society (i.e. the state), not for profit sharks. Still, I note a cumulative deterioration of system components, driven by the implementation of commercial models in HE institutions.

Direct competition between institutions was introduced decades ago, leading to established market thinking, business cases, and student “customers”. More recently, however, the university system has developed into a luxury brand for those who can afford it. The state slowly withdrew from the scene through severe cuts and austerity measures and, on the student side, through dramatically rising fees and costs, with less and less support from the government.

At the same time, the government eyed private providers, so-called “challenger” institutions, to compete with the public sector (and perhaps later replace it). According to HESA, very little is known about these private providers, which leads to a messy market with bogus degree-awarding entities. Some 220 such unauthorised providers were identified over the last five years, 80% of which have been closed. This means the cost of policing the sector must have exploded too. Judging by the tremendous “success” rail privatisation had for its customers, it is foreseeable that HE will go down a similar path, only with an even more dramatic knock-on effect on the labour market.

If someone now shrugs their shoulders and says “so what”, I can briefly summarise what we have lost in these and similar developments: gone are studies free for all (in previous days universities were open to everyone!), gone are maintenance grants and good earnings for post-grads – this spells the end of the widening access agenda and the equal opportunities policy. Long gone, of course, are the days of humanitarian non-profit studies like philosophy, numismatics, Ancient Greek, etc., since departments that could not generate money to make up for the loss in government finances were closed.

The question for the future is whether the reductionist approach to higher education, which will inevitably lead to smaller numbers of academics (and institutions), will in fact lead to a rise in the value of pre-university qualifications like A-levels and apprenticeships.


If, like me, you are on several scholarly social networks simultaneously, you have probably asked yourself the same question: why do my analytics diverge so greatly between these platforms?

I have one article that has been cited 208 times on Google Scholar, 106 times on ResearchGate, and only 7 times on Academia.edu. Another more recent one shows a different distribution with only 1 citation on Google Scholar, 0 on ResearchGate, but 17 on Academia.edu.

[Screenshot: citation counts for the two articles across the three platforms]

There are several possible reasons for this. Firstly, although I cannot exactly remember, I might have uploaded the papers to the different platforms at different times, so there may be a time-lag issue involved. Secondly, my social networks (following and followers) vary across these platforms, despite a large overlap. Thirdly, the platform audiences might differ, as some platforms may be more prominent in one country or language than in another. Fourthly, however, and this is my point in writing this post, the analytics involved in each tool vary and send out different messages, as I have contemplated in this other post.

The remaining question is what to do with this “information”. Shall I add all the sums together? Or are they counting the same citations in every tool? Should I boost my profile on the underrepresented platform by filling in more of my personal data and metadata interests, or by following even more people? Shall I perhaps start a marketing campaign by spamming people with e-mail links to my articles on platform X? As long as I do not know how the figures are compiled, the system remains a biased black box whose output I can only take at face value. It may even use my figures for some purpose other than merely telling me how popular a scholar I am. Let’s not forget that the providers are in fact competitors in another world, so when they reassure me that I get more citations on their platform than on the others, they are doing themselves a favour. I myself can only decide which of the figures and platforms I trust and which I don’t.
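
To illustrate the double-counting worry with made-up numbers: if each platform exposed the set of citing papers (which none of them does in a comparable way), we could see why simply adding the totals overstates the count:

```python
# Made-up citing-paper identifiers for one article on three platforms.
google_scholar = {"p1", "p2", "p3", "p4", "p5"}
researchgate   = {"p1", "p2", "p3"}
academia_edu   = {"p1", "p6"}

naive_sum  = len(google_scholar) + len(researchgate) + len(academia_edu)
true_count = len(google_scholar | researchgate | academia_edu)  # set union

print(naive_sum)   # 10 -- adding the platform totals double-counts
print(true_count)  # 6  -- each citing paper counted once
```

Without access to the underlying citing-paper lists, of course, none of us can perform this deduplication, which is precisely the black-box problem.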


If you build them yourself, learning analytics tools can do what you expect them to. That is the idealised scenario in the learning analytics community: to get valuable insight and foresight from your learners’ data, one should start with a proper learning analytics design! This includes what data will be collected, how the data will be cleaned, what the relevant indicators and weightings are, and how the data will be processed using appropriate and tested algorithms.
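
As a toy illustration of those design steps, here is a minimal sketch; the indicators, weightings, and normalisation caps are invented examples, not a recommended model:

```python
import statistics

# Collection step: invented event data standing in for "what data is collected".
raw_events = [
    {"user": "s1", "logins": 12, "forum_posts": 3, "quiz_avg": 0.80},
    {"user": "s2", "logins": None, "forum_posts": 0, "quiz_avg": 0.40},
]

def clean(events):
    # Cleaning step: handle missing values by an explicit, documented rule.
    return [{**e, "logins": e["logins"] if e["logins"] is not None else 0}
            for e in events]

# Indicator weightings: assumed values for illustration only.
WEIGHTS = {"logins": 0.2, "forum_posts": 0.3, "quiz_avg": 0.5}

def engagement(e):
    # Processing step: normalise each indicator to [0, 1], then weight it.
    return (WEIGHTS["logins"] * min(e["logins"] / 20, 1.0)
            + WEIGHTS["forum_posts"] * min(e["forum_posts"] / 10, 1.0)
            + WEIGHTS["quiz_avg"] * e["quiz_avg"])

scores = [round(engagement(e), 2) for e in clean(raw_events)]
print(scores, round(statistics.mean(scores), 2))
```

The point of writing it yourself is that every choice above, the missing-value rule, the weights, the caps, is visible and contestable.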


However, more often than not, learning analytics is conducted via third-party tools, such as VLE platforms, the Twitter, YouTube, or Facebook APIs, or separately sold tools. These tools are opaque and sometimes subject to changes outside the control, or even the visibility, of the user. Using the built-in analytics tools of third-party software requires caution in interpretation, for the algorithms may be biased towards some purpose other than achieving a better understanding of learner behaviours.

Naturally, we cannot assume that every institution will build its own well-designed learning analytics environment, and even if it did, modern networked learning using cloud-based services will always limit its scope. I do think, however, that transparency of the underlying engines is important, and that, just as with terms of service, notification of changes to the algorithms would give a more transparent experience and thus higher validity to learning analytics.
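
One conceivable way to operationalise such notifications, sketched here with invented configuration values, would be to publish a fingerprint of the scoring configuration and alert users whenever it changes:

```python
import hashlib
import json

def fingerprint(config):
    """A short, stable digest of an analytics configuration."""
    blob = json.dumps(config, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()[:12]

# Invented example configs: the version the user last agreed to vs. today's.
agreed  = fingerprint({"weights": {"quiz_avg": 0.5, "logins": 0.2}, "model": "v1"})
current = fingerprint({"weights": {"quiz_avg": 0.6, "logins": 0.2}, "model": "v2"})

if current != agreed:
    print("Notice: the analytics algorithm changed since you last reviewed it.")
```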



There is too much information in the Information Society! Un-vetted information, that is. The ready availability of information leads to circular confirmation of misinformation or misinterpretation of so-called “facts”. There are a number of indicators of this situation:

  • information overload: people exposed to too many news sources suffer from the anxiety of (a) missing something (as in a Facebook news stream), (b) trusting the source, and (c) trusting their own capability to evaluate information and sift out misinformation. It is connected to the paradox of choice.
  • news loops: news publishers, especially on the internet, are challenged to provide up-to-the-minute news, which leads them to neglect their own analysis and research and instead copy-paste from press agencies. This is why the news in all news outlets is 80-90% identical – including their “own” opinion. Or have you not wondered why some geographic areas suddenly disappear from all news channels? It is news going round in circles. China’s regulator even went so far as to decree the verification of news stories.
  • social media: up-to-the-minute information from news publishers nowadays references, and takes for true, postings on social media channels like Twitter or Facebook. The assumption seems to be that if many people (only those connected to Twitter and Facebook) express a strong feeling about something, then this must be a valid quantitative measure of satisfaction on political and other issues. However, as the run-up to the Brexit vote showed, manipulation and propaganda on social media are on the increase.

This kind of information society does not lead to more self-determination by individuals, nor does it empower the powerless. It is steering rapidly towards a 1984 scenario in which people are no longer able to distinguish truth from make-believe.


This is sad news: an e-mail notification from the Santa Fe Institute reached me this week, saying:

“Complexity Explorer has been supported by a grant to the Santa Fe Institute from the John Templeton Foundation.  This funding is nearing its end, and in order to continue supporting our online education program, we will be changing how a few things run on the site.  Until now all of our courses have been completely open and free.  To offer you these courses, maintain the website, add new functionality, and create new courses and tutorials, we need to raise a quarter of a million dollars a year.  The Santa Fe Institute is committed to bringing complexity education to the world through the Complexity Explorer, but grant funding and donations alone cannot sustain us indefinitely.  We have a number of different funding avenues we are pursuing, one of which will be modeled with our Introduction to Agent-based modeling course. You may have noticed a lock on the course logo.  This lock is an indicator that the course session will be a paid session”

I very much liked their course! It was a true MOOC – open for anyone and everyone, free, and of high quality. I regret the circumstances that led them to start charging, at least for some parts. But it doesn’t come as much of a surprise that free open courses aren’t free for those who offer them. We have seen this many times before with MIT’s OpenCourseWare or the Open University’s OpenLearn – both heavily funded by foundations with lots of money. In my previous university, we had to abandon the “good cause” in 2007 due to the adverse economic conditions associated with free open course provision.


I am always wary when it comes to hyping a new technology. As the recent LAK16 global conference hinted, Learning Analytics may just have reached the peak of the Gartner hype cycle.

[Figure: the Gartner hype cycle]

Sure, Learning Analytics promises to create new insights into learning and a new basis for learner self-reflection and support services. But it is dangerous to expect it to produce “the truth about learning”! A forthcoming paper I recently reviewed covers the promising influence LA has on the Learning Sciences and rightly demands that more learning theories be put at the basis of LA; but, as Paul Kirschner expressed in his keynote presentation, there are many types of learning, and in LA research and development they are often simplified and generalised.

To ground our expectations correctly in some sort of reality, we only need to look at areas where data analysis and prediction have long been used to “tell the truth” and to predict the future in order to take appropriate measures: politics, economics, and the weather forecast. Free of human unpredictability, the weather forecast has become the most accurate of the data-heavy sciences; yet even there, long-term predictions still carry a strong element of randomness and guesswork. Do we want to risk students’ futures on 75% probabilities?

Even where there is higher accuracy, the question of algorithmic accountability may be raised. Who will be held responsible, and how can anyone make a claim against a failed prediction? This risk isn’t as present in the commercial world, where an inaccurate shopping suggestion from targeted advertising can simply be ignored, but in education careers are at stake. From a managerial perspective, while it is scientifically fabulous to have 75-80% accuracy in predicting highly specific drop-out scenarios, there is a cost-benefit issue attached. Simply proposing that system alerts should draw teachers’ attention to particular students, whom student support services then need to call up (which students may like about as much as a phone call from the bank selling new services), doesn’t cut it. As a cheaper alternative, I sarcastically suggested using a random algorithm to pick a student to receive special attention that week.
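
A back-of-the-envelope calculation makes the cost-benefit issue tangible. All figures below (cohort size, base rate, costs) are assumptions for the sake of the argument, but the shape of the result is robust: even at 80% accuracy, most alerts point at students who were never at risk.

```python
# All figures are assumptions for the sake of the argument.
cohort      = 1000   # students on the course
base_rate   = 0.10   # share actually at risk of dropping out
sensitivity = 0.80   # at-risk students the system correctly flags
specificity = 0.80   # safe students the system correctly leaves alone
call_cost   = 15     # assumed cost of one support call

at_risk      = cohort * base_rate
true_alerts  = at_risk * sensitivity                   # 80 students
false_alerts = (cohort - at_risk) * (1 - specificity)  # 180 students

total = true_alerts + false_alerts
print(f"{total:.0f} alerts, {false_alerts / total:.0%} of them false positives")
print(f"cost of calling every flagged student: {total * call_cost:.0f}")
```

With these assumed numbers, roughly 69% of the support calls would go to students who were never at risk, because the at-risk students are a small minority of the cohort in the first place.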

It is also worth contemplating to what extent predictions about learners’ success may become self-fulfilling prophecies. Learning Analytics predictions are typically based on a number of assumptions forming the “student model”. One big assumption is that of a stable teaching/learning environment. If everything runs linearly and “on rails”, then it is relatively easy to say that the learning train departing from station A will eventually reach station B. However, it is nowadays well recognised that learning is situated, and that human teachers didactically and psychologically influence the adaptivity of the learning environment. It would, to my mind, require much higher levels of intelligence for algorithms to achieve the same support as human teachers; but if they did, what would then become of our teachers? What would be the role of human teachers if LA and AI take over decision-making? What qualities would they need to possess, or would they simply become obsolete?
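
A toy example with invented numbers shows how the stable-environment assumption bites: a model fitted to last year’s linear trajectory keeps projecting that trend even after a teacher intervenes and changes the course.

```python
# Invented weekly scores: last year's cohort progressed linearly, "on rails".
weeks  = [1, 2, 3, 4]
scores = [40, 45, 50, 55]

# Fit score = a * week + b by ordinary least squares on last year's data.
n = len(weeks)
mw, ms = sum(weeks) / n, sum(scores) / n
a = (sum((w - mw) * (s - ms) for w, s in zip(weeks, scores))
     / sum((w - mw) ** 2 for w in weeks))
b = ms - a * mw

predicted_week8 = a * 8 + b  # the model projects the old trend: 75
actual_week8    = 85         # assumed outcome after a week-5 intervention

print(f"predicted {predicted_week8:.0f}, actual {actual_week8}: "
      "the model never saw the teacher's intervention")
```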

We cannot neglect the human social factor in other ways too: quantifying people inevitably installs a ranking system. While a leaderboard scheme based on LA data could, on the one hand, be a motivating tool for some students (as is the case in serious and other games), it could also lead to apathy in others when they realise they will never get to the top. The trouble is that people are being meta-tagged by analytics, and these labels are very difficult to change. They may also exercise a reverse influence on the learner, in that such labels become sticky parts of their personality or digital identity.

As so often with innovative approaches, hypes, and new technologies, the benefit of Learning Analytics may lie not in what the analytics actually do or how accurate they are, but in a somewhat unexpected “side-effect”: I see part of the promise of learning analytics in starting a discussion about how we take decisions.



It is one of those statistically proven facts that young people from more highly educated family backgrounds are more likely to enter higher education than their peers from less educated families. Having parents with a university degree gives students a greater chance of succeeding in HE themselves, perhaps reaching even higher levels. Such facts and figures have been used in international comparisons like the OECD’s Education at a Glance, but also in national strategies targeting the lower social classes in order to widen participation.

I would like to reflect on this so-called fact, though, because it assumes a very stable idea of what ‘family’ means. It mirrors and perpetuates the society of a generation before the sexual revolution of the late 1960s. Today, in an era where around half of official marriages break up and single parenthood has become more frequent than the traditional biological family, this assumption should at least be challenged. How temporary and patchwork parenthood actually influences educational participation and success is a question that has not yet reached the statisticians.

As the demography of students changes – 60% of students (in Austria) have some kind of job alongside their studies, and lifelong learning raises the average student age – I see many situations that influence HE participation more than pure ancestry.

Leaving the financial aspects aside, participation and success would need to be measured against compatibility with the people one actually lives with rather than with biological parents. Having a learner-friendly environment is critical for the decision to study in the first instance, but also for persisting over a longer period of time. While women may find it relatively easy to tell their friends that they are going to sign up for a course, men find it considerably more difficult to talk about such a move, especially in less educated environments. On the other hand, for women with less educated partners, low acceptance at home of a move into FE or HE can be a direct barrier, and many could be actively discouraged or prevented from studying. All in all, the parent factor, while still present in the figures, may be of lower importance for the current generation than the statistics suggest.


[Figure: a graph of taxonomic ranks]

I always have hesitations about putting people in boxes. Although well-intended to support participation, the widening access agenda for HE supported and promoted this type of thinking. In order to help underrepresented social groups, measures were taken to support women, migrants, the disabled, people from rural backgrounds or poorer neighbourhoods, etc. The remedies were aimed at these identified and defined social categories of deprived people. At the same time, this categorisation stigmatised entire social classes and helped discrimination stick along the lines of “box” values, through the inherent and inevitable generalisations “disabled people/women/black people/migrants are…”.

It is important to note that any person can pass through several deprived categories during the course of their studies: a student may start as a single young woman, then get married, then become a single parent with a part-time job, and so on. Of course, anyone breaking a leg while skiing becomes temporarily disabled. So the fit between people and categories isn’t necessarily generally applicable.

The flip side is that measures to improve the situation of one category of people may also benefit others: a wheelchair ramp can be used by mums pushing prams or elderly ladies with shopping trolleys.

There is, however, an alternative to people categories! Anti-categorisation starts not with the person, but with the context and situation any person can find themselves in. The “special needs” concept comes closer to this than the category “disabled”. Defining scenarios that require support measures of one sort or another goes a long way towards more personalised student support, and hence towards providing more adequate help to those who need it.
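
As a sketch of the difference between the two data models (all names and measures below are invented), support can be attached to the situations a person is currently in, rather than to fixed person categories:

```python
from dataclasses import dataclass, field

@dataclass
class Student:
    name: str
    situations: set = field(default_factory=set)  # current, not permanent

# Support is keyed to situations, not to person categories.
SUPPORT = {
    "caring_for_child":  ["flexible deadlines", "evening tutorials"],
    "temporary_injury":  ["lecture recordings", "step-free access"],
    "working_part_time": ["weekend labs"],
}

def measures(student):
    return sorted({m for s in student.situations for m in SUPPORT.get(s, [])})

anna = Student("Anna", {"caring_for_child", "working_part_time"})
print(measures(anna))

# Situations come and go; the record changes with circumstances, and no
# sticky label attaches to the person.
anna.situations.add("temporary_injury")      # e.g. a leg broken while skiing
anna.situations.discard("working_part_time")
print(measures(anna))
```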

