I am always wary when it comes to hyping a new technology. As the recent LAK16 global conference hinted, Learning Analytics may just have reached the peak of the Gartner hype cycle.

[Image: the Gartner hype cycle]

Sure, Learning Analytics promises new insights into learning and a new basis for learner self-reflection and support services. But it is dangerous to expect it to produce the “truth about learning”! A forthcoming paper I recently reviewed covers the promising influence LA has on the Learning Sciences and rightly demands that more learning theories underpin LA. But, as Paul Kirschner expressed in his keynote presentation, there are many types of learning, and LA research and development often simplifies and generalises them.

To ground our expectations in some sort of reality, we only need to look at areas where data analysis and prediction have long been used to “tell the truth” and to foresee the future so that appropriate measures can be taken: politics, economics, and the weather forecast. Free of human unpredictability, weather forecasting has become the most accurate of these data-heavy disciplines, yet even there the long-term predictions still carry a strong element of randomness and guesswork. Do we want to risk the future of students’ lives by basing decisions about them on 75% probabilities?

Even where accuracy is higher, the question of algorithmic accountability remains. Who will be held responsible, and how can anyone make a claim against a failed prediction? This risk is less present in the commercial world, where an inaccurate shopping suggestion in a targeted advertisement can simply be ignored, but in education careers are at stake. From a managerial perspective, while it is scientifically fabulous to achieve 75–80% accuracy in predicting highly specific drop-out scenarios, there is a cost-benefit issue attached. Simply proposing that system alerts should direct teachers’ attention to particular students, whom student support services then need to call up (something students may like about as much as a phone call from the bank selling new services), doesn’t cut it. As a cheaper alternative, I sarcastically suggested using a random algorithm to pick a student to receive special attention that week.
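
To make that cost-benefit point concrete, here is a minimal back-of-the-envelope sketch. Every number in it (the precision, the cost of a support call, the benefit of a retained student, the “save rate” of an intervention) is an illustrative assumption, not a figure from any study; the point is only that the economics of acting on alerts depend on all four, not on accuracy alone.

```python
# Hypothetical numbers throughout: precision, contact cost, benefit and
# save rate are illustrative assumptions, not figures from any study.

def expected_net_benefit(n_alerts, precision, cost_per_contact,
                         benefit_per_save, save_rate):
    """Expected value of contacting every student the system flags."""
    true_positives = n_alerts * precision        # alerts that are genuinely at risk
    students_saved = true_positives * save_rate  # of those, how many a call retains
    return students_saved * benefit_per_save - n_alerts * cost_per_contact

# A 75%-accurate alert vs. picking students at random (assumed 10% base rate):
print(expected_net_benefit(100, 0.75, cost_per_contact=20,
                           benefit_per_save=500, save_rate=0.3))  # 9250.0
print(expected_net_benefit(100, 0.10, cost_per_contact=20,
                           benefit_per_save=500, save_rate=0.3))  # -500.0
```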

It is also worth contemplating to what extent predictions about the success of learners may become self-fulfilling prophecies. Learning Analytics predictions are typically based on a number of assumptions forming the “student model”. One big assumption is that of a stable teaching/learning environment. If everything runs linearly and “on rails”, then it is relatively easy to say that the learning train departing from station A will eventually reach station B. However, it is nowadays well recognised that learning is situated, and that human teachers didactically and psychologically shape the adaptivity of the learning environment. It would, in my mind, require much higher levels of intelligence for algorithms to offer the same support as human teachers, but if they did, what would become of our teachers? What would be the role of human teachers if LA and AI take over decision making? What qualities would they need to possess, or would they simply be obsolete?

We cannot neglect the human social factor in other ways too: quantifying people inevitably installs a ranking system. A leaderboard scheme based on LA data could, on the one hand, be a motivating tool for some students (as is the case in serious and other games), but it could also lead to apathy in others once they realise they will never get to the top. The trouble is that analytics attaches labels to people, and these labels are very difficult to change. They may also exercise a reverse influence on the learner, in that such labels become sticky parts of their personality or digital identity.

As so often with innovative approaches, hypes and new technologies, the benefit of Learning Analytics may lie not in what the analytics actually do or how accurate they are, but in a somewhat unexpected “side-effect”. I see part of the promise of learning analytics in starting a discussion about how we take decisions.



It is one of those statistically proven facts that young people from highly educated family backgrounds are more likely to enter higher education than their peers from less educated families. Having parents with a university degree gives students a greater chance to succeed in HE themselves, perhaps even reaching higher levels. Such facts and figures have been used in international comparisons like the OECD’s Education at a Glance, but also in national strategies targeting lower social classes in order to widen participation.

I would like to reflect on this so-called fact, though, because it assumes a very stable idea of what ‘family’ means. It mirrors and perpetuates the society of a generation before the sexual revolution of the late 1960s. Today, in an era where around half of official marriages break up and single parenthood has become more frequent than the traditional biological family, this assumption should at least be challenged. How temporary and patchwork parenthood actually influences educational participation and success is a question that has not yet reached the statisticians.

As the demography of students changes, with 60% of students (in Austria) holding some kind of job besides their studies and lifelong learning raising the average student age, I see many circumstances that influence HE participation more than pure ancestry.

Leaving the financial aspects aside, participation and success would need to be measured against compatibility with whoever shares the learner’s home, rather than against biological parents. A learner-friendly home environment is critical for deciding to study in the first place, but also for persisting over a longer period of time. While women may find it relatively easy to tell their friends they are going to sign up for a course, men find it considerably more difficult to talk about such a move, especially in less educated environments. On the other hand, for women with less educated partners, lack of acceptance at home can be a direct barrier to entering FE or HE, and many may be actively discouraged or prevented from doing so. All in all, the parent factor, while still present in the figures, may matter less to the current generation than the statistics suggest.


[Image: taxonomic rank graph]

I always hesitate to put people in boxes. Although well intended, the widening access agenda for HE supported and promoted exactly this type of thinking. In order to help underrepresented social groups, measures were taken to support women, migrants, the disabled, people from rural backgrounds or poorer neighbourhoods, etc. The remedies were aimed at these identified and defined social categories of deprived people. At the same time, this categorisation stigmatised entire social groups and allowed discrimination to stick along the lines of the “boxes”, through the inherent and inevitable generalisations “disabled people/women/black people/migrants are…”.

It is important to note that any person can pass through several deprived categories during the course of their studies: a student may start as a single young woman, then get married, then become a single parent holding a part-time job, and so on… Of course, anyone breaking a leg while skiing becomes temporarily disabled. So the fit between people and categories isn’t necessarily stable or generally applicable.

The flip side is that measures to improve the situation of one category of people may also benefit others: a wheelchair ramp can be used by mums pushing prams or elderly ladies with shopping trolleys.

There is, however, an alternative to categories of people! Anti-categorisation starts not with the person but with the context and situation any person can find themselves in. The “special needs” concept comes closer to this than the category “disabled”. Defining scenarios that require support measures of one sort or another goes a long way towards more personalised student support, and hence towards providing more adequate help to those who need it.



This article may have serious ethical debates on its heels. Apparently, scientists have succeeded in boosting or erasing individual memories in mice. As always, we are told it is for our better future and for research into dementia and PTSD (post-traumatic stress disorder), quite likely also as a remedy against Alzheimer’s disease.

Looking slightly further ahead, I see additional potential in the entertainment industry, given the claim that it would also be possible to enhance pleasant memories!

But what would this mean for learning? Once we are able to erase or boost individual memories as it pleases [others], we effectively destroy the process of learning and knowledge acquisition. Imagine what this does to “critical thinking” and you will see the ethical nightmare arising from it. Since our identities are shaped by our experiences, good and bad ones, meddling with memories will change what and who we are. Brainwashing has always been the desire of regimes that want “simple” and obedient people to rule over.


[Image: car body assembly at the BMW Leipzig plant]

The EU has in it’s recent communications and funding programmes made it clear that creativity and entrepreneurship are critical competences that the education systems of the member states need to develop and focus on. It doesn’t confine itself to formal education but also to lifelong learning contexts and is embedded in several calls and vision papers by the Commission.

So why are these two seemingly opposite skill sets so important?

I see two main reasons for this emphasis. Firstly, entrepreneurial skills are the basis for self-employment, and this gets people off the unemployment records and brings the respective figures down, figures that have been growing steadily over the past decades, mainly due to the de-industrialisation of developed countries and the automation of the service sector, as I articulated in this post. Self-employed people do not show up in unemployment figures, so it is simply convenient to increase their numbers, however successful or not they are. Creativity, of course, is the driver of innovation, and the expectation is that creative entrepreneurs are more successful, hence boosting the economy and the labour market.

Secondly, creativity and entrepreneurial risk-taking are skills that machines, which can already do most other human tasks faster and cheaper, have not yet mastered. So these two domains (soon perhaps the last bastions of human superiority) cannot yet be automated to the extent that they replace people in the workplace. One is tempted to ask, though, how artificial intelligence will work its way into these two sectors.


I am (positively) surprised at the level of critical self-reflection happening at this year’s Learning Analytics conference (see the Twitter stream #LAK16). Even the keynotes by Mireille Hildebrandt and Paul Kirschner questioned the validity and acceptability of using big data in education and highlighted potential dangers. The audience shared these mixed feelings, with questions like: “Why should people (e.g. parents) sign up to this? What’s the promise?”

The two critical themes that emerged aren’t technical. They concern the ethical constraints on the use of personal data, and the validity and usefulness of analytics for learning. Both these “soft” issues are present in our design framework for LA. The ethical and privacy concerns, and how we might deal with them, are discussed in our LAK16 presentation and full paper.

I see this as part of a maturing process in the community. Being enthusiastic about LA is one thing; being aware of the pitfalls and limitations is another. After all, should it turn out to be a dead horse, there is no point in flogging it. On the other hand, if there are benefits that outweigh the counter-arguments, then by all means we need to have answers. Doing analytics just because we can is neither a purpose nor a justification.

For some time now, universities have been calling students “customers” and charging them ever-rising tuition fees. It seems this message has finally sunk in, turning the relationship between students and their institutions on its head: students are now beginning to see the payment of fees as a contract to obtain a qualification in exchange for money.
With the accelerating cost of study, students no longer silently accept whatever is given to them. The marketing machine of modern HE, promising excellent services and studies of the highest quality, is being scrutinised, and it carries the danger for HEIs of being challenged by unsatisfied customers who don’t feel they are receiving value for money. The consequences of this change in attitude can be seen, for example, in the case of a Swedish university college being sued by a US student whose course did not match the level of quality promised.
I had already noticed that mature students in particular were very wary of how they were serviced on a course. There was a genuine dislike of peer tutoring, peer assessment, flipped classrooms and other innovative models of teaching. They saw their payment as an entitlement to be taught by an “expert” teacher, not by fellow novices! Traditional lectures were what they felt they paid for, and it was quite difficult to change such expectations and to open them up to modern teaching and learning practices.
Following this research report, the THES summarises that “Universities are misleading prospective students by deploying selective data, flattering comparisons and even outright falsehoods in their undergraduate prospectuses”. The Guardian adds “that the prospectus belongs to the ‘tourist brochure genre’, but that young people don’t always realise that”.
Another possible legal battleground may involve implementations of learning analytics. It is quite conceivable that students may before long sue their university for not acting on the data and information the institution holds about them. Universities have a fiduciary duty towards students and their learning achievements. Improved learning information systems and data integration have the potential to ring alarm bells before a student drops out of a course; at least, that is the (sometimes exaggerated) expectation some learning analytics proponents hold. Failing customers may then claim that the institution knew about their potential failure but did not act on it.

There have been a number of recent setbacks in Learning Analytics implementations, among them the closure of the high-profile inBloom venture in the US. The cause is users’ increased wariness about their privacy. While most people enjoy the comfort of Amazon’s intelligent product recommendations or Facebook’s friend suggestions, they do care where their data goes and what happens with it.

Together with my friend and long-time colleague Hendrik Drachsler, I conducted a study into the fears and hesitations of learners and their guardians about Learning Analytics. We will present the findings at the LAK16 conference in Edinburgh later in April 2016, but our main finding is a genuine conflation of the commercial world with the academic world. Educational institutions have a much longer tradition of upholding research ethics and keeping data private. However, the indiscriminate collection of personal data, the selling of that data to third parties, and the repurposing of datasets – all of which happens outside user control! – by for-profit data giants such as Google, Facebook and Amazon cast a shadow on the mostly benevolent attempts of educational establishments, which see it as part of their fiduciary duty to provide intelligence gathered from learning data to students and teachers.

To tackle this issue, we are engaged in a quest for what we call “Trusted Learning Analytics”. This acknowledges that there can be no purely technical solution, nor should we rely on legal changes to “make things possible”. Our proposal for building trust in learning analytics relies mainly on openness, transparency, consent and user empowerment. As part of the LACE (Learning Analytics Community Exchange) project, we developed a guide called the DELICATE checklist – derived from a series of in-depth expert workshops – to help managers with the implementation of LA. You can find the reference to the full article below the image.

[Image: the DELICATE checklist to establish Trusted Learning Analytics]

The eight points are [It can be downloaded here LINK]:
1. D-etermination: Decide on the purpose of learning analytics for your institution.
2. E-xplain: Define the scope of data collection and usage.
3. L-egitimate: Explain how you operate within the legal frameworks; refer to the essential legislation.
4. I-nvolve: Talk to stakeholders and give assurances about the data distribution and use.
5. C-onsent: Seek consent through clear consent questions.
6. A-nonymise: De-identify individuals as much as possible (a small illustration follows the list).
7. T-echnical aspects: Monitor who has access to data, especially in areas with high staff turnover.
8. E-xternal partners: Make sure external partners provide the highest data security standards.
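
As a small illustration of the “A-nonymise” point, here is a minimal Python sketch of pseudonymising student identifiers with a keyed hash before records leave the institution. The record fields, the salt handling and the function name are my own illustrative assumptions, not part of the DELICATE checklist itself:

```python
# A minimal sketch of de-identification: replace student IDs with salted
# pseudonyms so the analytics dataset no longer names individuals.
# Field names and salt handling are illustrative assumptions.
import hashlib
import hmac

SECRET_SALT = b"keep-this-out-of-the-dataset"  # store separately, e.g. in a vault

def pseudonymise(student_id: str) -> str:
    """Deterministic keyed hash: the same student always maps to the same
    pseudonym, but the mapping cannot be reversed without the salt."""
    return hmac.new(SECRET_SALT, student_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"student_id": "s1234567", "resource": "week3-quiz", "score": 0.8}
record["student_id"] = pseudonymise(record["student_id"])
print(record)
```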

The DELICATE checklist shows ways to design and provide privacy-compliant Learning Analytics that can benefit all stakeholders, keeping control with the users themselves and within the established, trusted relationship between them and their institution. The core message is really simple: when you implement Learning Analytics, be open about it!


At the BETT show 2016 in London, three technologies caught my eye. They weren’t exactly new, but they had reached a level beyond pure experimentation:

  • learning analytics
  • beacons
  • microbits

The EU-funded LACE project (Learning Analytics Community Exchange) presented several times. I listened to the session at the secondary school podium, where Dutch and Swedish school networks talked about implementation in their school systems. Promising activities seem to be going on there, despite the privacy and ethical concerns held by some stakeholders.


Beacons are an indoor location technology that can prompt passers-by with helpful information. In commerce, for example, they tell people about nearby promotional offers; in museums, they can push notifications about an object to visitors. A nice touch is that beacons are weatherproof, so they can be used outdoors too. They use Bluetooth Low Energy, so they can last a long time on one charge.
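
For the technically curious: proximity to a beacon is typically estimated from received signal strength (RSSI). A rough sketch, assuming the textbook log-distance path-loss model (the calibration value and environmental factor below are illustrative defaults, not vendor specifications):

```python
# Rough beacon-proximity estimate from Bluetooth signal strength (RSSI),
# using the log-distance path-loss model. n=2.0 (free space) and the
# tx_power calibration value are illustrative assumptions.

def estimated_distance_m(rssi: float, tx_power: float = -59.0, n: float = 2.0) -> float:
    """tx_power is the calibrated RSSI at 1 m, broadcast by the beacon."""
    return 10 ** ((tx_power - rssi) / (10 * n))

# e.g. a reading of -75 dBm suggests the visitor is roughly 6 m away
print(round(estimated_distance_m(-75.0), 1))
```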

Microbits reminded me very much of the Raspberry Pi, which was also presented at another stand at BETT. They are small programmable boards with an LED display which can be linked together and to various controlling devices, e.g. mobile phones. An impressive display of things to do with them was on show, but the main purpose, we are told, is to teach kids programming skills. To me this is another indication that IT skills have become a core skill next to reading, writing and arithmetic – even at a very young age.
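
To give a flavour of what “teaching kids programming” looks like on these devices, here is the kind of first program children write in MicroPython on a micro:bit. The `display`, `button_a` and `Image` objects are part of the standard micro:bit API; the behaviour shown is just an illustrative choice:

```python
# Runs on a BBC micro:bit under MicroPython: scrolls a greeting on the
# LED display and shows a smiley while button A is held down.
from microbit import display, button_a, Image

while True:
    if button_a.is_pressed():
        display.show(Image.HAPPY)
    else:
        display.scroll("Hello!")
```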

All these bits and bobs are nice to play with and have been through interesting stages of experimentation, but now it is time to find pedagogic applications for them and achieve some actual learning.


I am reviewing papers for next year’s LAK16 conference in Edinburgh. Reading through the submissions, I realised just how much hype Learning Analytics currently enjoys in the educational technology and data community and beyond. While enthusiasts may consider this a positive push in an innovative direction, it is partly also played as a tactical game. What was previously a perfectly acceptable empirical study and educational experiment is now being re-labelled and sold as Learning Analytics. Of course, the two can overlap in practice and theory, but, at least in my mind, there are also some notable distinguishing characteristics.

I have seen this re-labelling happen many times before. My previous university offered so-called “master classes”, which were basically one-week online CPD courses. When the MOOC hype broke out, these courses quite instantly became MOOCs, and academics went around shouting “yes, we do MOOCs!”

So what are the differences between traditional empirical studies and Learning Analytics? Among the characteristics are (at least in my understanding) the following:

  • Big Data instead of small samples: we are talking about a vast pool of educational datasets, not one focused on a particular research question.
  • Repetition: Learning Analytics is applied repeatedly to the same (or a very similar) data pool and data subjects, not as a one-off action. LA gives continuous feedback.
  • Automatism and algorithms: automatic data collection paired with a processing formula that is (automatically) applied to the dataset, rather than manual analysis (a minimal sketch of this follows the list).
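
To illustrate the “automatic, repeated” character, here is a minimal Python sketch. The event-log format and the engagement metric are my own illustrative assumptions; the point is that the same computation runs on a schedule over a growing log, rather than once over a curated sample:

```python
# Minimal sketch of a repeated, automated LA computation, as opposed to a
# one-off empirical analysis. Event format and metric are illustrative.
from collections import defaultdict
from datetime import date

def weekly_engagement(events):
    """events: iterable of (student_id, day, action) tuples from a VLE log.
    Returns the number of distinct active days per student."""
    active_days = defaultdict(set)
    for student, day, _action in events:
        active_days[student].add(day)
    return {student: len(days) for student, days in active_days.items()}

events = [
    ("s1", date(2016, 3, 7), "view"),
    ("s1", date(2016, 3, 8), "post"),
    ("s2", date(2016, 3, 7), "view"),
]
# In an LA system this would run automatically every week and feed a
# dashboard; in a classic study it would be computed once, by hand.
print(weekly_engagement(events))
```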

I know these characteristics are “quick and dirty”, and perhaps neither comprehensive nor indisputable, but in order to focus the future Learning Analytics community on the quality of field-related research, it is necessary to clarify such basic parameters in addition to the by now well-established definitions of Learning Analytics (Siemens, Ferguson, and others).

