There have been persistent calls for transparency and algorithmic accountability in learning analytics. Quite recently, there was a discussion at an LASI event in Denmark on that topic.

There are good arguments for more transparency in developing and delivering learning analytics products. Presumably, teachers can derive better informed interventions from visualisations of learning data when they understand what goes in, how it is weighted and processed, and what comes out.

However, the discussion also moved very much in the direction of “personalised learning analytics”, with questions like “at what point of comprehension educators are happy to trust products”, or whether trust might be achieved if “the analytics system demonstrates to your satisfaction that it is attending to the same signals that you value”. It goes on to challenge vendors (and researchers), arguing that “information should be available and understandable to different kinds of learning scientists and learning professionals”. Ulla Lunde Ringtved asks: “do we need a kind of product declaration and standardization rules to secure user knowledge about their systems?”

I think this goes too far, with little hope of success and little value to end users. After all, we are talking about “products”, i.e. ready-made things. Vendors would not and could not deliver out-of-the-box, build-your-own, tweak-the-data, customise-the-algorithmic-process learning analytics tools. And data consumers would not want them! Teachers and students are surrounded by black boxes of all kinds: Google, Blackboard and other VLEs, Facebook, and so on. There is evidence that lack of transparency has no bearing on trust. In our lives we don’t understand most of the tools we use, from the digital camera to the electronic alarm clock. And we don’t have to, as long as they work.



Here is an interesting summary of the challenges of organisational IT architectures. While in previous (now almost prehistoric) architectures the so-called Managed Learning Environment (MLE) was built with the intent of a fully integrated, single sign-on architecture, nowadays Shadow IT services are booming. Many learning services run in the Cloud, including the very powerful Microsoft Office 365 or mail servers. On the one hand, this external hosting is handy, as it saves a lot of internal manpower and improves the security of individual services (spam control, virus checks, etc.). On the other, as the article rightly points out, it outsources control over these services and often bypasses the IT professionals. It can also lead to an accumulation of costs across departments where centrally managed (Cloud) services could be cheaper.

 



Ha! Finally, a study that confirms what was general knowledge anyhow among non-decision makers: student evaluations of their teachers have no correlation with their learning. Who would have guessed?! Filling in a couple of questions at the end of term indicates, if anything, popularity at most, not quality or progress. Male teachers seem to fare better overall, confirming a gender bias.

I particularly like this part:

“The entire notion that we could measure professors’ teaching effectiveness by simple ways such as asking students to answer a few questions about their perceptions of their course experiences, instructors’ knowledge and the like seems unrealistic given well-established findings from cognitive sciences such as strong associations between learning and individual differences including prior knowledge, intelligence, motivation and interest. Individual differences in knowledge and intelligence are likely to influence how much students learn in the same course taught by the same professor.”

and this:

“Currently, there is massive production of unnecessary, misleading, and conflicted systematic reviews and meta-analyses. … these instruments often serve mostly as easily produced publishable units or marketing tools.”

I would add that it is also a miserable waste of valuable staff and student time, and that it creates an anxiety that undermines learning.


There is much uncertainty about ethics and privacy in learning analytics which hampers wider adoption. In a recent article for LAK16, Hendrik Drachsler and I tried to show ways in which trusted learning analytics can be established compliant with existing legislation and the forthcoming General Data Protection Regulation (GDPR) by the European Commission, which will come into force in 2018. In short, four things need to be established:

  • Transparency about the purpose: Make it clear to users what the purpose of data collection is and who will be able to access the data. Let users know that collection is limited to what is needed to fulfil the intended purpose.
  • Informed consent: Get users to agree to data collection and processing by telling them what data you are collecting and for how long it will be stored, and provide reassurance that none of it will be re-purposed or passed to third parties. Under the GDPR, consent can be revoked, and the data of individual users must then be deleted from the store – this is called “the right to be forgotten”.
  • Anonymise: Remove or replace any identifiable personal information so that individuals cannot be re-identified. In collective settings, data can be aggregated to generate abstract metadata models.
  • Data security: Store data, ideally encrypted, in a (physically) secure server environment, and regularly monitor who has access to it.
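As a small illustration of the anonymisation point, one common technique is to replace user identifiers with salted one-way hashes before events enter the analytics store. A minimal sketch in Python, with invented field names and a placeholder salt; note that salted hashing is strictly pseudonymisation rather than full anonymisation under the GDPR, since whoever holds the salt could re-link the records:

```python
import hashlib

# Hypothetical salt; in practice a secret random value kept apart from the data store
SALT = b"replace-with-a-secret-random-salt"

def pseudonymise(user_id: str) -> str:
    """One-way salted hash so the analytics store holds no direct identifier."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()

def anonymise_event(event: dict) -> dict:
    """Drop direct identifiers and replace the user ID with its pseudonym."""
    cleaned = {k: v for k, v in event.items() if k not in {"user_id", "name", "email"}}
    cleaned["user_id"] = pseudonymise(event["user_id"])
    return cleaned

event = {"user_id": "s123", "name": "A. Student", "email": "a@uni.example", "clicks": 42}
safe = anonymise_event(event)
```

The behavioural data (here, the click count) survives for analysis, while the stored record can no longer be linked back to the student without the salt.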

Personalisation is often hailed as a remedy for the “one-size-fits-all” teaching approach. The idea of personalised learning is tightly connected to technology, because it is generally accepted that human resources are limited and not scalable to a one-on-one teaching ratio. Of course, the semantics of technology-enabled personalisation differ completely from human-to-human personal interactions. In technical terms, it translates into behaviour adaptation to facilitate human–computer interaction (such as adhering to technical interoperability standards) or computer-driven decision making (as in “smart” or “intelligent” tools). While this perhaps has its merits in terms of efficiency of learning, it is a galaxy apart from human personalisation, which is based on things like boundary negotiations, respect, or interpersonal “chemistry”. It remains to be seen how the idea of “personalisation” can develop without sacrificing human flexibility and societal congruence. Here are four oft-encountered myths around personalisation:

(1) Personalisation is scalable

It is difficult to believe that technology can somehow serve the individual better than a human teacher. Yes, it can serve more people at the same time, but that doesn’t mean it fits each of them on a personal level. A case in point are MOOCs: massive participation numbers, served by technology dishing out educational resources. Do the learners feel their experience is personalised? Probably not, as the high drop-out rates suggest, and as does the recent introduction of “flesh-and-blood teachers” by MIT. MOOCs may be scalable, but apart from allowing time/space/pace flexibility they are not a good example of personalisation. More generally, we can question whether industrialised personalisation – the mass production of individual learning – will ever work.

(2) Personalisation makes better learners

Learning isn’t driven by intrinsic virtues alone. One of the key learning theories, Vygotsky’s zone of proximal development, argues strongly that humans can excel with the help of others: it is pushing the boundaries that makes them better learners. Personalisation in the sense of letting everyone learn what they would naturally and intrinsically learn has been tried in schooling experiments for quite some time, with rather poor results. Some good things, like serendipitous learning, only happen when there are external stimuli. Corporate knowledge and services, too, could not be upheld if learning were completely individualised. Furthermore, personalised learning doesn’t normally include “learning to learn” components.

Putting the individual in the foreground may be a nice line to present in technology-enhanced learning, but it often misses the socialisation aspects of learning that are required for forming a coherently educated democratic society. Human interaction with computer agents will not produce better citizens, since it neglects this aspect of socialisation (not to be confused with “social” as in “social networks”). Socialisation involves developing competences such as tolerance, respect, politeness, agreement, group behaviour, team spirit, etc. Computer agents, on the other hand, are driven by mediocrity – by algorithms and rules that are non-negotiable. You cannot argue with an “intelligent” machine about how to reach a suitable compromise.

(3) Personalisation makes society better and more equal

Personalising the experience of individual learners does not make learning more relevant to them. As we see in many instances, like personalised search engines, it leads to more isolation rather than more congruence with others. This leads away from the commons and the common good. It is comparable to mass-producing Randian heroes of selfish desire, and hence I cannot see a benefit for society or for equal opportunities.

(4) Abolishing marks makes learning more personal

Learning without pressure and comparison is a noble idea, but it contradicts human nature. We are social animals, living in interaction with, and in opposition to, other parts of our environment. Game theory tells us that competing with others, against time, or even with ourselves is hard-wired into the oldest parts of our brains. We humans need position: we need to know how we compare to others, and others too need to know how we compare. Taking school grades away will not make learning more personal in the sense of being more self-directed and left to one’s own devices. External pressure is sometimes needed to grow into a challenge.

Even if technical support for personal learning needs did work, we have to ask where it might lead us. Our societies are based on commonly agreed educational standards, such as levels or qualifications reached, or the grading system. This is not to defend these structures, but if we abolish or change them, something else will have to take their place. Society needs a standardised educational currency to distinguish expertise from pretence. Competence levels and badges are alternative approaches – welcome in their concept, reach and effect, but yet another educational standard structure.



This is an interesting thought: Tore Hoel and Weiquin Chen, in their paper for the International Conference on Computers in Education (ICCE 2016), suggest that the forthcoming European data protection regulation (GDPR), which is to be legally implemented in all member states by 2018, may actually drive pedagogy!

As unlikely as this may sound, I think they have a point. The core of the GDPR is the minimisation of data and the limitation of use: data collection is restricted to specified purposes, and re-purposing is prevented. This puts a bar on randomly collecting users’ digital footprints and sharing (selling) them for other, not clearly declared, purposes. This restriction to minimisation and specific use will in turn (perhaps) lead to more focus on the core selling point, i.e. the pedagogic application of analytics.

I have previously articulated my concern that most institutions intending to use LA applications will have to rely on third parties, and at present it isn’t obvious that these comply with the Privacy by Design and by Default principles as demanded. In addition to making their case to educational customers about protecting the data of learners and teachers, vendors are now under more pressure to provide tools and services that actually improve learning, rather than revenue from advertising or data sharing. So, yes, I am optimistic that Tore and Weiquin are right in saying that this presents “an opportunity to make the design of data protection features in LA systems more driven by pedagogical considerations”!


I am not sure whether the worrying developments in HE play into the hands of those advocating disruptive change or the abolition of the HE system altogether. As you can read below, I am not one of them, as I believe education to be in the common public interest and a matter for society (i.e. the state), not for profit sharks. Still, I note a cumulative deterioration of system components, driven by the implementation of commercial models in HE institutions.

Direct competition between institutions was introduced decades ago, leading to established market thinking, business cases and student “customers”. More recently, however, the university system has developed into a luxury brand for those who can afford it. The state has slowly withdrawn from the scene through severe cuts and austerity, and, on the student side, through dramatically rising fees and costs – with less and less support from the government.

At the same time, the government has eyed private providers, so-called “challenger” institutions, to compete with the public sector (and perhaps later replace it). According to HESA, very little is known about these private providers, which leads to a messy market with bogus degree-awarding entities. Some 220 such unauthorised providers were identified over the last five years, 80% of which have been closed. The cost of policing the sector must have exploded accordingly. Judging by the tremendous “success” rail privatisation had for its customers, it is foreseeable that HE will go down a similar path, only with an even more dramatic knock-on effect on the labour market.

If someone now shrugs their shoulders and says “so what”, I can briefly summarise what we have lost in these and similar developments: gone are studies free for all (in previous days universities were open to everyone!), gone are maintenance grants and good earnings for postgrads – which spells the end of the widening-access agenda and the equal-opportunities policy. Long gone, of course, are the days of humanitarian non-profit subjects like philosophy, numismatics, Ancient Greek, etc., once departments that could not generate money to make up for the loss in government finances were closed.

The question for the future is whether this reductionist approach to higher education, which will inevitably lead to smaller numbers of academics (and institutions), will in fact raise the value of pre-university qualifications like A-levels and apprenticeships.


If, like me, you are on several scholarly social networks simultaneously, you have probably asked yourself the same question: why do my analytics diverge so greatly between these platforms?

I have one article that has been cited 208 times on Google Scholar, 106 times on ResearchGate, and only 7 times on Academia.edu. Another more recent one shows a different distribution with only 1 citation on Google Scholar, 0 on ResearchGate, but 17 on Academia.edu.


There are several possible reasons for this. Firstly, although I cannot remember exactly, I might have uploaded the papers to the different platforms at different times, so there may be a time lag involved. Secondly, my social networks (following and followers) vary between the platforms, despite a large overlap. Thirdly, the platform audiences might differ, as some are more prominent in one country or language than in another. Fourthly – and this is my point in writing this post – the analytics in each tool vary and send out different messages, as I have contemplated in this other post.

The remaining question is what to do with this “information”. Shall I add all the sums together? Or are they counting the same citations in every tool? Should I boost my profile on the under-represented platform by filling in more of my personal data and metadata interests, or by following even more people? Shall I perhaps start a marketing campaign by spamming people with e-mail links to my articles on platform X? As long as I do not know how the figures are compiled, each system remains a biased black box whose output I can only take at face value. It may even use my figures for some purpose other than merely telling me how popular a scholar I am. Let’s not forget that the providers are in fact competitors in another arena, so reassuring me that I get more citations on their platform than elsewhere does them a favour. All I can do is decide which of the figures and platforms I trust and which I don’t.
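The worry about adding the sums together can be made concrete: if the platforms report overlapping sets of citing papers, the counts must be merged by identifier rather than added. A toy Python sketch, with invented identifiers standing in for whatever each platform actually tracks:

```python
# Citing-paper identifiers as each platform might report them (invented examples)
google_scholar = {"doi:10.1/a", "doi:10.1/b", "doi:10.1/c"}
researchgate = {"doi:10.1/b", "doi:10.1/c"}
academia_edu = {"doi:10.1/c", "doi:10.1/d"}

# Adding the per-platform counts double-counts shared citations...
naive_total = len(google_scholar) + len(researchgate) + len(academia_edu)

# ...whereas a set union counts each citing paper exactly once
unique_total = len(google_scholar | researchgate | academia_edu)
```

In this toy case the naive sum gives 7 while only 4 distinct papers cite the article; without access to each platform's underlying lists, no such merge is possible, which is precisely the black-box problem.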


If you build them yourself, learning analytics tools can do what you expect them to. That is the idealised scenario in the learning analytics community: to get valuable insight and foresight from your learners’ data, one should start with a proper learning analytics design. This includes what data will be collected, how the data will be cleaned, what the relevant indicators and their weightings are, and how the data will be processed using appropriate and tested algorithms.
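To illustrate what “indicators and weightings” might look like in such a design, here is a minimal Python sketch. The indicator names and weights are invented for illustration; in a real design they would be documented and validated:

```python
# Hypothetical indicators, each normalised to the range 0..1, with explicit weights
WEIGHTS = {"logins": 0.2, "forum_posts": 0.3, "assignments_submitted": 0.5}

def engagement_score(indicators: dict) -> float:
    """Weighted sum of normalised indicators; the weighting is visible, not a black box."""
    return sum(WEIGHTS[name] * value for name, value in indicators.items())

score = engagement_score({"logins": 0.9, "forum_posts": 0.5, "assignments_submitted": 0.4})
```

The point is not the particular formula but that every ingredient of the score is inspectable, which is exactly what third-party tools typically withhold.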


However, more often than not, learning analytics is conducted via third-party tools, such as VLE platforms, the Twitter, YouTube or Facebook APIs, or separately sold products. These tools are opaque and sometimes subject to changes outside the control, or even the visibility, of the user. Built-in analytics from third-party software therefore require caution in interpretation, for the algorithms may be biased towards some purpose other than a better understanding of learner behaviour.

Naturally, we cannot assume that every institution will build its own well-designed learning analytics environment, and even if it did, modern networked learning using cloud-based services will always limit its scope. I do think, however, that transparency of the underlying engines is important, and that, just as with terms of service, notification of changes to the algorithms would give a more transparent experience and thus higher validity for learning analytics.

 



There is too much information in the Information Society – un-vetted information, that is. The ready availability of information leads to circular confirmation of misinformation, or misinterpretation of so-called “facts”. There are a number of indicators of this situation:

  • Information overload: people exposed to too many news sources suffer from anxiety about (a) missing something (as in a Facebook news stream), (b) trusting the source, and (c) trusting their own capability to evaluate information and sift out misinformation. This is connected to the paradox of choice.
  • News loops: news publishers, especially on the internet, are pressed to provide up-to-the-minute news, which leads them to neglect their own analysis and research and instead copy-paste from press agencies. This is why the news in all outlets is 80–90% identical – including their “own” opinion. Or have you never wondered why some geographic areas suddenly disappear from all news channels? It’s news going round in circles. China’s regulator even went so far as to decree the verification of news stories.
  • Social media: up-to-the-minute reporting nowadays references, and takes as true, postings on social media channels like Twitter or Facebook. The assumption seems to be that if many people (only those on Twitter and Facebook, that is) express a strong feeling about something, then this must be a valid quantitative measure of satisfaction on political and other issues. However, as the run-up to the Brexit vote showed, manipulation and propaganda on social media are on the increase.

This kind of information society does not lead to more self-determination by individuals, nor does it empower the powerless. It is steering rapidly towards a 1984 scenario in which people are no longer able to distinguish truth from make-believe.

