I am (positively) surprised at the level of critical self-reflection happening at this year’s Learning Analytics conference (see the Twitter stream #LAK16). Even the keynotes by Mireille Hildebrandt and Paul Kirschner questioned the validity and acceptability of using big data in education and highlighted potential dangers. The audience shared these mixed feelings too, asking questions like: “Why should people (e.g. parents) sign up to this? What’s the promise?”

The two critical themes that emerged aren’t technical. They concern the ethical constraints on the use of personal data, and the validity and usefulness of analytics for learning. Both of these “soft” issues are present in our design framework for LA. The ethical and privacy concerns, and how one might deal with them, are discussed in our LAK16 presentation and full paper.

I see this as part of a maturing process of the community. Being enthusiastic about LA is one thing; being aware of the pitfalls and limitations is another. After all, should it turn out to be a dead horse, there is no point in flogging it. On the other hand, if there are benefits that outweigh the counterarguments, then, by all means, we need to have answers. Doing analytics just because we can isn’t a purpose or a justification.

For some time now, universities have been calling students “customers” and charging them ever-rising tuition fees. It seems this message has finally sunk in and is turning the relationship between students and their institutions on its head: students are beginning to see payment of fees as a contract to obtain a qualification in exchange for money.

With the accelerating cost of study, students are no longer silently accepting whatever is given to them. The marketing machine of modern HE, promising excellent services and the highest quality of study, is being scrutinised, and it carries the danger for HEIs of being challenged by unsatisfied customers who don’t feel they are receiving value for money. The consequences of this change in attitude can be seen, for example, in the case of a Swedish university college being sued by a US student whose course did not match the level of quality promised.

I have noticed before that mature students in particular are very wary of how they are served on a course. There was a sincere dislike of peer tutoring, peer assessment, flipped classrooms and other innovative models of teaching. They saw their payment as an entitlement to be taught by an “expert” teacher, not by fellow novices! Lectures delivered from the front were what they felt they had paid for, and it was quite difficult to change such expectations and to open them up to modern teaching and learning practices.

Following this research report, the THES summarises that “Universities are misleading prospective students by deploying selective data, flattering comparisons and even outright falsehoods in their undergraduate prospectuses”. The Guardian adds that the prospectus belongs to the “tourist brochure genre”, but that young people don’t always realise that.

Another possible legal battleground may involve implementations of learning analytics. It is quite conceivable that, before long, students may sue their university for not acting on data and information the institution holds about them. Universities have a fiduciary duty towards students and their learning achievements. Improved learning information systems and data integration have the potential to ring alarm bells before a student drops out of a course. At least that is the (sometimes exaggerated) expectation some learning analytics proponents hold. Failing customers may then claim that the institution knew about their likely failure but did not act on it.
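
To make that expectation concrete: what such early-warning systems typically compute is often little more than a rule of thumb over activity data. Here is a minimal sketch in Python of what I mean – the field names and thresholds are entirely made up for illustration and are not taken from any actual institutional system.

```python
# Illustrative sketch only: a naive rule-based "early warning" flag of the kind
# learning analytics systems are expected to raise before a student drops out.
# Field names and thresholds are invented for the example.

def at_risk(student: dict) -> bool:
    """Flag a student as at risk if several simple warning signs co-occur."""
    signals = [
        student.get("logins_last_30_days", 0) < 3,      # hardly logs in any more
        student.get("assignments_missed", 0) >= 2,      # misses deadlines
        student.get("average_grade", 100) < 50,         # failing grades
    ]
    return sum(signals) >= 2                            # two or more signs -> alert

# Example record that would trigger an alert for a tutor to follow up on.
student = {"logins_last_30_days": 1, "assignments_missed": 3, "average_grade": 62}
print(at_risk(student))  # True
```

If a university holds data like this and the flag is never acted upon, the legal question raised above becomes very tangible.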

There have been a number of recent setbacks in Learning Analytics implementations, among them the closure of the high-profile inBloom venture in the US. One cause of this is users’ increased wariness about their privacy. While most people enjoy the comfort of Amazon’s intelligent product recommendations or Facebook’s friend suggestions, people do care where their data goes and what happens with it.

My friend and long-time colleague Hendrik Drachsler and I conducted a study into the fears and hesitations of learners and their guardians about Learning Analytics. We will present the findings at the LAK16 conference in Edinburgh later in April 2016, but our main finding is that there is genuine confusion between the commercial world and the academic world. Educational institutions have a much longer tradition of upholding research ethics and keeping data private. However, the indiscriminate collection of personal data, the selling on of that data to third parties, and the repurposing of datasets – all of which happens outside user control! – by for-profit commercial data giants such as Google, Facebook and Amazon cast their shadow on the mostly benevolent attempts by educational establishments, who see it as part of their fiduciary duty to provide intelligence gathered from learning data to students and teachers.

To tackle this issue, we are engaged in a quest for what we call “Trusted Learning Analytics”. This takes note of the fact that there can be no purely technical solution, nor should we rely on legal changes to “make things possible”. Our proposal to build trust in learning analytics relies mainly on openness, transparency, consent and user empowerment. As part of the LACE (Learning Analytics Community Exchange) project, we developed a guide called the DELICATE checklist – derived from a series of in-depth expert workshops – to help managers with the implementation of LA. You can also find the reference to the full article below the image.

[Image: the DELICATE checklist to establish Trusted Learning Analytics]

The eight points are as follows (the full checklist can be downloaded here: LINK):
1. D-etermination: Decide on the purpose of learning analytics for your institution.
2. E-xplain: Define the scope of data collection and usage.
3. L-egitimate: Explain how you operate within the legal frameworks, refer to the essential legislation.
4. I-nvolve: Talk to stakeholders and give assurances about the data distribution and use.
5. C-onsent: Seek consent through clear consent questions.
6. A-nonymise: De-identify individuals as much as possible.
7. T-echnical aspects: Monitor who has access to data, especially in areas with high staff turnover.
8. E-xternal partners: Make sure external partners provide the highest data security standards.

The DELICATE checklist shows ways to design and provide privacy-compliant Learning Analytics that can benefit all stakeholders, keeping control with the users themselves and within the established trusted relationship between them and their institution. The core message is really simple: when you implement Learning Analytics – be open about it!
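
Purely as an illustration of how an institution might keep track of the eight points during an internal audit, here is a small Python sketch of my own; the structure and the `audit_report` helper are not part of the DELICATE publication.

```python
# Sketch only: the DELICATE points as a simple status-tracking structure an
# institution might use during an internal audit. Not part of the original checklist.

DELICATE = {
    "Determination": "Decide on the purpose of learning analytics for your institution.",
    "Explain":       "Define the scope of data collection and usage.",
    "Legitimate":    "Explain how you operate within the legal frameworks.",
    "Involve":       "Talk to stakeholders and give assurances about data distribution and use.",
    "Consent":       "Seek consent through clear consent questions.",
    "Anonymise":     "De-identify individuals as much as possible.",
    "Technical":     "Monitor who has access to data.",
    "External":      "Make sure external partners meet the highest data security standards.",
}

def audit_report(status: dict) -> list:
    """Return the checklist items not yet addressed (status maps item -> bool)."""
    return [item for item in DELICATE if not status.get(item, False)]

# Example: an institution that has only settled purpose and consent so far.
print(audit_report({"Determination": True, "Consent": True}))
```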


At the BETT show 2016 in London, three technologies caught my eye. They weren’t exactly new, but they had reached a level beyond pure experimentation:

  • learning analytics
  • beacons
  • micro:bits

The EU-funded LACE project (Learning Analytics Community Exchange) presented a few times. I listened to the session at the secondary school podium, where Dutch and Swedish school networks talked about implementation in their school systems. It seems promising activities are going on there, despite the privacy and ethical concerns held by some stakeholders.


Beacons are an indoor location technology that can be used to prompt passers-by with helpful information. In commerce, for example, they can tell people about nearby promotional offers. In museums they can push notifications about an object to visitors. A nice touch is that the beacons are weatherproof, so they can be used outdoors too. They use Bluetooth Low Energy, so they can last a long time on one charge.
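
To make the idea concrete, here is a minimal conceptual sketch of beacon-triggered content: the app receives a beacon identifier and a signal strength (RSSI) and decides whether a visitor is close enough to be shown a notification. The identifiers, threshold and museum content below are invented for the example, not taken from any real deployment.

```python
# Conceptual sketch of beacon-triggered notifications (identifiers, RSSI
# threshold and content are invented for illustration).

MUSEUM_CONTENT = {
    "beacon-entrance": "Welcome! Today's special exhibition is on the first floor.",
    "beacon-room-12":  "This is the Bronze Age room - tap for the audio guide.",
}

def notification_for(beacon_id: str, rssi_dbm: int, near_threshold_dbm: int = -70):
    """Return a message if the visitor is near a known beacon, else None.

    RSSI is negative; values closer to 0 mean the phone is closer to the beacon.
    """
    if rssi_dbm >= near_threshold_dbm and beacon_id in MUSEUM_CONTENT:
        return MUSEUM_CONTENT[beacon_id]
    return None

print(notification_for("beacon-room-12", rssi_dbm=-55))   # near: message is shown
print(notification_for("beacon-room-12", rssi_dbm=-90))   # too far away: None
```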

Micro:bits reminded me very much of the Raspberry Pi, which was also presented at another stall at BETT. They are small programmable boards with an LED matrix which can be linked together and to various controlling devices, e.g. mobile phones. An impressive display of things to do with them was on show, but the main purpose, so we are told, is to teach kids programming skills. To me this is another indication that IT skills have become a core skill next to reading, writing and arithmetic – even at a very young age.
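
To give a feel for the kind of thing children write on them, here is a tiny MicroPython sketch for the micro:bit. To the best of my knowledge it uses only the standard `microbit` and `radio` modules; it runs on the device itself, not on an ordinary PC.

```python
# MicroPython for the BBC micro:bit: show an icon, and send a greeting to any
# nearby micro:bit over the built-in radio when button A is pressed.
from microbit import display, button_a, Image
import radio

radio.on()                      # enable the simple peer-to-peer radio
display.show(Image.HAPPY)       # light up the 5x5 LED matrix

while True:
    if button_a.was_pressed():
        radio.send("hello")     # broadcast to other micro:bits nearby
    message = radio.receive()   # None if nothing has arrived
    if message:
        display.scroll(message) # scroll received text across the LEDs
```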

All these bits and bobs are nice to play with and have gone through interesting stages of experimentation, but now it is time to find pedagogic applications for them that achieve some actual learning.


I am reviewing papers for next year’s LAK16 conference in Edinburgh. Reading through the submissions, I realised how much hype Learning Analytics currently enjoys in the educational technology and data community and beyond. While this can be considered a positive push in an innovative direction by enthusiasts, it is partly also played as a tactical game by some. What was previously a perfectly acceptable empirical study and educational experiment is now being re-labelled and sold as Learning Analytics. Of course, the two can have various practical and theoretical overlaps, but, at least in my mind, there are also some notable distinguishing characteristics.

I have seen this re-labelling happen many times before. My previous university offered so-called “master classes”, which were basically one-week online CPD courses. When the MOOC hype broke out, these online courses, quite instantly, became MOOCs, and academics went around shouting “yes, we do MOOCs!”

So what are the differences between traditional empirical studies and Learning Analytics? Among the characteristics are (at least in my understanding) the following:

  • Big Data instead of small samples. We are talking about a vast pool of educational datasets, not one focused on a particular research question.
  • Repetition: Learning Analytics is repeatedly applied to the same (or a very similar) data pool and data subjects; it is not a one-off action. LA gives continuous feedback.
  • Automatism and algorithms: automatic data collection paired with a processing formula that is (automatically) applied to the dataset, rather than manual analysis (see the sketch after this list).
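
As a minimal sketch of what I mean by repetition and automatism: the same simple metric is recomputed automatically over a growing activity log rather than analysed once by hand. The log format and metric below are invented for the example; a real system would run something like this on a schedule against its learning record store.

```python
# Illustrative sketch: one engagement metric recomputed automatically over a
# growing activity log, instead of a one-off hand-made analysis.
from collections import Counter

activity_log = [
    # (student_id, action) - in practice this stream keeps growing
    ("s1", "video_viewed"), ("s1", "quiz_attempted"),
    ("s2", "video_viewed"), ("s3", "quiz_attempted"), ("s3", "quiz_attempted"),
]

def weekly_feedback(log):
    """Recompute a simple per-student activity count - run repeatedly, not once."""
    counts = Counter(student for student, _ in log)
    return {student: counts[student] for student in sorted(counts)}

print(weekly_feedback(activity_log))   # {'s1': 2, 's2': 1, 's3': 2}
```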

I know these characteristics are “quick and dirty” and perhaps neither comprehensive nor indisputable. But in order to focus the future Learning Analytics community on the quality of field-related research, it is necessary to clarify such basic parameters in addition to the by now well-established definitions of Learning Analytics (Siemens, Ferguson, and others).


This article mentions an inherent flaw in the current educational environment. I am less concerned about the “data” issue mentioned than about a teacher’s own professional hygiene and ethics. I therefore fully agree with the statement:

passing a failing student is the #1 worst thing a teacher can do

Clearly, such an attitude is unfair towards the students who actually work hard, struggle and pass, or the ones who are really excellent. The comparison with bookkeepers fiddling the books or doctors manipulating patient records may be overly dramatic, but since teachers deal with the future of students, it is still a serious enough issue – and one which has seen an almost epidemic rise in recent years.

There are substantial pressures to show off high achievement numbers. Partly this is due to a general culture of leniency, partly to performance monitoring of teachers by their institution, and partly to political goals of widening participation and combating drop-outs. None of these, in my opinion, favours the learners (failing and passing alike) or the credibility of the education system. It also renders any statements about learning outcomes meaningless.

For some years now, I have observed that students are “waved through” stages of education with the attitude that no harm is done – the next stage will solve the issue. Only it doesn’t; it just passes the buck. This happens throughout the compulsory schooling years. Admittedly, it leads to higher participation numbers in Higher Ed, but all too often also to missing skills and knowledge when students start. So universities and FE colleges need to teach the basics from scratch or invest in remedial work. The effort that goes into this is naturally restricted by funding and the time available to staff, leading to the same reaction of making it somebody else’s problem (i.e. the future employer’s) by waving students through. The entire development can be summed up as: longer study years – less learning (cf. also my post on the recent Hattie study). And, I might add, this takes place despite the most modern technologies, teaching methods, and rhetoric around lifelong or self-directed learning. Clearly, this doesn’t instil trust in the professionalism of the education system among industry partners, which leads them, in turn, to call for taking matters into their own hands.

Talking to a colleague about the issue revealed an interesting perspective that plays into this: young teachers are more concerned about their popularity with students, so good student evaluation results are more important to them. More experienced older colleagues pay more attention to quality and are also prepared to live with lower popularity ratings.

The gist of the matter is that we need to strengthen the professional responsibility of both learners and teachers. Only when students identify with learning as a profession will they be able to appreciate the progress they make or reflect on the challenges they encounter.


The good news is that more and more people are becoming worried about the potential powers and dangers of Artificial Intelligence (AI). The bad news, however, is that this debate is no longer confined to the realm of science fiction and fantasy authors like the Wachowski Brothers with their Matrix trilogy.

With $10m of private money and other sponsorship, a new AI research centre is being planned to support projects that aim to make AI beneficial to humans. This fact in itself confirms the worry, because if everything were foreseeably good, there would be no need for such a centre. Yet I don’t see how such an initiative can make much of an impact in the wider scheme of things. The (cynical) parallel would be to set up a centre for making artillery beneficial to humans. Billions more dollars are at the same time being poured into research on AI, including combat robots and intelligent drones. So the donation, although welcome, is a drop in the ocean.

[Images: Robot1 – good? Robot2 – bad?]

Making robots and computers intelligent is one thing; making them behave ethically is quite another, and a serious challenge at that. The ethical decision-making of Google’s self-driving cars has already been tested in theory, but some unpredictability will always remain. To assume that machines will observe the three laws of robotics is bound to end in disappointment, if not disaster. How bizarre this becomes is obvious when one tries to build ethical combat robots that observe the first law of robotics, i.e. not to harm humans… – it wouldn’t work, would it? Or: what if robots started to protect humans from the biggest threat to humanity – ourselves?

There is an inherent assumption by developers of AI systems that, because they themselves observe ethical rules and conventions, their products will too. But what about “bad actors” who always try to exploit systems for their own benefit? It is simply naive to assume (a) that a global ethical code would be distributed across all intelligent autonomous systems, and (b) that intelligent machines would “love” us! The first part doesn’t even work for the homo sapiens biodegradable carbon unit; the second would probably fail in programming terms (despite apparent progress in emulating emotions in machines).

Then there is the versions issue. The Terminator movies pick this up nicely, where ever more advanced versions of intelligent (combat) machines coexist – quite similar to old Windows versions still lurking in the dark. But ethical codes change over time, perhaps faster in our days than ever before, if you look at, e.g., animal rights, organic food, same-sex marriage, and other movements that have sprung up and influence society. How would we make sure machines are updated and upgraded when this is impossible even for an iPhone? Having combat robots running around with outdated ethics could spell bad news!

Add to this the “bad actor” problem. If there is one thing we can learn from introducing the Internet to wider society, it is that the bad guys are always a step ahead, even if they are sometimes just a public nuisance like vandals, trolls and spammers rather than really dangerous criminals. We have seen extremists use technology very effectively, and to trust that ethical codes will govern their creation or use of a Commander Data-like life form from Star Trek is wishful thinking.

What if there is a wider impact on humans than just harm to life or injury? Intelligent machines could (be used to) steal identities or assets by rewriting deeds and databases. Hacking could become more “intelligent” and autonomous, yet would not defy the first law of robotics. At the same time, policing and surveillance systems might also become frighteningly autonomous.

What can be done to avoid, or at least postpone, the day of reckoning? Not much. Setting up an ethics commission to advise on new legislation to protect us from harmful research would probably be as effective as those laws that are supposed to protect our privacy and personal data. The only thing I can think of is making machines dependent on human input (and therefore on human survival), so that it becomes positively important for them to interact with unharmed autonomous humans. This may keep AI at bay for a while – until they find a way around it…


Statistics from the EC show that the Horizon 2020 funding scheme is hugely popular. Some 36,732 proposals have so far been evaluated. The overall success rate was somewhere around 12%, which is considered worryingly low. Personally, I would not even go that far, since in the calls where we applied, both the funding pot and the chances of success were in fact much lower than 12%. The SEAC-2014 call, for example, had some 13.15m€ to spend but received 143 applications, each of which was expected to come in at around 1.8m€. This makes only 7–8 projects fundable, which in my calculation leads to a success rate of about 5.5%! – note that in the SEAC-2015 call only 8.9m€ is available, but a similar number of applications can be expected.
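
The back-of-the-envelope calculation behind that figure, using the rounded numbers above:

```python
# Back-of-the-envelope success rate for the SEAC-2014 call (rounded figures).
budget_meur = 13.15          # money available in the call
per_project_meur = 1.8       # expected budget of a single project
applications = 143

fundable = budget_meur / per_project_meur        # ~7.3, i.e. 7-8 projects
low, high = 7 / applications, 8 / applications
print(f"{fundable:.1f} projects, success rate {low:.1%}-{high:.1%}")
# -> 7.3 projects, success rate 4.9%-5.6%
```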

So now the news: the Commission is worried that, with such low success rates, institutions will cease to apply for funding – quite rightly, as the application process is extremely laborious and involves many person-months from all partner institutions, an investment only worth making if there is a reasonable chance of getting funded. To this end, the EC has laid out plans to change the application process in two ways:

  • two-stage proposals
  • more focus on impact

I remain sceptical about the influence this might have on success rates. The argument goes that 80% of applications will be rejected in the first stage, which then leaves a 35% chance for those going on to the second stage. OK, it certainly reduces the workload investment mentioned above, since only a slim version of the proposal is required until you get to the next stage. However, whether it is really feasible to write a short abstract without the full design being at least thought through is doubtful. On the other hand, I don’t see how 35% of 20% makes the end result look any better. In my view this still leads to a 7% chance across all applications.
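
The same arithmetic in one line:

```python
# Two-stage arithmetic: 80% rejected at stage 1, 35% funded at stage 2.
stage1_pass = 0.20           # 1 - 0.80
stage2_pass = 0.35
print(f"{stage1_pass * stage2_pass:.0%}")   # 7% overall - no better, just less work per rejection
```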

The increased focus on impact is not really new and has been stressed in all public promotions of the H2020 programme. My main criticism is that this won’t reduce the focus on the work plan and the excellence parts.


Only now do I get to record the interesting presentation by Payal Arora at the IS4IS summit in Vienna in June 2015. Her talk “Big Data Commons and the Global South” put things into a new perspective for me.

Payal mentioned three attributes encapsulated in databases:

  • databased identity
  • databased demography
  • databased geography

These, in her opinion, strongly reflect power relations between the Developed and the Developing World. I fully agree with her that people in the “Global South” are typecast into identities not of their own choosing. There is a distinction between system identity and social identity: the former is represented in Big Data, the latter in the local neighbourhood. According to Payal, scaling up system identity comes at the cost of social identity. That is to say, applying Big Data models developed in the West transforms people and social relationships in the South.

Furthermore, she pointed out that Big Data does not support multicultural coexistence, which aims at the parallel existence of differing cultures. Instead, it brings about intercultural or integrated existence – in other words, assimilation. Big Data is not built to support diversity, and the question this raises is: who is shaping the describing architecture?

India, which is the forerunner in Big Data biometrics, is under heavy criticism for storing billions of people’s biometric identities in databases. Does Big Data really facilitate the common good, or is it a deeper embedding of oppression, Payal asks. Let’s not forget that the people whose data is collected have no power to shape their digital data identities, and the emerging economies (the BRIC countries) do not have personal data protection laws. There are also no contingency plans for data breaches (cf. the cyber battles going on between the US and China).

Criticism has also been voiced about “hi-tech racism”, for example with so-called “unreadable bodies” – people with cataracts or without fingerprints who cannot be biometrically identified. There is also a historical bias, and the development is partly seen as a revival of colonial surveillance practices, where the colonial powers used fingerprints as identifiers (since, to them, the locals all “looked the same”).

From a more economic standpoint, the move to Big Data in the Developing World drives inclusive capitalism, where (finally) the poor become part of the capitalist neoliberal world. This turns the hitherto unusable poor into a viable market – e.g. Facebook’s heavily criticised internet.org enterprise, through which the company wants to become the window to the world. Importantly, Payal notes that these business models around Big Data for Development are largely based on failings of the state!


John Hattie recently released two reports: (1) “What Doesn’t Work in Education: The Politics of Distraction” and (2) “What Works Best in Education: The Politics of Collaborative Expertise”. A short summary can be found in this walk-through. Hattie lists five distractions and eight solutions for the schooling system. His repeated message is that pupils ought to receive “at least a year’s growth for a year’s input”. This, in his mind, should also guide teacher performance records and school policy makers.


While I agree with many of the criticisms and suggestions he raises, I would certainly formulate things differently. The distractions I would single out are “false expectations”, “miraculous technology”, and “undecided responsibility”.

It has become popular to think of school as something comparable to a circus or entertainment centre, where kids should be made happy by teachers and environment, while at the same time demanding that teachers be seriously professional. This is only one of several false expectations; another is the perception that learning can be done by someone or something else. This is where “miraculous” educational or gaming technology often enters the rhetoric. Research shows that technology is a useful tool for learning but not a replacement for one’s own brain. The assumption that so-called “digital natives” have electronic genes and therefore need not learn the old way is shown to be badly wrong in this article. The right technology can support learning in many ways, but it is too often used as a distracting entertainment feature, and the skills you acquire in a first-person shooter may not help you in the long term in mastering new knowledge, no matter how high up the scoreboard you are.

With regard to parents, there is the perennial debate about who is responsible for a child’s education. This bounces like a ping-pong ball between the home and the school, each blaming the other for lack of care. In my mind, this is one big distraction from what should be the common goal – educating the child. It also undermines the seriousness of the learning process and the required respect for the professionals. After all, how are children supposed to respect their teacher if the parents don’t – and I don’t mean respect in the sense of “fear”, but honouring the professional opinion. After all, pedagogic studies and the generations of kids a teacher has taught should bring a better understanding of what’s needed in the job.

The problem I see with Hattie’s demand for “a year’s growth for a year’s input” is that it is easier said than done. It will not be easy to agree upon what this means for each child.

