Tue 2 Feb 2016
DELICATE privacy in Learning Analytics
There have been a number of recent setbacks in Learning Analytics implementations, among them the closure of the high-profile inBloom venture in the US. A major cause is users' increased wariness about their privacy. While most people enjoy the comfort of Amazon's intelligent product recommendations or Facebook's friend suggestions, people care about where their data goes and what happens with it.
Together with my friend and long-time colleague Hendrik Drachsler, I studied the fears and hesitations of learners, and of their guardians, about Learning Analytics. We will present these findings at the LAK16 conference in Edinburgh in April 2016, but our main finding is that there is genuine confusion between the commercial world and the academic world. Educational institutions have a much longer tradition of upholding research ethics and keeping data private. However, the indiscriminate collection of personal data, the selling of that data to third parties, and the repurposing of datasets – all of which happens outside user control! – by the for-profit data giants Google, Facebook, Amazon, et al. cast a shadow on the mostly benevolent attempts of educational establishments, which see it as part of their fiduciary duty to provide intelligence gathered from learning data to students and teachers.
To tackle this issue, we have embarked on a quest for what we call "Trusted Learning Analytics". This acknowledges that there can be no purely technical solution, nor should we rely on legal changes to "make things possible". Our proposal for building trust in learning analytics relies mainly on openness, transparency, consent and user empowerment. As part of the LACE (Learning Analytics Community Exchange) project, we developed a guide called the DELICATE checklist – derived from a series of in-depth expert workshops – to help managers with the implementation of LA.
The eight points are:
1. D-etermination: Decide on the purpose of learning analytics for your institution.
2. E-xplain: Define the scope of data collection and usage.
3. L-egitimate: Explain how you operate within the legal frameworks, refer to the essential legislation.
4. I-nvolve: Talk to stakeholders and give assurances about the data distribution and use.
5. C-onsent: Seek consent through clear consent questions.
6. A-nonymise: De-identify individuals as much as possible.
7. T-echnical aspects: Monitor who has access to the data, especially in areas with high staff turnover.
8. E-xternal partners: Make sure external partners comply with the highest data security standards.
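To make point 6 a little more concrete: de-identification can start with something as simple as replacing direct identifiers with salted pseudonyms before any analysis takes place. The sketch below is illustrative only – the field names and salt handling are my own assumptions, not part of the checklist, and salted hashing is pseudonymisation rather than full anonymisation, so it is a first step, not the whole job.

```python
# Minimal pseudonymisation sketch: replace a student ID with a salted
# hash so records can still be linked for analysis without exposing
# the real identifier. Field names here are hypothetical.
import hashlib

SALT = b"institution-secret-salt"  # in practice: kept secret and managed per policy


def pseudonymise(student_id: str) -> str:
    """Derive a stable pseudonym from a student ID."""
    return hashlib.sha256(SALT + student_id.encode()).hexdigest()[:16]


record = {"student_id": "s1234567", "course": "LA101", "logins": 42}
safe_record = {**record, "student_id": pseudonymise(record["student_id"])}
```

The same input always yields the same pseudonym, so longitudinal analysis still works; re-identification, however, remains possible for anyone holding the salt, which is why access to it must itself be monitored (point 7).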
The DELICATE checklist shows ways to design and provide privacy-conscious Learning Analytics that can benefit all stakeholders, keep control with the users themselves, and stay within the established, trusted relationship between them and their institution. The core message is really simple: when you implement Learning Analytics, be open about it!
Sat 30 Jan 2016
Trends in learning technologies 2016
At the BETT show 2016 in London three technologies caught my eye. They weren’t exactly new, but had reached a level beyond pure experimentation:
- learning analytics
The EU-funded LACE project (Learning Analytics Community Exchange) presented a few times. I listened to the session on the secondary school podium, where Dutch and Swedish school networks talked about implementation in their school systems. Promising activities seem to be going on there, despite the privacy and ethical concerns held by some stakeholders.
- beacons
Beacons are an indoor location technology that can be used to prompt passers-by with helpful information. In commerce, for example, they can tell people about nearby promotional offers. In museums they can push notifications about an object to visitors. A nice feature is that the beacons are weatherproof, so they can be used outdoors too. They use Bluetooth Low Energy, so they can last a long time on one charge.
- micro:bits
Micro:bits reminded me very much of the Raspberry Pi, which was also presented at another stall at BETT. They are programmable chips with LED displays which can be linked together and to various controlling devices, e.g. mobile phones. An impressive display of things to do with them was on show, but the main purpose, so we are told, is to teach kids programming skills. To me this is another indication that IT skills have become a core skill next to reading, writing and arithmetic – even at a very young age.
All these bits and bobs are nice to play with and have gone through interesting stages of experimentation, but now it is time to find pedagogic applications for them that achieve some actual learning.
Mon 9 Nov 2015
Learning Analytics – hype re-labelling of matter
I am reviewing papers for next year's LAK16 conference in Edinburgh. Reading through the submissions, I realised how much hype Learning Analytics currently enjoys in the educational technology and data community and beyond. While enthusiasts may consider this a positive push in an innovative direction, it is partly also played as a tactical game by some. What was previously a perfectly acceptable empirical study or educational experiment is now being re-labelled and sold as Learning Analytics. Of course, the two can overlap in various practical and theoretical ways, but, at least in my mind, there are also some notable distinguishing characteristics.
I have seen this re-labelling happen many times before. My previous university offered so-called "master classes", which were basically one-week online CPD courses. When the MOOC hype broke out, these webinars almost instantly became MOOCs, and academics went around shouting "yes, we do MOOCs!"
So what are the differences between traditional empirical studies and Learning Analytics? Among the characteristics are (at least in my understanding) the following:
- Big Data instead of small samples. We are talking here about a vast pool of educational datasets, not one that is focused on a particular research question.
- Repetition: Learning Analytics is repeatedly done over the same (or very similar) data pool and data subjects, not a one-off action. LA gives continuous feedback.
- Automation and algorithms: Automatic data collection paired with some processing formula that is (automatically) applied to the dataset, rather than hand-crafted analysis.
I know these characteristics are "quick and dirty" and perhaps neither comprehensive nor indisputable, but in order to focus the future Learning Analytics community on the quality of field-related research, it is necessary to clarify such basic parameters in addition to the by now well-established definitions of Learning Analytics (Siemens, Ferguson, and others).
Mon 10 Aug 2015
Passing failing students
This article points to an inherent flaw in the current educational environment. I am less concerned about the "data" issue mentioned than about a teacher's own professional hygiene and ethics. I therefore fully agree with the statement:
passing a failing student is the #1 worst thing a teacher can do
Clearly, such an attitude is unfair towards the students who actually study hard, struggle and pass, or those who are truly excellent. The comparison with bookkeepers fiddling the books or doctors manipulating patient records may be overly dramatic, but since teachers deal with the future of students, it is still a serious issue, and one that has seen an almost epidemic rise in recent years.
There are substantial pressures to show off high achievement numbers. Partly this is due to a general culture of leniency, partly to performance monitoring of teachers by their institutions, and partly to political goals of furthering participation and combating drop-outs. None of these, in my opinion, favours the learners (failing and passing alike) or the credibility of the education system. It also renders any statements about learning outcomes meaningless.
For some years now, I have observed students being "waved through" stages of education with the attitude that this causes no harm, since the next stage will solve the issue. Only it doesn't, and just passes the buck. This happens throughout the compulsory schooling years. OK, it leads to higher participation numbers in Higher Education, but all too often to missing skills and knowledge when students start. So universities and FE colleges need to teach basics from scratch or invest in remedial work. The effort that goes into this is naturally restricted by funding and the time available to staff, leading to the same reaction of making it somebody else's problem (i.e. the future employer's) by waving students through. The entire development can be summed up as: longer study years – less learning (cf. also my post on the recent Hattie study). And, I might add, this takes place despite the most modern technologies, teaching methods, and rhetoric around lifelong or self-directed learning. Clearly, this does not instil trust in the professionalism of the education system among industry partners, which leads them, in turn, to take matters into their own hands.
Talking to a colleague about the issue revealed an interesting perspective that plays into this: young teachers are more concerned about their popularity with students, so good student evaluation results matter more to them. More experienced, older colleagues pay more attention to quality and are prepared to live with lower popularity ratings.
The gist of the matter is that we need to strengthen the professional responsibility of both learners and teachers. Only when students identify with learning as a profession will they be able to appreciate the progress they make and reflect on the challenges they encounter.
Wed 22 Jul 2015
AI and the Day of Reckoning
The good news is that more and more people are becoming worried about the potential powers and dangers of Artificial Intelligence (AI). The bad news, however, is that this debate is no longer confined to the realm of science fiction and fantasy authors like the Wachowskis with their Matrix trilogy.
With $10m of private money and other sponsorship, a new AI research centre is being planned to support projects that aim to make AI beneficial to humans. This fact in itself confirms the worry, because if everything were foreseeably good, there would be no need for such a centre. Yet I don't see how such an action can make much of an impact in the wider scheme of things. The (cynical) parallel would be to set up a centre for making artillery beneficial to humans. Billions more dollars are at the same time being poured into research on AI, including combat robots and intelligent drones. So the donation, while welcome, is a drop in the ocean.
Making robots and computers intelligent is one thing; making them behave ethically is quite another, and a serious challenge at that. The ethical decision making of Google's self-driving cars has already been tested in theory, but some unpredictability will always remain. To assume that machines will observe the three laws of robotics is bound to end in disappointment, if not disaster. How bizarre this becomes is obvious when one tries to build ethical combat robots that observe the first law of robotics, i.e. not to harm humans – it wouldn't work, would it? Or: what if robots started to protect humans from the biggest threat to humanity – ourselves?
There is an inherent assumption by developers of AI systems that because they themselves observe ethical rules and conventions, their products will too. But what about "bad actors" who always try to exploit systems for their own benefit? It is simply naive to assume (a) that there would be a global ethical code distributed across all intelligent autonomous systems, and (b) that intelligent machines would "love" us! The first doesn't even work for the homo sapiens biodegradable carbon unit; the second would probably fail in programming terms (despite apparent progress in emulating emotions in machines).
Then there is the versioning issue. The Terminator movies pick this up nicely, where ever more developed versions of intelligent (combat) machines coexist – quite similar to old Windows versions still lurking in the dark. But ethical codes change over time, perhaps faster in our days than ever before, if you look at e.g. animal rights, organic food, same-sex marriage, and other movements that have sprung up and influenced society. How would we make sure machines are updated or upgraded, when this is impossible even for an iPhone? Having combat robots running around with outdated ethics could spell bad news!
Add to this the "bad actor" problem. If there is one thing we can learn from introducing the Internet to wider society, it is that the bad guys are always a step ahead, even if they are sometimes just a public nuisance like vandals, trolls and spammers rather than dangerous criminals. We have seen extremists use technology very effectively, and trusting that ethical codes will govern their creation or use of a Commander Data-like life form from Star Trek is wishful thinking.
What if there is a wider impact on humans than harm to life or injury? Intelligent machines could (be used to) steal identities or assets by re-writing deeds and databases. Hacking could become more "intelligent" and autonomous, yet would not violate the first law of robotics. At the same time, policing and surveillance systems, too, might become frighteningly autonomous.
What can be done to avoid or at least postpone the day of reckoning? Not much. Setting up an ethical commission to advise on new legislation to protect us from harmful research would probably be as effective as those laws that are supposed to protect our privacy and personal data. The only thing I can think of is making machines dependent on human input (and therefore human survival) so it becomes positively important to interact with unharmed autonomous humans. This may keep AI at bay for a while – until they find a way around that…
Thu 16 Jul 2015
EC tackles low success rate in H2020
Stats from the EC show that the Horizon 2020 funding scheme is hugely popular. Some 36,732 proposals have so far been evaluated. The overall success rate was somewhere around 12%, which is considered worryingly low. Personally, I would not even go that far, since in the calls where we applied, both the funding pot and the chances of success were in fact much lower than 12%. The SEAC-2014 call, for example, had some 13.15m€ to spend but received 143 applications, each of which was expected to be pitched at around 1.8m€. That makes only 7-8 projects fundable, which by my calculation leads to a success rate of roughly 5.5%! Note that in the SEAC-2015 call only 8.9m€ are available, while a similar number of applications can be expected.
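The arithmetic behind that figure is simple enough to sketch (using only the numbers quoted above; nothing here is official EC data):

```python
# Back-of-the-envelope success rate for the SEAC-2014 call,
# using the figures quoted in the post.
budget = 13.15e6       # total call budget in euros
project_size = 1.8e6   # expected size of each funded project
applications = 143

fundable = int(budget // project_size)    # 7 projects fit into the pot
success_rate = fundable / applications
print(fundable, f"{success_rate:.1%}")
```

Depending on whether 7 or 8 projects end up being funded, the rate lands between roughly 4.9% and 5.6%, consistent with the ~5.5% quoted above.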
So now the news: the Commission is worried that, with such low success rates, institutions will cease to apply for funding – quite rightly, as the application process is extremely laborious and involves many person-months from all partner institutions, an investment only worth making if there is a reasonable chance of getting funded. To this end, the EC has laid out plans to change the application process in two ways:
- two stage proposals
- more focus on impact
I remain sceptical about the influence this might have on success rates. The argument goes that 80% of applications will be rejected in the first stage, which then leaves a 35% chance for those going on to the second stage. OK, it certainly reduces the workload investment mentioned above, since only a slim version is required until you get to the next stage. However, whether it is really feasible to write a short abstract without at least having thought the full design through is doubtful. On the other hand, I don't see how 35% of 20% makes the end result look any better: in my view this amounts to a 7% chance across all applications.
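The two-stage arithmetic can be spelled out in a few lines (same percentages as above):

```python
# Two-stage evaluation: 80% rejected at stage one, so 20% survive;
# of those survivors, 35% are funded at stage two.
stage1_survival = 0.20
stage2_success = 0.35

overall = stage1_survival * stage2_success  # i.e. a 7% overall chance
print(f"{overall:.0%}")
```

So the split changes when applicants learn of rejection and how much work they sink in beforehand, but not the overall odds.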
The increased focus on impact is not really new; it has been stressed in all public promotions of the H2020 programme. My main criticism is that it will not reduce the effort required for the workplan and excellence parts.
Thu 16 Jul 2015
Big Data and the Developing World
Only now do I get to record the interesting presentation by Payal Arora at the IS4IS summit in Vienna in June 2015. Her talk "Big Data Commons and the Global South" put things into a new perspective for me.
Payal mentioned three attributes encapsulated in databases:
- databased identity
- databased demography
- databased geography
These, in her opinion, strongly reflect power relations between the Developed and the Developing World. I fully agree with her that people in the "Global South" are typecast into identities not of their own choosing. There is a distinction between system identity and social identity, the former represented in Big Data, the latter in the local neighbourhood. According to Payal, scaling up system identity comes at the cost of social identity. That is to say, applying Big Data models developed in the West transforms people and social relationships in the South.
Furthermore, she pointed out that Big Data does not support multicultural coexistence, which aims at the parallel existence of differing cultures. Instead, it brings about intercultural or integrated existence, in other words: assimilation. Big Data is not built to support diversity, and the question this raises is: who is shaping the architecture that does the describing?
India, the forerunner in Big Data biometrics, is under heavy criticism for storing billions of people's biometric identities in databases. Does Big Data really facilitate the common good, or is it a deeper embedding of oppression, Payal asks. Let's not forget that the people whose data is collected have no power to shape their digital data identities, and the emerging economies (the BRIC countries) do not have personal data protection laws. There are also no contingency plans for data breaches (cf. the cyber battles going on between the US and China).
Criticism has also been voiced about "hi-tech racism", for example the so-called "unreadable bodies" – people with cataracts or without fingerprints who cannot be biometrically identified. There is also a historical bias, and the development is partly seen as a revival of colonial surveillance practices, where the colonial powers used fingerprints as identifiers (since to them the locals all "looked the same").
From a more economic standpoint, the move to Big Data in the Developing World drives inclusive capitalism, where (finally) the poor become part of the capitalist neoliberal world. This turns the previously "unusable" poor into a viable market, e.g. via Facebook's heavily criticised internet.org enterprise, through which the company wants to become the window to the world. Importantly, Payal notes that these business models around Big Data for Development are largely based on failings of the state!
Wed 15 Jul 2015
A year's worth of progress
John Hattie recently released two reports: (1) "What Doesn't Work in Education: The Politics of Distraction" and (2) "What Works Best in Education: The Politics of Collaborative Expertise". A short summary is found in this walk-through. Hattie lists five distractions and eight solutions for the schooling system. His repeated message is that pupils ought to receive "at least a year's growth for a year's input". This, in his mind, should also drive teacher performance records and school policy makers.
While I agree with many of the criticisms and suggestions he raises, I would certainly formulate things differently. The distractions I would put down are “false expectations”, “miraculous technology”, and “undecided responsibility”.
It has become popular to think of school as something comparable to a circus or entertainment centre, where kids should be kept happy by teachers and their environment, while at the same time demanding that teachers be seriously professional. This is only one of several false expectations; another is the perception that learning can be done by someone or something else. This is where "miraculous" educational or gaming technology often enters the rhetoric. Research shows that technology is a useful tool for learning, not a replacement for one's own brain. The assumption that so-called "digital natives" have electronic genes and therefore need not learn the old way is shown to be badly wrong in this article. The right technology can support learning in many ways, but it is too often used as a distracting entertainment feature, and the skills you acquire in a first-person shooter may not help you master new knowledge in the long term, no matter how high up the scoreboard you are.
With regard to parents, there is the perennial debate about who is responsible for a child's education. This bounces like a ping-pong ball between the home and the school, each blaming the other for a lack of care. In my mind, this is one big distraction from what should be the common goal: educating the child. It also undermines the seriousness of the learning process and the required respect for the professionals. After all, how are children supposed to respect their teacher if the parents don't? And I don't mean respect in the sense of "fear", but honouring the professional opinion. After all, pedagogic studies and the generations of kids a teacher has taught should bring a better understanding of what's needed in the job.
The problem I see with Hattie’s demand for “a year’s growth for a year’s input” is that it is easier said than done. It will not be easy to agree upon what this means for each child.
Sun 14 Jun 2015
Open Education is supposed to be a good thing, right? Philanthropic promoters, like myself, stress the value of free open education as a public good. I perceive it as one of the few things that emerged from the 1990s utopia of the free and open Internet, which promised a "new world" and a new, fairer age: the digital age. Among other things, open education, in the minds of many, leads to the democratisation of and wider access to higher education around the world. Many university lecturers have contributed to an OER commons by allowing their materials to be used, re-sampled and shared openly.
I also don’t tire of saying that there is a long-standing tradition behind open education, as far as I can make out, going back to 1858 when the University of London allowed external access to its courses, this being the first outreach programme in the world. It became known as the ‘People’s University’ which would ‘extend her hand to the young shoemaker who studies in his garret’. An even older move to provide information and knowledge to the public at large were public libraries. Many private and other libraries provided open access already in the early 19th century, but the Public Libraries Act from 1850 made this a common good. The 1970s then, saw the establishment of publicly funded “open universities” as separate institutions. They soon mushroomed around the globe and have since provided especially working people with access to higher education.
We can safely say that, although traditionally marginalised in the general education system, open education has been and still is vital for public access to high-quality, curated information and knowledge. But where are we today? Is today's open education movement a social phenomenon or an economic plot? The humanistic motives behind the idea are not very different from those of the past: breaking with social class structures and empowering learners and workers.
But can open education only be a good thing in this world of greed, digital exploitation and digital labour? There is a dark side to it, no doubt, if you care to see it. Among other things, it carries on its wings a new imperialistic tactic comparable to the TV colonialism of the 1990s. At that time, Western TV companies dumped low-cost programmes like soap operas into the developing world for "free". These programmes, although quite transformative in themselves in the target countries, were only carriers of a much more infectious cultural virus: advertisements by giant US corporations. Local TV production could not compete, and many producers went out of business (similar things happened to the French and Italian movie industries, for example).
Facebook, quite rightly I think, is now heavily criticised for much the same tactic. With its benign-looking internet.org approach, promising free internet to all of India, it offers a Trojan horse. Bundled with access to Wikipedia, it stresses the educational value to millions of poor, deprived Indians. For who would dare to deny a poor "slumdog" the opportunity to become a millionaire? At the same time, Facebook channels internet access towards its own social network product and extracts private data from several million more users to feed its investors and its ad machines.
We should also note that the well-known open education initiatives are heavily dependent on venture capital (funding OE initiatives like MOOCs, or the earlier OCW initiative). This also includes open platforms and many open source software tools. What looks like a charitable and well-meaning opening of a supposedly closed system is no free give-away by universities! It is part of massive advertising campaigns, brand equity competitions, and the never-ending scaling-up agenda.
Despite all of this, I remain convinced that we need open education, especially at a time when there is a slow return to elitist access to campus-based HE. But it will not be the big movers and shakers who do all the good; it will be people like you and me!