This article mentions an inherent flaw in the current educational environment. I am less concerned about the “data” issue mentioned than about a teacher’s own professional hygiene and ethics. I therefore fully agree with the statement:

passing a failing student is the #1 worst thing a teacher can do

Clearly, such an attitude is unfair towards the students who actually work hard, struggle and pass, or those who are really excellent. The comparison with bookkeepers fiddling the books or doctors manipulating patient records may be overly dramatic, but since teachers deal with the future of students, it is still a serious issue, and one that has risen almost epidemically in recent years.

There are substantial pressures to show off high achievement numbers. Partly this is due to a general culture of leniency, partly to performance monitoring of teachers by their institution, and partly to political goals of widening participation and combating drop-outs. None of these, in my opinion, favours the learners (failing and passing alike) or the credibility of the education system. It also renders any statements about learning outcomes meaningless.

For some years now, I have observed that students are “waved through” stages of education with the attitude that this causes no harm, since the next stage will solve the issue. Only it doesn’t; it just passes the buck. This happens throughout the compulsory schooling years. True, it leads to higher participation numbers in Higher Education, but all too often also to missing skills and knowledge when students start. So universities and FE colleges have to teach the basics from scratch or invest in remedial work. The effort that goes into this is naturally restricted by funding and the time staff have available, which leads to the same reaction of making it somebody else’s problem (i.e. the future employer’s) by waving students through. The entire development can be summed up as: longer study years, less learning (cf. also my post on the recent Hattie study). And, I might add, this takes place despite the most modern technologies, teaching methods, and rhetoric around lifelong or self-directed learning. Clearly, this does not instil trust in the professionalism of the education system among industry partners, which leads them, in turn, to call for taking matters into their own hands.

Talking to a colleague about the issue revealed an interesting perspective that plays into this: young teachers are more concerned about their popularity with students, so good student evaluation results matter more to them. More experienced older colleagues pay more attention to quality and are prepared to live with lower popularity ratings.

The gist of the matter is that we need to strengthen the professional responsibility of both learners and teachers. Only when students identify with learning as a profession will they be able to appreciate the progress they make and reflect on the challenges they encounter.

The good news is that more and more people are becoming worried about the potential powers and dangers of Artificial Intelligence (AI). The bad news, however, is that this debate is no longer confined to the realm of science fiction and fantasy storytellers like the Wachowski Brothers with their Matrix trilogy.

With $10m of private money and other sponsorship, a new AI research centre is being planned to support projects that aim to make AI beneficial to humans. This fact in itself confirms the worry, because if everything were foreseeably good, there would be no need for such a centre. Yet I don’t see how such an action can make much of an impact in the wider scheme of things. The (cynical) parallel would be to set up a centre for making artillery beneficial to humans. Billions more dollars are at the same time being poured into research on AI, including combat robots and intelligent drones. So the donation, welcome as it is, remains a drop in the ocean.


[Images: “Good?” and “Bad?”]

Making robots and computers intelligent is one thing; making them behave ethically is quite another, and a serious challenge at that. The ethical decision making of Google’s self-driving cars has already been tested in theory, but some unpredictability will always remain. To assume that machines will observe the three laws of robotics is bound to end in disappointment, if not disaster. How bizarre this becomes is obvious when one tries to build ethical combat robots that observe the first law of robotics, i.e. not to harm humans… – it wouldn’t work, would it? Or: what if robots started to protect humans from the biggest threat to humanity – ourselves?

There is an inherent assumption by developers of AI systems that, because they themselves observe ethical rules and conventions, their products will too. But what about “bad actors” who always try to exploit systems for their own benefit? It is simply naive to assume (a) that there would be a global ethical code distributed across all intelligent autonomous systems, and (b) that intelligent machines would “love” us! The first part doesn’t even work for the homo sapiens bio-degradable carbon unit; the second would probably fail in programming terms (despite apparent progress in emulating emotions in machines).

Then there is the versions issue. The Terminator movies pick this up nicely: ever more developed versions of intelligent (combat) machines coexist – quite similar to old Windows versions still lurking in the dark. But ethical codes change over time, perhaps faster in our days than ever before, if you look at e.g. animal rights, organic food, same-sex marriage, and other movements that have sprung up and influence society. How would we make sure machines are updated or upgraded when this is impossible even for an iPhone? Combat robots running around with outdated ethics could spell bad news!

Add to this the “bad actor” problem. If there is one thing we can learn from introducing the Internet to wider society, it is that the bad guys are always a step ahead, even when they are merely a public nuisance like vandals, trolls and spammers rather than dangerous criminals. We have seen extremists use technology very effectively; to trust that ethical codes will govern their creation or use of anything like the Commander Data life form from Star Trek is wishful thinking.

What if there is a wider impact on humans than just injury or harm to life? Intelligent machines could (be used to) steal identities or assets by rewriting deeds and databases. Hacking could become more “intelligent” and autonomous, and yet would not violate the first law of robotics. At the same time, policing and surveillance systems too might become frighteningly autonomous.

What can be done to avoid, or at least postpone, the day of reckoning? Not much. Setting up an ethics commission to advise on new legislation to protect us from harmful research would probably be as effective as the laws that are supposed to protect our privacy and personal data. The only thing I can think of is making machines dependent on human input (and therefore on human survival), so that interacting with unharmed, autonomous humans becomes positively important to them. This may keep AI at bay for a while – until they find a way around that…

Stats from the EC show that the Horizon 2020 funding scheme is hugely popular. Some 36,732 proposals have so far been evaluated. The overall success rate was somewhere around 12%, which is considered worryingly low. Personally, I find even that figure optimistic, since in the calls where we applied, both the funding pot and the chances of success were in fact much lower than 12%. The SEAC-2014 call, for example, had some 13.15m€ to spend but received 143 applications, each expected to be funded at around 1.8m€. This makes only 7-8 projects fundable, which by my calculation gives a success rate of about 5.5%! Note that in the SEAC-2015 call only 8.9m€ are available, while a similar number of applications can be expected.
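
To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch using the figures quoted above (my own rough numbers from this post, not official EC statistics):

```python
# Rough success-rate arithmetic for the SEAC-2014 call (figures as quoted above)
budget = 13.15e6      # total call budget in euros
per_project = 1.8e6   # expected funding level per project in euros
applications = 143    # number of applications received

fundable = budget / per_project  # about 7.3, i.e. 7-8 fundable projects
low, high = 7 / applications, 8 / applications
print(f"Fundable projects: about {fundable:.1f}")
print(f"Success rate: {low:.1%} to {high:.1%}")  # roughly 4.9% to 5.6%
```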

So now the news: the Commission is worried that, with such low success rates, institutions will cease to apply for funding – quite rightly, as the application process is extremely laborious and involves many person-months from all partner institutions, an investment only worth making if there is a reasonable chance of getting funded. To this end, the EC lays out plans to change the application process in two ways:

  • two-stage proposals
  • a stronger focus on impact

I remain sceptical about the influence this might have on success rates. The argument goes that 80% of applications will be rejected in the first phase, which then leaves a 35% chance for those going on to the second stage. Granted, this certainly reduces the workload investment mentioned above, since only a slim version is required until you reach the next stage. However, whether it is really feasible to provide a short abstract without the full design being at least thought through is doubtful. More to the point, I don’t see how 35% of 20% makes the end result look any better: in my view it amounts to a 7% chance across all applications.
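
Spelled out, the compounding is straightforward (a sketch using the percentages quoted above):

```python
# How the two-stage figures compound into an overall success rate
stage1_pass = 0.20     # 80% of applications rejected in the first phase
stage2_success = 0.35  # quoted chance for proposals reaching the second stage

overall = stage1_pass * stage2_success
print(f"Overall chance across all applications: {overall:.0%}")  # 7%
```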

The increased focus on impact is not really new; it has been stressed in all public promotions of the H2020 programme. My main criticism is that this will not reduce the emphasis on the work plan and the excellence sections.

Only now do I get around to recording the interesting presentation by Payal Arora at the IS4IS summit in Vienna in June 2015. Her talk “Big Data Commons and the Global South” put things into a new perspective for me.

Payal mentioned three attributes encapsulated in databases:

  • databased identity
  • databased demography
  • databased geography

These, in her opinion, strongly reflect power relations between the Developed and the Developing World. I fully agree with her that people in the “Global South” are typecast into identities not of their own choosing. There is a distinction between system identity and social identity, the former represented in Big Data, the latter in the local neighbourhood. According to Payal, scaling up system identity information comes at the cost of social identity. That is to say, applying Big Data modelled in the West transforms people and social relationships in the South.

Furthermore, she pointed out that Big Data does not support multicultural coexistence, which aims at the parallel existence of differing cultures. Instead, it brings about intercultural or integrated existence, in other words: assimilation. Big Data is not built to support diversity, and the question this raises is: who shapes the describing architecture?

India, the forerunner in Big Data biometrics, is under heavy criticism for storing the biometric identities of its billion-plus population in databases. Does Big Data really facilitate the common good, Payal asks, or does it embed oppression more deeply? Let’s not forget that the people whose data is collected have no power to shape their digital data identities, and that the emerging economies (the BRIC countries) do not have personal data protection laws. There are also no contingency plans for data breaches (cf. the cyber battles going on between the US and China).

Criticism has also been voiced about “hi-tech racism”, for example concerning so-called “unreadable bodies” – people with cataracts or no readable fingerprints who cannot be biometrically identified. There is also a historical bias: the development is partly seen as a revival of colonial surveillance practices, where the colonial powers used fingerprints as identifiers (since to them the locals all “looked the same”).

From a more economic standpoint, the move to Big Data in the Developing World drives inclusive capitalism, where (finally) the poor become part of the capitalist neoliberal world. It turns the unusable poor into a viable market – witness Facebook’s heavily criticised venture, through which the company wants to become the window to the world. Importantly, Payal notes that these business models around Big Data for Development are largely built on failings of the state!

John Hattie recently released two reports: (1) “What Doesn’t Work in Education: The Politics of Distraction” and (2) “What Works Best in Education: The Politics of Collaborative Expertise”. A short summary can be found in this walk-through. Hattie lists five distractions and eight solutions for the schooling system. His repeated message is that pupils ought to receive “at least a year’s growth for a year’s input”. This, in his mind, should also drive teacher performance records and school policy making.


While I agree with many of the criticisms and suggestions he raises, I would certainly formulate things differently. The distractions I would name are “false expectations”, “miraculous technology”, and “undecided responsibility”.

It has become popular to think of school as something comparable to a circus or entertainment centre, where teachers and the environment are expected to keep kids happy, while at the same time demanding that teachers be seriously professional. This is only one of several false expectations; another is the perception that learning can be done by someone or something else. This is where “miraculous” educational or gaming technology often enters the rhetoric. Research shows that technology is a useful tool for learning, not a replacement for one’s own brain. The assumption that so-called “digital natives” have electronic genes and therefore need not learn the old way is shown to be badly wrong in this article. The right technology can support learning in many ways, but it is too often used as a distracting entertainment feature, and the skills you acquire in a first-person shooter may not help you master new knowledge in the long term, no matter how high up the scoreboard you are.

With regard to parents, there is the perennial debate about who is responsible for a child’s education. This bounces like a ping-pong ball between home and school, each blaming the other for a lack of care. In my mind, this is one big distraction from what should be the common goal: educating the child. It also undermines the seriousness of the learning process and the respect due to the professionals. After all, how are children supposed to respect their teacher if the parents don’t – and I don’t mean respect in the sense of “fear”, but honouring the professional opinion. Pedagogical training and the generations of kids a teacher has taught should, after all, bring a better understanding of what the job requires.

The problem I see with Hattie’s demand for “a year’s growth for a year’s input” is that it is easier said than done: agreeing on what this means for each individual child will not be easy.

Open Education is supposed to be a good thing, right? Philanthropic promoters like myself stress the value of free open education as a public good. I perceive it as one of the few things that emerged from the 1990s utopia of the free and open Internet, which promised a “new world” and a new, fairer age – the digital age. Among other things, open education, in the minds of many, leads to the democratisation of, and wider access to, higher education around the world. Many university lecturers have contributed to an OER commons by allowing their materials to be used, re-sampled and shared openly.

I also never tire of saying that there is a long-standing tradition behind open education, going back, as far as I can make out, to 1858, when the University of London allowed external access to its courses – the first outreach programme in the world. It became known as the ‘People’s University’, which would ‘extend her hand to the young shoemaker who studies in his garret’. An even older move to provide information and knowledge to the public at large was the public library. Many private and other libraries provided open access as early as the beginning of the 19th century, but the Public Libraries Act of 1850 made this a common good. The 1970s then saw the establishment of publicly funded “open universities” as separate institutions. They soon mushroomed around the globe and have since provided working people in particular with access to higher education.

We can safely say that, although traditionally marginalised in the general education system, open education has been and still is vital for public access to high-quality curated information and knowledge. But where are we today? Is today’s open education movement a social phenomenon or an economic plot? The humanistic motives that support the idea are not very different from those of the past: breaking with social class structures and empowering learners and workers.

But can open education only be a good thing in this world of greed, digital exploitation and digital labour? There is a dark side to it, no doubt, if you care to see it. Among other things, it carries on its wings a new imperialistic tactic comparable to the TV colonialism of the 1990s. At that time, Western TV companies dumped low-cost programmes like soap operas into the developing world for “free”. These programmes, although quite transformative in themselves in the target countries, were only carriers of a much more infectious cultural virus: advertisements by giant US corporations. Local TV production could not compete, and many companies went out of business (similar things happened to, e.g., the French and Italian film industries).

Facebook, quite rightly I think, is now heavily criticised for much the same tactic. With its benign-looking approach, promising free internet to all of India, it offers a Trojan horse. Bundled with access to Wikipedia, stresses its educational value to millions of poor, deprived Indians. For who would dare to deny a poor “slumdog” the opportunity to become a millionaire? At the same time, Facebook narrows internet access down to its own social network product and extracts private data from several million more users to feed its investors and its ad machines.

We should also note that the well-known open education initiatives, such as MOOCs or the earlier OCW initiative, are heavily dependent on venture capital. The same goes for open platforms and many open source software tools. What looks like a charitable and well-meaning opening of a supposedly closed system is no free give-away by universities! It is part of massive advertising campaigns, brand equity competitions, and the never-ending scaling-up agenda.

Despite all of this, I remain convinced that we need open education, especially at a time when there is a slow return to elitist access to campus-based HE. But it will not be the big movers and shakers who do all the good; it will be people like you and me!

Ever since Amazon brought out its first Kindle e-book reader in 2007, the techno gurus have hyped the rise of e-books, to the extent of predicting that print would soon disappear altogether, doomed to oblivion. The large majority of learning technologists anticipated the end of books and print (including newspapers) and invented ever more arguments for the innumerable benefits e-books bring to learning. It now seems they guessed wrong.

New studies reveal that the e-book revolution has come to a somewhat unexpected grinding halt. Despite the pervasive availability of e-book readers and apps, the market seems saturated and sales are stalling. In the US, where sales have stagnated since the first quarter of 2012, the market share of e-books lies at 30%. In Germany, it is even lower, having just reached 4.3% of the book market – hardly disruptive!

Analysts have found several reasons for this downturn, among them the lack of value for money. At practically the same price as the printed version, an e-book is far more restricted in use. You can’t lend it to friends, sell it on, or even put it in a “Little Free Library” for sharing. Anything you do with it – other than reading it – is considered piracy.

Another limitation is that many people don’t want to depend on battery life or carry yet another charging cable to their holiday destination and back. Being surrounded by technology at the workplace may also create an urge to disconnect.

I would add to this the lack of haptic value and of ownership. It’s nicer to wrap a book in gift paper than to give an e-book voucher, and nicer to hold your own book in your hands than a rented version of one.

This is alarming news: perhaps due to the pressure to “publish or perish”, a shadow publishing economy has developed that supports the faking of scientific research. We have heard before about faked data and results, plagiarism, and pseudo-journals where anything can be put into print.

This is yet another assault on research ethics, this time directed at the peer review process. It works by faking reviewer contacts and then producing the right kind of review in order to get published. All these fraudulent methods undermine the key thing that holds academia together: trust in the integrity of the system. The danger is that once this trust is sufficiently shaken by the actions mentioned here, the quality assurance process may come tumbling down. No longer would we be able to trust the peer review system supported by an expert community. Even worse, imagine if the medical research publications mentioned in the article made it into the common knowledge base! Would we still be able to distinguish genuine hard effort that may bring real advances in know-how from publications that merely seek selfish benefit?

I’m concerned that established publishers like Elsevier apparently fall for this.

This is an interesting book by Christian Fuchs: Digital Labour and Karl Marx.

A quote from the description:

The book “Digital Labour and Karl Marx” shows that labour, class and exploitation are not concepts of the past, but are at the heart of computing and the Internet in capitalist society.

The work argues that our use of digital media is grounded in old and new forms of exploited labour. Facebook, Twitter, YouTube, Weibo and other social media platforms are the largest advertising agencies in the world. They do not sell communication, but advertising space. And for doing so, they exploit users, who work without payment for social media companies and produce data that is used for targeting advertisements.

That this is more than the worry of a single writer is evident from a conference at the renowned Vienna University of Technology: “5th ICTs and Society-Conference: The Internet and Social Media at a Crossroads: Capitalism or Commonism? Perspectives for Critical Political Economy and Critical Theory.” It looks like more and more deep thinkers are wondering where technology-enhanced capitalism is heading!

Two disturbing trends are emerging:


(1) Subscriptions

Microsoft let it slip recently that it wants to release Windows 10, the next version of its operating system, on a subscription basis. While this may be positive for companies, since it creates a steady stream of income, it is bad news for consumers. As more companies take this direction, it will become much harder to change products or to opt out of upgrades: once you stop paying your subscription, the thing will stop working. And the Internet of Things promises that more ordinary household objects will go this way (cf. the vision presented in this post). It also means that your monthly statements will fill up with fixed costs, leaving you less financial flexibility.

(2) Data extraction

The automotive industry is lobbying hard to have cars send usage data directly to the manufacturer. This supposedly gives the consumer better service: garages no longer have to read the data out of the vehicle but can download it from the central servers instead. Not only does this raise serious questions about privacy and data protection, it also damages the consumer’s relationship with their car. Similar to the subscription strategy above, data produced by the machine will then be owned by the company – so, strictly speaking, it disenfranchises the user, who paid for the car. Even the current state of the art, where car data is stored in the vehicle’s memory, restricts the owner’s choice to licensed manufacturer repair shops. The new move would spell the end of independent garages, or bind them to licensing costs in order to access data from the manufacturer’s servers. In my experience, binding car owners to licensed garages has already driven up prices dramatically, and prices can be expected to rise further in this new environment for lack of independent competition.
