Legal issues


Happy were the days when righteous Hollywood heroes persisted in their search for the truth despite all obstacles and deterrents.

The reality is, of course, very different. We live in the post-truth era, where (true) information is declining in value and belief is everything!

The bizarre thing is that we have more data than ever before, yet fewer facts can be deduced from it with certainty. Why is this? Let’s start with the economy, where data has long been used to find the best solution to anything, investment included. Economic data, despite its abundance, holds whatever answer someone searches for. Algorithms dictate the value of stocks, prices and futures. They change in milliseconds and interact with one another – they are connected. People cannot keep up with the speed of machines and have no way of understanding what triggers events in machine agents. They therefore resort to plausibility, trust and beliefs (such as brand value). Companies can no longer be valued in real worth or assets, only in virtual billions that could be halved the next day.

News is another information channel that has become unreliable as a source in the search for facts and “truth”. News media are profit-making organisations that feed on popularity, not on truthfulness. What’s popular doesn’t have to be true. Any scandal is better than the truth. News media love Trump because he provides them with popular stories. Trump loves the media because they give him publicity, and he doesn’t have to admit to anything they write about him, no matter how scandalous. Post-truth holds no proof. Even if the intelligence services were to publish the Russian link to the election hack, it could easily be refuted as fake news or a dark plot against him. Secret services are not employed to serve the truth, but to serve the enacted politics of the government (and their own interests). The electorate knows this. People only believe what they want to hear, to confirm their beliefs!

Even in criminal courts, finding an objective truth isn’t as easy as one would expect with all the forensic tools now available. Why else would we see this proliferation of criminal and terrorist activities with comparatively few convictions for lack of proof? The risk for criminals or extremists of getting caught and convicted is minimal – which itself sounds contradictory in the age of Big Data, ubiquitous surveillance, and integrated systems.

Far from finding an objective truth, even subjective truths are an endangered species. In a post-truth society an almost religious belief in perceived realities has developed that finds confirmation and amplification wherever data is abundant and where sources are better connected. Proof can no longer be provided.


This is an interesting thought: Tore Hoel and Weiquin Chen, in their paper for the International Conference on Computers in Education (ICCE 2016), suggest that the forthcoming European data protection regulation (GDPR), which is to be legally implemented in all member states by 2018, may actually drive pedagogy!

As unlikely as this may sound, I think they have a point. The core of the GDPR is the minimisation of data and the limitation of its use. This restricts data collection to specified purposes and prevents re-purposing. It puts a bar on the random collection of users’ digital footprints and on sharing (selling) them for other – not clearly declared – purposes. This restriction to minimisation and specific use will in turn (perhaps) lead to more focus on the core selling point, i.e. the pedagogic application of analytics.
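
To make the idea concrete, here is a minimal sketch (my own illustration, not from Hoel and Chen’s paper) of what purpose limitation and data minimisation might look like in a learning analytics data store: every datum is tagged with the purpose declared at collection time, and reading it for any other purpose simply returns nothing. All names are invented.

```python
# Illustrative sketch of GDPR-style purpose limitation; all names are invented.
from dataclasses import dataclass, field


@dataclass
class PurposeBoundStore:
    """Stores records tagged with the purpose declared at collection time."""
    _records: dict = field(default_factory=dict)

    def collect(self, user_id: str, data: dict, purpose: str) -> None:
        # Data minimisation: store only what the declared purpose needs.
        self._records.setdefault(user_id, []).append(
            {"data": data, "purpose": purpose}
        )

    def read(self, user_id: str, purpose: str) -> list:
        # Use limitation: requests for a non-declared purpose yield nothing.
        return [r["data"] for r in self._records.get(user_id, [])
                if r["purpose"] == purpose]


store = PurposeBoundStore()
store.collect("student-42", {"quiz_score": 0.8}, purpose="learning feedback")
print(store.read("student-42", purpose="learning feedback"))  # [{'quiz_score': 0.8}]
print(store.read("student-42", purpose="advertising"))        # [] – re-purposing blocked
```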

I have previously articulated my concerns that most institutions intending to use LA applications will have to rely on third parties, where, at present, it isn’t obvious that they comply with the Privacy by Design and by Default principles as demanded. In addition to making their case to educational customers about protecting the data of learners and teachers, these parties are now under more pressure to provide tools and services that actually improve learning, not revenue from advertising or data sharing. So, yes, I am optimistic that Tore and Weiquin are right in saying that this presents “an opportunity to make the design of data protection features in LA systems more driven by pedagogical considerations”!


I always have hesitations about putting people in boxes. Although well intended to support participation, the widening access agenda for HE supported and promoted this type of thinking. In order to help underrepresented social groups, measures were taken to support women, migrants, the disabled, people from rural backgrounds or poorer neighbourhoods, etc. The remedies were aimed at these identified and defined social categories of deprived people. At the same time, this categorisation stigmatised entire social classes and helped discrimination stick along the lines of “box” values, through the inherent and inevitable generalisations “disabled people/women/black people/migrants are…”.

It is important to note that any person can pass through several deprived categories during the course of their studies: a student may start as a single young woman, then get married, then become a single parent with a part-time job, and so on… Of course, anyone breaking a leg while skiing can be temporarily disabled. So the fit between people and categories isn’t necessarily generally applicable.

The flip side is that measures to improve the situation of one category of people may also benefit others: a wheelchair ramp can be used by moms pushing prams or elderly ladies with shopping trolleys.

There is, however, an alternative to people categories! Anti-categorisation starts not with the person, but with the context and situation any person can find themselves in. The “special needs” concept comes closer to this than the category “disabled”. Defining the scenarios that require support measures of one sort or another goes a long way towards more personalised student support, and hence towards providing more adequate help to those who need it.
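
A minimal sketch of this anti-categorisation idea (the scenarios and measures below are invented for illustration): support needs attach to the situations a student is currently in, not to a fixed category the student is put into, so entitlements change as circumstances change.

```python
# Illustrative only: scenario-based support instead of person categories.
SCENARIO_SUPPORT = {
    "caring for a child alone": {"flexible deadlines", "evening tutorials"},
    "temporarily unable to walk": {"ground-floor rooms", "recorded lectures"},
    "working part-time": {"asynchronous materials", "weekend labs"},
}


def support_measures(current_scenarios: list) -> set:
    """Derive support from the situations a person is in right now."""
    measures = set()
    for scenario in current_scenarios:
        measures |= SCENARIO_SUPPORT.get(scenario, set())
    return measures


# The same student's support changes with their situation – no "box" required:
print(support_measures(["working part-time"]))
print(support_measures(["working part-time", "temporarily unable to walk"]))
```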

For some time now, universities have been calling students “customers” and charging them ever-rising tuition fees. It seems this message has finally sunk in and is turning the relationship between students and their institutions on its head: students are now beginning to see the payment of fees as a contract to obtain a qualification in exchange for money.
With the accelerating costs of study, students are no longer silently taking whatever is given to them. The marketing machine of modern HE, promising excellent services and studies of the highest quality, is being scrutinised, and it carries the danger for HEIs of being challenged by unsatisfied customers who don’t feel they are receiving value for money. The consequences of this change in attitude can be seen, for example, in the case of a Swedish university college being sued by a US student whose course did not match the level of quality promised.
I have noticed before that mature students especially were very wary of how they were served on a course. There was a real dislike of peer tutoring, peer assessment, flipped classrooms and other innovative models of teaching. They saw their payment as entitling them to be taught by an “expert” teacher, not by other novices! Up-front lectures were what they felt they had paid for, and it was quite difficult to change such expectations and to open them up to modern teaching and learning practices.
Following this research report, the THES summarises that “Universities are misleading prospective students by deploying selective data, flattering comparisons and even outright falsehoods in their undergraduate prospectuses”. The Guardian adds that the prospectus belongs to the “tourist brochure genre”, but that young people don’t always realise that.
Another possible legal battleground may involve implementations of learning analytics. It is quite conceivable that students may before long sue their university for not acting on the data and information the institution holds about them. Universities have a fiduciary duty towards students and their learning achievements. Improved learning information systems and data integration have the potential to ring alarm bells before a student drops out of a course. At least, that is the (sometimes exaggerated) expectation some learning analytics proponents hold. Failing customers may then claim that the institution knew about their potential failure, but did not act on it.
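
For illustration, here is the kind of early-warning rule such proponents have in mind, as a deliberately simple sketch (the thresholds and field names are invented): once such a flag exists in the institution’s systems, so does a record of what the institution knew, and when.

```python
# Hypothetical early-warning rule for a learning analytics system.
def at_risk(record: dict,
            min_logins_per_week: float = 2.0,
            min_submission_rate: float = 0.5) -> bool:
    """Flag a student whose engagement falls below simple thresholds."""
    return (record["logins_per_week"] < min_logins_per_week
            or record["submission_rate"] < min_submission_rate)


students = [
    {"id": "s1", "logins_per_week": 5.0, "submission_rate": 0.9},
    {"id": "s2", "logins_per_week": 0.5, "submission_rate": 0.3},
]

alerts = [s["id"] for s in students if at_risk(s)]
print(alerts)  # ['s2'] – exactly the record a claimant might later point to
```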

There have been a number of recent setbacks in Learning Analytics implementations, among them the closure of the high-profile inBloom venture in the US. The cause of this is the increased wariness of users about their privacy. While most people enjoy the comfort of Amazon’s intelligent product recommendations or Facebook’s friend suggestions, people care where their data goes and what happens to it.

My friend and long-time colleague Hendrik Drachsler and I did a study into the fears and hesitations of learners and their guardians about Learning Analytics. We will present these findings at the LAK16 conference in Edinburgh later in April 2016, but our main finding is that there is genuine confusion between the commercial world and the academic world. Educational institutions have a much longer tradition of upholding ethics in research and keeping data private. However, the random collection of personal data, the selling-on of that data to third parties, and the repurposing of datasets – all of which happens outside user control! – by the for-profit commercial data giants Google, Facebook, Amazon, et al. cast their shadows on the mostly benevolent attempts of educational establishments, who see it as part of their fiduciary duty to provide intelligence gathered from learning data to students and teachers.

To tackle this issue, we are engaged in a quest for what we call “Trusted Learning Analytics”. This takes note of the fact that there can be no technical solution to this, nor should we rely on legal changes to “make things possible”. Our proposal to build trust in learning analytics relies mainly on openness, transparency, consent and user empowerment. As part of the LACE (Learning Analytics Community Exchange) project, we developed a guide called the DELICATE checklist – derived from a series of in-depth expert workshops – to help managers in the implementation of LA. You can also find the reference to the full article below the image.

[Image: the DELICATE checklist for establishing Trusted Learning Analytics]

The eight points are [It can be downloaded here LINK]:
1. D-etermination: Decide on the purpose of learning analytics for your institution.
2. E-xplain: Define the scope of data collection and usage.
3. L-egitimate: Explain how you operate within the legal frameworks, refer to the essential legislation.
4. I-nvolve: Talk to stakeholders and give assurances about the data distribution and use.
5. C-onsent: Seek consent through clear consent questions.
6. A-nonymise: De-identify individuals as much as possible.
7. T-echnical aspects: Monitor who has access to data, especially in areas with high staff turn-over.
8. E-xternal partners: Make sure externals provide highest data security standards.

The DELICATE checklist shows ways to design and provide privacy-conformant Learning Analytics that can benefit all stakeholders, keeping control with the users themselves and within the established trusted relationship between them and their institution. The core message is really simple: when you implement Learning Analytics – be open about it!
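
As one concrete illustration of point 6 (my own sketch, not part of the checklist itself): a common first de-identification step is to replace stable identifiers with keyed pseudonyms, so analysts never see raw student IDs. Note that this is pseudonymisation rather than full anonymisation – whoever holds the key can re-identify, which is why it must be combined with the access monitoring of point 7. The key and field names below are invented.

```python
# Sketch of keyed pseudonymisation for learning analytics event data.
import hashlib
import hmac

SECRET_KEY = b"keep-this-in-the-institutional-vault"  # illustrative placeholder


def pseudonymise(student_id: str) -> str:
    """Replace a student ID with a keyed, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()[:12]


event = {"student_id": "jane.doe@uni.example", "action": "viewed_lecture_3"}
safe_event = {**event, "student_id": pseudonymise(event["student_id"])}
print(safe_event)  # the analyst sees only the pseudonym
```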


Only now do I get to write about the interesting presentation by Payal Arora at the IS4IS summit in Vienna in June 2015. Her talk “Big Data Commons and the Global South” put things into a new perspective for me.

Payal mentioned three attributes encapsulated in databases:

  • databased identity
  • databased demography
  • databased geography

These, in her opinion, strongly reflect the power relations between the Developed and the Developing World. I fully agree with her that people in the “Global South” are typecast into identities not of their own choosing. There is a distinction between system identity and social identity, the former represented in Big Data, the latter in the local neighbourhood. According to Payal, the scaling-up of system identity information comes at the cost of social identity. This is to say that applying Big Data as modelled in the West transforms people and social relationships in the South.

Furthermore, she pointed out that Big Data does not support multicultural coexistence, which aims at the parallel existence of differing cultures. Instead, it brings about intercultural or integrated existence – in other words: assimilation. Big Data is not built to support diversity, and the question this raises is: who is shaping the describing architecture?

India, which is the forerunner in Big Data biometrics, is under heavy criticism for storing over a billion people’s biometric identities in databases. Does Big Data really facilitate the common good, or is it a deeper embedding of oppression, Payal asks. Let’s not forget that the people whose data is collected have no power to shape their digital data identities, and the emerging economies (the BRIC countries) do not have personal data protection laws. There are also no contingency plans for data breaches (cf. the cyber battles going on between the US and China).

Criticism has been voiced of “hi-tech racism”, for example with regard to so-called “unreadable bodies” – people with cataracts or without fingerprints who cannot be biometrically identified. There is also a historical bias, and the development is partly seen as a revival of colonial surveillance practices, where the colonial powers used fingerprints as identifiers (since, to them, the locals all “looked the same”).

From a more economic standpoint, the move to Big Data in the Developing World drives inclusive capitalism, whereby (finally) the poor become part of the capitalist, neoliberal world. This turns the unusable poor into a viable market – e.g. Facebook’s heavily criticised internet.org enterprise, through which the company wants to become the window to the world. Importantly, Payal notes that these business models around Big Data for Development are largely based on failings of the state!


This is an interesting book by Christian Fuchs: Digital Labour and Karl Marx.

A quote from the description:

The book “Digital Labour and Karl Marx” shows that labour, class and exploitation are not concepts of the past, but are at the heart of computing and the Internet in capitalist society.

The work argues that our use of digital media is grounded in old and new forms of exploited labour. Facebook, Twitter, YouTube, Weibo and other social media platforms are the largest advertising agencies in the world. They do not sell communication, but advertising space. And for doing so, they exploit users, who work without payment for social media companies and produce data that is used for targeting advertisements.

That this is more than the worry of a single writer is evident from a conference at the renowned Vienna University of Technology: “5th ICTs and Society-Conference: The Internet and Social Media at a Crossroads: Capitalism or Commonism? Perspectives for Critical Political Economy and Critical Theory.” It looks like more and more deep thinkers are wondering where technology-enhanced capitalism is going!


Two disturbing trends are emerging:


(1) Subscriptions

Microsoft let it slip lately that they want to release Windows 10, the next version of their operating system, on a subscription basis. While this may be positive for companies, since it creates a steady stream of income, it’s bad news for consumers. As more companies take this direction, it will become much harder to change products or to opt out of upgrades. Once you stop paying your subscription, the thing will stop working. And the internet of things promises that more ordinary household objects will go this way (cf. the vision presented in this post). It also means that your monthly statements will fill up with fixed costs, leaving you less financial flexibility.

(2) Data extraction

The automotive industry is lobbying hard to have cars send usage data directly to the manufacturer. This supposedly gives the consumer a better service: garages no longer have to read out the data from the vehicle, but can instead download it from central servers. Not only does this pose serious questions about privacy and data protection, it also damages the consumer’s relationship with their car. As with the subscription strategy above, data produced by the machine will then be owned by the company – so, strictly speaking, it disenfranchises the user, who pays for the car. Even the current state of the art, where car data is stored in the vehicle’s memory, restricts the owner’s choice to licensed manufacturer repair shops. The new move would spell the end of independent garages, or bind them to licensing costs in order to be able to access data from the manufacturer’s servers. In my experience, this type of binding car owners to licensed garages has driven up prices dramatically, and prices can be expected to rise further in this new environment for lack of independent competition.


Yes Scotland

If there’s one thing we can learn from the run-up to the Scottish referendum, it’s this: people are not tired of politics, as has often been claimed in and around general elections. They are tired of party politics!

Young voters especially are keenly involved in finding their way around the socio-political landscape. But what they want is direct democracy and voting on issues, not parties with wholesale agendas and manifestos of which they like only some 30-odd per cent or less. A similar trend could be observed earlier in the emergence of the Pirate parties. Maybe an independent Scotland is an opportunity to provide this.


This day might, finally, mark the end of the egalitarian internet as we know it. Neoliberal market ideas have won the long-standing battle over net neutrality, which dates back several years (see my posts in 2009, 2010, and 2011). A federal court decided against the FCC (Federal Communications Commission) and in favour of Verizon.

“The court today struck down the two most important net neutrality rules: one that prevented discrimination in favor of or against websites, and one against outright blocking.” (link)

With this ruling, neoliberalism – the free flow of market forces – will finally be able to take over the rule of the web. The potential implications are that you only see what you pay for, or: you can only be seen on the web depending on what you pay for!

