Legal issues


[Image: taxonomic rank graph]

I have always had hesitations about putting people in boxes. Although well intended as a way to support participation, the widening access agenda for HE encouraged and promoted this type of thinking. In order to help underrepresented social groups, measures were taken to support women, migrants, disabled people, people from rural backgrounds or poorer neighbourhoods, and so on. The remedies were aimed at these identified and defined social categories of deprived people. At the same time, this categorisation stigmatised entire social classes and helped discrimination stick along the lines of “box” values through the inherent and inevitable generalisations “disabled people/women/black people/migrants are…”.

It is important to note that any person can pass through several deprived categories during the course of their studies: a student may start as a single young woman, then get married, then become a single parent with a part-time job, and so on. Of course, anyone breaking a leg while skiing becomes temporarily disabled. So the fit between people and categories isn’t necessarily generally applicable.

The flip side is that measures to improve the situation of one category of people may also benefit others: a wheelchair ramp can be used by mums pushing prams or elderly ladies with shopping trolleys.

There is, however, an alternative to people categories! Anti-categorisation starts not with the person, but with the context and situation any person can find themselves in. The “special needs” concept comes closer to this than the category “disabled”. Defining scenarios that require support measures of one sort or another goes a long way towards more personalised student support, and hence towards providing more adequate help to those who need it.

For some time now, universities have been calling students “customers” and charging them ever-rising tuition fees. It seems this message has finally sunk in and is turning the relationship between students and their institutions on its head: students are now beginning to see the payment of fees as a contract to obtain a qualification in exchange for money.
With accelerating costs of study, students are no longer silently taking whatever is given to them. The marketing machine of modern HE, promising excellent services and the highest quality of study, is being scrutinised, and this carries the danger for HEIs of being challenged by dissatisfied customers who don’t feel they are receiving value for money. The consequences of this change in attitude can be seen, for example, in the case of a Swedish university college being sued by a US student whose course did not match the level of quality promised.
I had already noticed that mature students in particular were very wary of how they were serviced on a course. There was a distinct dislike of peer tutoring, peer assessment, flipped classrooms and other innovative models of teaching. They saw their payment as an entitlement to being taught by an “expert” teacher, not by other novices! Lectures delivered from the front were what they felt they had paid for, and it was quite difficult to change such expectations and to open them up to modern teaching and learning practices.
Following this research report, the THES summarises that “Universities are misleading prospective students by deploying selective data, flattering comparisons and even outright falsehoods in their undergraduate prospectuses”. The Guardian adds that the prospectus belongs to the “tourist brochure genre”, but that young people don’t always realise that.
Another possible legal battleground may involve implementations of learning analytics. It is quite conceivable that students may before long sue their university for not acting on the data and information the institution holds about them. Universities have a fiduciary duty towards students and their learning achievements. Improved learning information systems and data integration have the potential to ring alarm bells before a student drops out of a course. At least that is the (sometimes exaggerated) expectation some learning analytics proponents hold. Failing customers may now perhaps claim that the institution knew about their potential failure, but did not act on it.
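To make the “alarm bells” idea concrete, here is a minimal sketch of the kind of threshold rule a learning analytics system might run. It is purely illustrative: the field names, thresholds and sample records are my own invented assumptions, not taken from any real institutional system.

```python
# Hypothetical early-warning rule of the kind a learning analytics system
# might run; field names and thresholds are invented for illustration.

def at_risk(student):
    """Flag a student whose engagement data suggests they may drop out."""
    submission_rate = student["assignments_submitted"] / max(student["assignments_due"], 1)
    return (
        student["days_since_last_login"] > 14
        or submission_rate < 0.5
        or student["average_grade"] < 40  # percent
    )

students = [
    {"name": "A", "days_since_last_login": 21,
     "assignments_submitted": 1, "assignments_due": 4, "average_grade": 55},
    {"name": "B", "days_since_last_login": 2,
     "assignments_submitted": 4, "assignments_due": 4, "average_grade": 72},
]

for s in students:
    if at_risk(s):
        print(f"Alert: student {s['name']} may need support")
```

The legal question would then be whether an institution that generates such alerts, but does not act on them, has failed in its duty towards the student.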

Only now do I get to record the interesting presentation by Payal Arora at the IS4IS summit in Vienna in June 2015. Her talk “Big Data Commons and the Global South” put things into a new perspective for me.

Payal mentioned three attributes encapsulated in databases:

  • databased identity
  • databased demography
  • databased geography

These, in her opinion, strongly reflect power relations between the Developed and the Developing World. I fully agree with her that people in the “Global South” are typecast into identities not of their own choosing. There is a distinction between system identity and social identity, the former represented in Big Data, the latter in the local neighbourhood. According to Payal, the scaling of system identity information comes at the cost of social identity. That is to say, applying Big Data models built in the West transforms people and social relationships in the South.

Furthermore, she pointed out that Big Data does not support multicultural coexistence, which aims at the parallel existence of differing cultures. Instead, it brings about intercultural or integrated existence, in other words: assimilation. Big Data is not built to support diversity, and the question this raises is: who is shaping the architecture that does the describing?

India, which is the forerunner in Big Data biometrics, is under heavy criticism for storing billions of people’s biometric identities in databases. Does Big Data really facilitate the common good, or is it a deeper embedding of oppression, Payal asks. Let’s not forget that the people whose data is collected have no power to shape their digital data identities, and that the emerging economies (the BRIC countries) do not have personal data protection laws. There are also no contingency plans for data breaches (cf. the cyber battles going on between the US and China).

Criticism has also been voiced over “hi-tech racism”, for example regarding so-called “unreadable bodies” – people with cataracts or no fingerprints who cannot be biometrically identified. There is also a historical bias, and the development is partly seen as a revival of colonial surveillance practices, where the colonial powers used fingerprints as identifiers (since for them the locals all “looked the same”).

From a more economic standpoint, the move to Big Data in the Developing World drives inclusive capitalism, where (finally) the poor become part of the capitalist neoliberal world. This turns the “unusable” poor into a viable market, e.g. Facebook’s heavily criticised internet.org enterprise, through which the company wants to become the window to the world. Importantly, Payal mentions that these business models around Big Data for Development are largely based on failings of the state!


This is an interesting book by Christian Fuchs: Digital Labour and Karl Marx.

A quote from the description:

The book “Digital Labour and Karl Marx” shows that labour, class and exploitation are not concepts of the past, but are at the heart of computing and the Internet in capitalist society.

The work argues that our use of digital media is grounded in old and new forms of exploited labour. Facebook, Twitter, YouTube, Weibo and other social media platforms are the largest advertising agencies in the world. They do not sell communication, but advertising space. And for doing so, they exploit users, who work without payment for social media companies and produce data that is used for targeting advertisements.

That this is more than the worry of a single writer is evident from a conference at the renowned Vienna University of Technology: “5th ICTs and Society-Conference: The Internet and Social Media at a Crossroads: Capitalism or Commonism? Perspectives for Critical Political Economy and Critical Theory.” It looks like more and more deep thinkers are wondering where technology-enhanced capitalism is going!


Two disturbing trends are emerging:


(1) Subscriptions

Microsoft let it slip lately that they want to release Windows 10, the next version of their operating system, on a subscription basis. While this may be positive for companies, since it creates a steady stream of income, it’s bad news for consumers. As more companies take this direction, it will be much harder to change products or to opt out of upgrades. Once you stop paying your subscription, the thing will stop working. And the internet of things promises that more ordinary household objects will go this way (cf. the vision presented in this post). It also means that your monthly statements will fill up with fixed costs, leaving you less financial flexibility.

(2) Data extraction

The automotive industry is lobbying hard to have cars send usage data directly to the manufacturer. This supposedly gives the consumer better service: garages no longer have to read the data out of the vehicle, but instead download it from central servers. Not only does this pose serious questions about privacy and data protection, it also damages the consumer’s relationship with their car. As with the subscription strategy above, data produced by the machine will then be owned by the company – so, strictly speaking, it disenfranchises the user, who pays for the car. The current state of the art, where car data is stored in the vehicle’s memory, already restricts the owner’s choice to licensed manufacturer repair shops. The new move would spell the end of independent garages, or bind them to licensing costs in order to be able to access data from the manufacturer’s servers. In my experience, this kind of binding of car owners to licensed garages has driven up prices dramatically, and prices can be expected to rise further in this new environment for lack of independent competition.


Yes Scotland

If there’s one thing we can learn from the run-up to the Scottish referendum, it’s this: people are not tired of politics, as has often been claimed in and around general elections. They are tired of party politics!

Young voters especially are keenly involved in finding their way around the socio-political landscape. But what they want is direct democracy and voting on issues, not parties with wholesale agendas and manifestos, of which they like maybe 30-odd per cent or less. A similar trend could be observed earlier in the emergence of Pirate parties. Maybe an independent Scotland is an opportunity to provide this.


This day might, finally, mark the end of the egalitarian internet that we know. Neoliberal market ideas have won the long-standing battle over net neutrality, which dates back several years (see my posts in 2009, 2010, and 2011). A Federal Court decided against the FCC (Federal Communications Commission) and in favour of Verizon.

“The court today struck down the two most important net neutrality rules: one that prevented discrimination in favor of or against websites, and one against outright blocking.” (link)

With this ruling, neoliberalism, the free flow of market forces, would finally be able to take over the rule of the web. The potential implications are that you only see what you pay for, or: you can only be seen on the web depending on what you pay for!


The paradox of freedom in the digital world has long been apparent, but it hasn’t sunk in with many people. The oft-promoted idea that people are somehow freer in a digital world, and that computers “allow” for personalisation and more individual self-fulfilment in learning, working and playing, is simply false.

Just recently, I again read many avid bloggers bashing the one-size-fits-all education system while hailing online self-study. What they don’t realise is this: computers – even “adaptive” systems – only work on rules! These rules are set by programmers and always start by modelling the generic user and use case, the intention being to apply to every user entity in the same way and unequivocally (sounds like one-size-fits-all? – well, it is!). Even in the most adaptive of systems, the world of computers knows no individuals, nor does it recognise personal needs or wants. And most importantly: the rule sets are non-negotiable! This becomes most obvious when an online form doesn’t accept your postal address or cuts off your name because of some formatting rule in the text entry box.
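As a trivial illustration of how non-negotiable such rules are, here is a hypothetical sketch of form-validation logic of the kind many sign-up pages use. The length limit and the postcode pattern are my own invented assumptions, but they mirror the formatting rules that cut names short or reject perfectly valid addresses.

```python
import re

# Hypothetical form-validation rules; the 20-character limit and the
# five-digit postcode pattern are invented assumptions for illustration.
MAX_NAME_LENGTH = 20
POSTCODE_PATTERN = re.compile(r"^\d{5}$")  # assumes every postcode is five digits

def normalise_name(name: str) -> str:
    # The rule silently truncates anything longer than the field allows.
    return name[:MAX_NAME_LENGTH]

def postcode_is_valid(postcode: str) -> bool:
    # Perfectly valid postcodes (e.g. the UK's "EH8 9YL") fail this rule.
    return bool(POSTCODE_PATTERN.match(postcode))

print(normalise_name("Thiruvenkatanathapuram Subramaniam"))  # name cut off mid-word
print(postcode_is_valid("EH8 9YL"))                          # False: address rejected
```

Whatever the user’s actual name or address, the rule set decides; the individual has no way to negotiate with it.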

[Image: machine learning model]

Over time, to accommodate outliers in services like e-government, e-learning, e-business, etc., more detailed rules, models and exceptions have to be created and built into the system. This does not make the digital world more flexible, but it allows for more inclusion and, for some, a limited or illusory choice. It can hardly be called personalisation, as the power of change does not lie with the end user but with the system engineers.

Moreover, even with the best of intentions, the system engineers and designers of, say, learning platforms follow the logic and principles of the competitive market. Hence competing products, which are interpreted as “individual choice” and the basis for a personalised online experience, are all pretty much the same (e.g. travel portals, VLEs, online banking services, MOOC platforms). They compete for the same (mainstream) users in the same market segment. Personalisation, however, starts with the individual. Ask yourself, for example: where in an adaptive personal learning environment would you send pupils with behavioural difficulties, attention deficit, lack of confidence, disinterest, or dyslexia? Of course, you wouldn’t send them, because this is personal learning, so you couldn’t. But how would they recognise and remedy such weaknesses themselves? This obviously requires the understanding that socialisation is an essential part of the upbringing and education of citizens. Would computer systems recognise and adapt to their needs? Do the mainstream online systems cater for anything other than content transfer and verbal exchange? NB: ‘likes’ and social metadata are not part of human communication.

Another characteristic of these rules- and logic-based systems is the transparency of the individual that they create. Despite the strong political rhetoric calling for more transparency, I side with Byung-Chul Han in holding that the transparency of individuals is an instrument for controlling the masses and only serves those in power.

There is, however, one other tested and proven rules-based system that we know which, to my mind, does well in terms of personalisation and individual democratic freedom! As unlikely as this may sound, it is the legal code our democracies follow, whether it is based on case law or posited law. Unlike the digital world, this is more a boundary framework than a set of determining rules. It doesn’t anticipate or predict individual behaviour or requirements. You are more or less free to do what you like until you overstep the mark or a conflict arises. The digital world should be more like this.



This is a fascinating article. Although a rather challenging read, it’s well worth it. It provides great insight into how corporations turn open source and open content to their advantage! The strategies explained there kept clever, advanced tacticians like IBM and Microsoft out of the tech news, while the likes of Apple, Blackboard and others smashed each other’s heads in publicly with big patent clubs and court battle after court battle!

It goes on to dissect the Coursera MOOC agreement and explains that, behind the nice words about leaving all rights with the authors, a sinister enclosure is happening which renders the content valueless outside the Coursera platform:

“all right, title, and interest in and to enhancements made by Company to the Content in the form of translations, adaptations, captioning, encoding, transcripts or video annotations produced in response to accessibility requests (‘Content Enhancements’) will be exclusively owned by Company.”

In effect, Coursera claims any amendments and platform support for the MOOC, without which, of course, a MOOC isn’t quite the same. To paraphrase J. R. R. Tolkien: “One site to find them, one site to bring them all and in the darkness bind them”!


At a recent book sprint in London for an Open Education Handbook, I followed an interesting presentation by Phil Barker (CETIS) based on this article.

We all know about the many different interpretations and uses of the word “openness” and of “open something” (open source, open data, open educational resources, etc.). The interesting bit here is the moral interpretation of openness.

[Image: spectrum of openness graph]

One key point to note is that openness can be commercial. Patents can be “open” in that they can allow other developers to build and innovate on them. This means that “open” does not directly equate to “free of charge”, although it can. “Open” could also refer to the freedom to distribute. Press releases typically operate in this sphere, because their authors want their texts to be copied, distributed, and even rewritten by others, whose names will appear on the published article.

Where moral interpretations of openness get really interesting is on the dark side of the law. As the spectrum graph above suggests, and as we all know only too well, the legalities of content are quite often ignored. At the simplest and most common level, people just ignore IPR and distribution rights. Sharing music and other material with friends is common practice (and has been for a long time). This type of (ab)use is very common in education, for example when failing to reference the creator or author of an item, or when using copyright-protected materials in teaching.

Above and beyond this, producing illegal copies, counterfeiting products and/or claiming someone else’s work as one’s own are definitely more serious forms of brute-force “openness” that equate to theft. Yet it is quite common practice in academia for professors to claim IPR over student work or to put their names on students’ articles without any major contribution.


