There have been persistent calls for transparency and algorithmic accountability in learning analytics. Most recently, the topic came up for discussion at an LASI event in Denmark.

There are good arguments for more transparency in developing and delivering learning analytics products. Presumably, teachers can derive better-informed interventions from visualisations of learning data when they understand what goes in, how it is weighted and processed, and what comes out.
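To make concrete what "what goes in and how it is weighted" could mean in practice, here is a minimal, purely hypothetical sketch of a transparent engagement score. The signal names, weights and threshold are illustrative assumptions, not taken from any real product or study.

```python
# Hypothetical sketch of a "transparent" learning-analytics score:
# the teacher can inspect which signals go in, how each is weighted,
# and what threshold triggers a flag. All names and numbers are
# illustrative assumptions.

WEIGHTS = {
    "logins_per_week": 0.2,        # normalised 0..1
    "forum_posts": 0.3,            # normalised 0..1
    "assignments_submitted": 0.5,  # normalised 0..1
}

def engagement_score(signals: dict) -> float:
    """Weighted sum of normalised activity signals."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

def flag_at_risk(signals: dict, threshold: float = 0.4) -> bool:
    """Flag a student when the transparent score falls below the threshold."""
    return engagement_score(signals) < threshold

# Example: every input and weight is visible, so the output is explainable.
student = {"logins_per_week": 0.6, "forum_posts": 0.1, "assignments_submitted": 0.5}
print(engagement_score(student), flag_at_risk(student))
```

A commercial product would of course be far more complex than this; the point of the sketch is only to show the level of openness that a "fully transparent" pipeline would imply.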

However, the discussion also moved very much in the direction of "personalised learning analytics", raising questions like "at what point of comprehension educators are happy to trust products", or whether this might be achieved if "the analytics system demonstrates to your satisfaction that it is attending to the same signals that you value". It goes on to challenge vendors (and researchers), arguing that "information should be available and understandable to different kinds of learning scientists and learning professionals". Ulla Lunde Ringtved asks: "do we need a kind of product declaration and standardization rules to secure user knowledge about their systems?"

I think this is going too far, with little hope of success and little value to end users. After all, we are talking about "products", i.e. ready-made things. Vendors would not and could not deliver out-of-the-box, build-your-own, tweak-the-data, customise-the-algorithmic-process learning analytics tools. And data consumers would not want that! Teachers and students are surrounded by black boxes of all kinds, including Google, Blackboard and other VLEs, Facebook, etc. There is evidence that a lack of transparency does not correlate with a lack of trust. In our lives, we don't understand most of the tools that we use: the digital camera, the electronic alarm clock, and so forth. And we don't have to! As long as they work.
