Learning Design



Ha! Finally, a study that confirms what was general knowledge among non-decision-makers anyhow: student evaluations of their teachers have no correlation with their learning. Who would have guessed?! Filling in a couple of questions at the end of term indicates, if anything, popularity at most, not quality or progress. Male teachers seem to fare better overall, confirming a gender bias.

I particularly like this part:

“The entire notion that we could measure professors’ teaching effectiveness by simple ways such as asking students to answer a few questions about their perceptions of their course experiences, instructors’ knowledge and the like seems unrealistic given well-established findings from cognitive sciences such as strong associations between learning and individual differences including prior knowledge, intelligence, motivation and interest. Individual differences in knowledge and intelligence are likely to influence how much students learn in the same course taught by the same professor.”

and this:

“Currently, there is massive production of unnecessary, misleading, and conflicted systematic reviews and meta-analyses. … these instruments often serve mostly as easily produced publishable units or marketing tools.”

I would add that it is also a miserable waste of valuable staff and student time and creates an anxiety that undermines learning.


Personalisation is often hailed as a remedy for the “one-size-fits-all” teaching approach. The idea of personalised learning is tightly connected to technology, because it is generally accepted that human resources are limited and not scalable to a one-on-one teaching ratio. Of course, the semantics of technology-enabled personalisation differ completely from those of human-to-human personal interaction. In technical terms, it translates into behaviour adaptation to facilitate human-computer interaction (such as adhering to technical interoperability standards) or computer-driven decision making (as in “smart” or “intelligent” tools). While this perhaps has its merits in terms of efficiency of learning, it is a galaxy apart from human personalisation, which is based on things like boundary negotiations, respect, or interpersonal “chemistry”. It remains to be seen how the idea of “personalisation” can develop without sacrificing human flexibility and societal congruence. Here are four oft-encountered myths around personalisation:

(1) Personalisation is scalable

It is difficult to believe that technology can somehow serve the individual better than a human teacher. Yes, it can serve more people at the same time, but this doesn’t necessarily suit all people on a personal level. A case in point are MOOCs: large (massive) participation numbers, served by technology dishing out educational resources. Do the learners feel personally catered for? Probably not, as the high drop-out rates and the recent introduction of “flesh-and-blood teachers” by MIT would suggest. Maybe MOOCs are scalable, but they aren’t a good example of personalisation beyond allowing time/space/pace flexibility. More generally, we can question whether industrialised personalisation, the mass production of individual learning, will ever work.

(2) Personalisation makes better learners

Learning isn’t driven by intrinsic virtues alone. One of the key learning theories, Vygotsky’s zone of proximal development, argues strongly that humans can excel with the help of others. It’s pushing the boundaries that makes them better learners. Personalisation in the sense of letting everyone learn what they would naturally and intrinsically learn has been tried in schooling experiments for quite some time, with rather poor results. Some good things, like serendipitous learning, only happen if there are external stimuli. Corporate knowledge and services, too, could not be upheld if learning were completely individualised. Furthermore, personalised learning doesn’t normally include “learning to learn” components.

Putting the individual in the foreground may be a nice line to present in technology-enhanced learning, but it often misses the socialisation aspects of learning that are required for forming a coherently educated democratic society. Human interaction with computer agents will not lead to better citizens, since it neglects that aspect of socialisation (not to be confused with social, as in “social networks”). Socialisation involves the development of competences such as tolerance, respect, politeness, agreement, group behaviour, team spirit, etc. Computer agents, on the other hand, are driven by mediocrity, algorithms and rules that are non-negotiable. You cannot argue with an “intelligent” machine about how to come to a suitable compromise.

(3) Personalisation makes society better and more equal

Personalising the experience of individual learners does not make learning more relevant to them. As we see in many instances, like personalised search engines, it leads to more isolation instead of more congruence with others. This leads away from the commons and the common good. It is comparable to mass-producing Randian heroes of selfish desires, hence I cannot see a benefit for society or for equal opportunities.

(4) Abolishing marks makes learning more personal

Learning without pressure and comparison is a noble idea, but it contradicts human nature. We are social animals and live by interacting with, and counteracting, other parts of our environment. Game theory tells us that competing with others, against time, or even with ourselves is hard-wired into the oldest parts of our brains. We humans need position. We need to know how we compare to others, and others need to know it too. Taking school grades away will not make learning more personal in the sense of being more self-directed and left to your own devices. External pressure is sometimes needed to grow into a challenge.

Even if technical support for personal learning needs did work, we have to ask where this might lead us. Our societies are based on commonly agreed educational standards, such as levels or qualifications reached, or the grading system. This is not to defend these structures, but if we abolish or change them, something else will have to take their place. Society needs a standardised educational currency to distinguish expertise from pretence. Competence levels and badges are alternative approaches, welcome in their concept, reach and effect, but they are yet another standardised educational structure.


This article mentions an inherent flaw in the current educational environment. I am less concerned about the “data” issue mentioned than about a teacher’s own professional hygiene and ethics. I therefore fully agree with the statement:

passing a failing student is the #1 worst thing a teacher can do

Clearly, such an attitude is unfair towards the students who actually work hard, struggle and pass, or the ones who are really excellent. The comparison with bookkeepers fiddling the books or doctors manipulating patient records may be overly dramatic, but since teachers deal with the future of students, it is still a serious issue, and one that has seen an almost epidemic rise in recent years.

There are substantial pressures to show off high achievement numbers. Partly this is due to a general culture of leniency, partly to performance monitoring of teachers by their institution, and partly to political goals to further participation and combat drop-outs. None of these, in my opinion, favours the learners (failing and passing alike) or the credibility of the education system. It also renders any statements about learning outcomes meaningless.

For some years now, I have observed that students are “waved through” stages of education with the attitude that this causes no harm, since the next stage will solve the issue. Only it doesn’t, and just passes the buck. This happens throughout the compulsory schooling age. Yes, it leads to higher participation numbers in Higher Ed, but all too often also to missing skills and knowledge when students start. So universities and FE colleges need to teach the basics from scratch or invest in remedial work. The effort that goes into this is naturally restricted by funding and by the time staff have available, leading to the same reaction of making it somebody else’s problem (i.e. the future employer’s), by waving students through. The entire development can be summed up as: longer study years – less learning (cf. also my post on the recent Hattie study). And, I might add, this takes place despite the most modern technologies, teaching methods, and rhetoric around lifelong or self-directed learning. Clearly, this does not instil trust in the professionalism of the education system among industry partners, which leads them, in turn, to call for taking matters into their own hands.

Talking to a colleague about the issue revealed an interesting perspective that plays into this: young teachers are more concerned about their popularity with students, so good student evaluation results matter more to them. More experienced colleagues pay more attention to quality and are prepared to live with lower popularity ratings.

The gist of the matter is that we need to strengthen the professional responsibility of both learners and teachers. Only when students identify with learning as a profession will they be able to appreciate the progress they make and reflect on the challenges they encounter.


My response to this article in the HE Chronicle. MIT’s reaction to the high number of drop-outs in MOOCs is to ask whether the problem lies not with the mode but with the format: providing courses (or semesters) instead of what learners seek, i.e. bits of learning:

“People now buy songs, not albums. They read articles, not newspapers. So why not mix and match learning “modules” rather than lock into 12-week university courses?”

MIT now consider offering modules instead of courses. This, at least in Europe, is nothing new. I was involved in the modularisation of courses almost two decades ago, the intent then being to provide more flexibility and interdisciplinarity within courses, and efficiency in their delivery. There is nothing wrong with that, as experience has shown. However, what I read in the article is that the motivation for modularisation is demand-driven only, i.e. “what students want”. Concluding that because students like it short it is automatically better and more successful is wrong!

I have long been arguing that there is a difference between learning and education. Yes, learning leads to education, but it’s the whole that counts. Unless the modules are connected (by design, within the framework of, say, a course!), learning individual chunks of domain knowledge may be personal, may be enough if you just want to refresh your memory and already have sufficient other knowledge, may be satisfying, but it isn’t enough to qualify as expertise. This is where learning and education differ: an educational qualification includes things that you didn’t consider learning. Leaving learning to the fancies of the student alone isn’t going to empower them. It leads to the much-criticised graduates who can’t read properly, who skipped literature or grammar or cultural studies in language learning, etc. – in other words, people who don’t grasp the bigger picture and are therefore at a disadvantage.

Here’s another example: just learning to drive a car isn’t quite enough to qualify for modern road traffic, though essentially that is what learners want to achieve. Who ever enjoyed studying the highway code or the technical details of a car? Taking a single driving-lesson module alone isn’t going to cut it. Sure, one can always argue about which parts of a curriculum are obligatory and which aren’t, but I think that, in the bigger picture, we have mechanisms that regulate this rather successfully and with the variety needed to provide for personal choice across the educational landscape.

I for one would not like to fly with a pilot who just studied the take-off procedures but no landing…!

 

PS: I acknowledge that for MOOCs the approach of micro-learning and modularisation may be acceptable when looking at a primarily CPD audience interested in updating or refreshing their existing knowledge base. This, as I have always argued, is the key sector for MOOCs (cf. also Diana Laurillard’s recent analysis in THES).

 


I just completed the twelve-week massive open online course (MOOC) on Complex System Science (Complexity Explorer). It has been a very interesting and enjoyable experience.

[Image: SFI course certificate]

This is not the first MOOC I have attended, or at least registered for, so it also led to some general reflection on which of the MOOCs I liked and why. The outcome was quite surprising to me, since it turned out that I do enjoy conservative teaching methods. This, however, is not the full picture, and other factors emerged as important favourable conditions.

As an indicator of a positive MOOC experience I took the fact that I did not drop out and succeeded in completing the entire course. And here I have to say that in many a MOOC I showed an initial interest (e.g. MobiMOOC, EduMOOC, etc.), but it vanished during the run of the course, sometimes already in the start-up phase. Whether I dropped out or not depended mostly on time constraints and effort-versus-benefit considerations, i.e. how much I get out for the time I put in. This led to a priority level for each course, and in many cases the MOOC’s priority over other parts of life was simply too low to persist.

In this post, I won’t go into the debate on what is a MOOC and what isn’t. Let’s just say it’s an open online course covering a specific topic over a (longer) period of time with some sort of syllabus structure. This distinguishes it from a one-off webinar or online hangouts, etc.

To cut to the chase, the MOOCs I enjoyed and completed were CCK11, LAK11, a Yale astrophysics video lecture course, and the SFI course on Complex System Science.

Note that the courses were of a very different nature: CCK11 and LAK11 were so-called connectivist MOOCs, whereas the Yale and SFI courses were simple video deliveries of the lecture kind. In the first two, I enjoyed the community aspect of the course. It brought me into contact with like-minded people from other parts of the world in a vivid exchange and led to lasting connections. In the video courses, it was the self-timing component that enabled me to complete. While the former contained some timetabled events, such as weekly debates, the latter were completely free of timetabling. Even after losing a week or two, I was able to catch up and get back on top.

Yes, the social component in the Yale and SFI courses was underdeveloped or missing (or not used by me), but this did no harm to my learning. I want to emphasise that I do not quantify my learning into measurable chunks of increased competence or knowledge units. It is merely the feeling of satisfaction at having learned something new of value to myself (be it professional or simply interesting).

What made the courses worth my time and effort? I thought long and hard about this, and about why it was these courses that I completed successfully and not others. What were the commonalities, despite them being almost diametrically different in style, purpose and delivery?

The most important criterion I could distil is a deep personal interest in the respective topic (Astrophysics, Learning, Complex Systems). This was an absolutely essential initial motivator to get me onto the journey.

Secondly, an inspiring expert, enthusiastic about the subject they present. This amplified my initial interest and kept me going. I also have a great interest in quantum physics, but sadly haven’t yet found the enthusiastic provider to bring it about.

Thirdly, a non-threatening environment in which to learn. Even though some topics were extremely challenging, I credit the presenters with this important attribute.

Finally, the amount of Shannon information content. Shannon’s information theory describes, among other things, the amount of interesting newness and surprise in a piece of information. All four courses contained a high level of newness for me. It also has to be said that follow-up courses, e.g. LAK12, decreased rapidly in this respect and turned information into noise. Once the level of noise over new information becomes unreasonable, I lose interest very quickly and the effort-benefit ratio turns negative.
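As a rough illustration of that last criterion (my own sketch, with made-up probabilities): Shannon’s information content, or surprisal, of an event with probability p is −log2 p, so the less expected a piece of course content is, the more bits of “newness” it delivers.

```python
import math

def surprisal(p: float) -> float:
    """Shannon information content (in bits) of an event with probability p."""
    return -math.log2(p)

# Content I largely expect (e.g. a repeat of last year's material)
# carries little information...
print(round(surprisal(0.9), 2))   # ~0.15 bits
# ...while genuinely unexpected content carries a lot.
print(round(surprisal(0.05), 2))  # ~4.32 bits
```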


The entire debate about electronic learning, MOOCs and open education hinges on one central question: why do we need teachers? The added value that teachers bring to learners is sadly often misunderstood. For this reason, they are increasingly set up to be replaced by technologies in different guises and roles, for example data algorithms that aim to substitute for human judgement, or multiple-choice tests instead of continuous qualitative assessment.

It’s time to think about the qualities a teacher needs to have and where they outperform computers, often by miles:

Psychology: especially where family relations are strained or difficult, teachers are often the first (adult) advisors for children in trouble. In other cases too (relationship break-ups, unease about oneself, etc.), the experienced teacher is the most likely to notice and be able to put a finger on the problem.

Knowledge and Enthusiasm: computers and the Internet contain loads and loads of collective human knowledge (including piles of worthless garbage), but they don’t contain the wisdom and competence to act on this knowledge. They are also incapable of enthusiasm for a subject discipline – hence they are unable to instil excitement in the learner.

Gut feeling and empathy: “a feeling is worth a thousand datasets” (I don’t know who said that, but it should have been said by someone important). Even without being able to articulate and quantify the multitude of granular circumstances that play a part in a learner’s life, a good, observant teacher in direct contact with a learner gets a feel for where they are and can pick them up from there. They are able to understand and factor in when and why a learner is distracted, puzzled, or otherwise limited in progressing. Teachers are able to show empathy and understanding for the situation and in most cases are able to mediate it. Note carefully that this complements and goes beyond the help that peers can provide.

Pedagogic qualities and qualifications, therefore, necessarily emphasise not only the knowledge and competences of a teacher in a given subject area, but also their interpersonal aptitude and mental stability. Teachers, nowadays more than ever before, need to be able to cope with criticism from parents, politicians and even CEOs and other outsiders. They need to be able to see through the eyes of the learner and balance their interests with the general context.

Given these demands, it’s clear that not everyone is suited to be a teacher. Allthemore concerning is the fact that these scarce human resources are not given the attention, opportunities and acknowledgement they deserve, in a world that’s drifting to become more like an industrial factory floor dominated by forms and robots than by human conversation.


HEFCE has announced GBP 5.7m of funding for projects opening up quality resources for teaching and learning in Higher Education. I cannot help feeling the irony of yet another chunk of money being thrown at something that has yet to prove there is demand for it.

After the Hewlett Foundation funded MIT and the OU with substantial multimillion grants for their respective Open Course Ware initiatives, this looks like another attempt to stimulate the free sharing of educational resources and courses. But why should it be any more successful and sustainable than previous attempts?


It may sound far-fetched, but Moodle already contains most of the components of a Learning Design editor. A course in Moodle can become a Unit of Learning (UoL) in LD speak. Its topic or week structure can be interpreted as Acts, and at the same time acts as an Environment where Resources and Services are made available to the learner. Acts are sequential in that the entire cohort (Roles) needs to complete the tasks within them (Activities) before proceeding. Activities within individual Acts can be set as HTML text instructions (e.g. read the following piece of text).

I haven’t tried this, but I think Roles can be assigned to individual activities, resources or services. This all leaves Moodle at least within reach of IMS LD Level A, and Levels B and C may not be too far off.

There is, of course, a difference in terminology, and what Moodle calls a resource or an activity isn’t the same as in IMS LD, but this isn’t critical for the user front-end and can be translated by an LD-compliant back-end, or when exporting to LD XML.
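To make the mapping concrete, here is a minimal sketch of what such an export could look like. The Moodle course is just an illustrative dictionary, and the element names only loosely follow the IMS LD information model – a real export would need the proper namespaces, identifiers and resource references.

```python
import xml.etree.ElementTree as ET

# Illustrative stand-in for a Moodle course: topic sections become Acts,
# items within a section become Learning Activities.
moodle_course = {
    "title": "Introduction to Learning Design",
    "topics": [
        {"name": "Week 1", "activities": ["Read the intro text", "Post to the forum"]},
        {"name": "Week 2", "activities": ["Watch the lecture", "Submit assignment 1"]},
    ],
}

def to_ld_sketch(course: dict) -> ET.Element:
    """Translate the course structure into an IMS-LD-like nested document."""
    ld = ET.Element("learning-design", {"title": course["title"]})
    components = ET.SubElement(ld, "components")
    activities = ET.SubElement(components, "activities")
    method = ET.SubElement(ld, "method")
    play = ET.SubElement(method, "play")
    for i, topic in enumerate(course["topics"]):
        act = ET.SubElement(play, "act", {"title": topic["name"]})
        for j, task in enumerate(topic["activities"]):
            ref = f"LA-{i}-{j}"
            ET.SubElement(activities, "learning-activity",
                          {"identifier": ref, "title": task})
            # A role-part ties the whole cohort (the "Learner" role) to the activity.
            ET.SubElement(act, "role-part",
                          {"role-ref": "Learner", "learning-activity-ref": ref})
    return ld

print(ET.tostring(to_ld_sketch(moodle_course), encoding="unicode"))
```

The point is simply that the topic/activity structure a teacher already builds in Moodle carries enough information to populate the method–play–act skeleton automatically.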

The advantage is that Moodle is a design editor that is close to teachers’ workflows and thinking (one major reason why it has been so successful). Resources and activities can be created straight away, and a runtime exists as soon as real learners are assigned to the roles provided. Courses are portable and shareable, and you can apply changes at runtime (e.g. add a new activity).


While reviewing the new 1.5.7 version of the ReCourse LD authoring tool, I was struck again by how unergonomic the specification is. This may explain why IMS LD has not made a breakthrough in the e-Learning industry since it was published in 2003. Apart from a few programming enthusiasts living off public funding through the likes of JISC, the EU, or similar projects, no serious development has happened that would have made a difference in the arena of VLE products.

First the teachers were blamed for not being able to express their pedagogy in a standard-compliant format, then the developers for not producing easy enough tools. In the end, though, it is the specification that’s the problem. Its hierarchy is way too deep and wrongly arranged for real design practice.

In IMS LD, the author needs to set up a method – play – act – activity structure for their UoL (Unit of Learning) before they can start on the activities themselves. This is putting the cart before the horse: most teachers I work with start with the activities and never need the upper layers.
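To illustrate the depth of that hierarchy, here is my own rough rendering of the nesting (not a literal manifest; the names only loosely follow the spec):

```python
# What most teachers I work with actually start from: a flat list of activities.
teacher_view = [
    "Read the case study",
    "Discuss it in groups",
    "Write a short reflection",
]

# What IMS LD asks the author to scaffold first, before the first activity
# can even be referenced: method > play > act > role-part.
ims_ld_view = {
    "method": {
        "play": {
            "act": {
                "role-part": {
                    "role-ref": "Learner",
                    "learning-activity-ref": "Read the case study",
                },
            },
        },
    },
}

# Four structural layers sit above the first activity the teacher cares about.
```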

The trouble with tools like ReCourse (and Reload before it) is that you spend a long time filling in boxes and structures which, in the end, just creates an empty shell with no users and little if any content or services, presented in an unattractive way such as plain-text HTML files. What you get is a nothing that you would not want to use with students, but it is understood by machines, interoperable and reusable – hallelujah!


A couple of posts back, I mentioned a new generation of LD editors. I have now been privileged to preview another new LD authoring environment, code-named ReCourse. Here’s a screenshot:

[Screenshot: the ReCourse editor]

ReCourse is based on the well-known Reload editor, which was (and still is) the reference implementation for IMS LD. One obstacle to wider uptake of Reload was that it is difficult to use – too difficult for the average educator with limited technical skills. ReCourse aims to resolve this. In an initial evaluation, it came out clearly on top of Reload in terms of usability.

The main advantage of ReCourse is its graphical drag-and-drop interface, which at the same time provides the full range of features available in IMS LD. In the left frame you can maintain a kind of portfolio of your designs or learning objects. In future, it is said, it will also allow easy reuse of parts of UoLs within different designs and even sharing them with others.

It is well recognised in the literature that it is extremely difficult to break the complex structure of IMS LD down to a level ordinary users can understand. What is needed is to hide this complexity from the user’s actions. ReCourse shows clear improvement in this direction, but isn’t there just yet.

Despite the positive first impression, and with little time to explore further, one thing I still miss in this prototype is the “story telling” aspect. I still find the interface too technically structured and not self-explanatory. It is not clear where to start, what to do next, or when you’re done. The “overview” is an overview of IMS LD rather than of your own design. By story telling, I mean that you want to open someone else’s design and be able to understand immediately what it is about (like a story line), not delve into the guts of database tables and URIs to piece together a puzzle.

[Screenshot: the ReCourse properties panel]

The properties panel of objects and roles (picture above) does not invite you to add your narrative to it (e.g. ‘here I want students to…’), although this may be squeezed into some field called title, resource, or parameters. The system allows you to annotate elements in the main panel, but these are marginalia rather than the main story. My recommendation to the developers would be to envisage the design as one teacher telling another what they are doing in their class – this story needs to be expressed in the tool in the most economical way.

