
Technology Enhanced Knowledge Research Institute (TEKRI)

TEKRI blogs

Open Learning Analytics

elearnspace (George Siemens) - April 11, 2014 - 12:15

The future of systems such as business, government, and education will be data centric. Historically, humanity has made sense of the world through discourse, dialogue, artifacts, myth, story, and metaphor. While those sensemaking approaches won’t disappear, they will be augmented by data and analytics.

Educators often find analytics frustrating. After all, how can you analyze the softer aspects of learning? Or can analytics actually measure what matters instead of what is readily accessible in terms of data? These are obviously important questions. Regardless of how they are answered, however, ours is a data-rich world and will only continue to become more so. All educators need to be familiar with data and analytics approaches, including machine and deep learning models. Why does it matter? Well, to use a Ted Nelson quote that Jim Groom used during his excellent talk at Sloan-C this week, it matters “because we live in media as fish live in water”. Power and control decisions are being made at the data and analytics level of educational institutions. If academics, designers, and teachers are not able to participate in those conversations, they essentially abdicate their voice.

About five years ago, a few colleagues (Shane Dawson, Simon Buckingham Shum, Caroline Haythornthwaite, and Dragan Gasevic) and I got together with a great group of folks and organized the 1st International Conference on Learning Analytics and Knowledge (complete with a logo that any web user of the 1990s would love). Our interest was driven primarily by the growing influence of data on educational decisions, and by the fact that no empirical research community existed to respond to the bold proclamations vendors were making about learning analytics. Since then, a community of researchers and practitioners has developed. The Society for Learning Analytics Research was formed, hosting summer institutes, our annual conference, a journal, and a distributed doctoral research lab.

Today we are pleased to announce two new initiatives that we feel will raise the quality of learning analytics, increase transparency around data and algorithms, and create an ecosystem where results can be shared, tested, and validated:

1. Open Learning Analytics. This initiative is based on a paper (.pdf) that we published several years ago. After significant behind-the-scenes work, we are now ready to formally announce the next steps of the project. See here for the press release and project scope.

2. Learning Analytics Masters Program (LAMP). The number of masters programs offering learning analytics courses, streams, or certificates is increasing, and several institutions are in the process of developing a masters in learning analytics. To help provide quality curriculum and learning resources, we have launched LAMP: an open access, openly licensed learning analytics masters program. Institutions will be able to use/remix/do whatever with the content in developing their masters programs. Our inaugural meeting is being held at Carnegie Mellon University in a few weeks to kick off this project and start developing the course content.

If data is the future of education and educational decision making, and in many ways it is, I believe openness is the best premise on which to advance. The projects presented here are our contribution to making that happen.

Where is Higher Education’s Digital Dividend?

Terry Anderson's blog - April 9, 2014 - 06:16
One doesn’t need to devour political or economic analysis, listen to experts or even chat with one’s friends to realize that the Internet has changed the way we produce and consume information and the myriad ways in which we communicate. Blogs, wikis and Facebook walls have granted to each of us - a multimedia printing press […]

Does teaching presence matter in a MOOC?

Terry Anderson's blog - March 13, 2014 - 09:23
A recent study of a Coursera MOOC is really interesting in that it implemented a random assignment of students to two conditions - one with no teacher interaction with the students, and the other with teacher and teaching assistant interaction in forums. The study is Tomkin, J. H., & Charlevoix, D. (2014). Do professors matter?: […]

What will universities monetize in the future?

elearnspace (George Siemens) - March 12, 2014 - 22:56

Universities do more than teach. Research is one of the most important activities of higher education. Through the lens of students and society, however, the teaching and learning process, and what it costs, is the primary focus.

The university economic and operational structure, in relation to educating learners, can be seen as consisting of three legs of a stool: content/curriculum, teaching, and assessment. The past decade has not been kind to higher education’s economic model, as two legs of the stool - content and teaching - have started to move toward openness. Academic resources from top universities around the world can now be found freely online. If I were tasked with designing a course from scratch, I would start by searching repositories rather than creating any new content.

More recently, the teaching leg of the stool is seeing stress. Open online courses now make lectures of faculty from elite universities accessible to learners around the world (minus a few countries on the US “we don’t like” list).

This leaves assessment as the last leg of economic value. The badges and competency-based learning movement may challenge assessment, but at this point it remains reasonably secure.

What will universities do in the future to monetize their value? I offer the image below - instead of monetizing learning, content, and teaching, universities in the future will monetize assessment and the process of filling learner knowledge gaps. Content is largely free/open. Teaching is becoming more free/open. If something can be duplicated at only limited additional expense, it cannot serve as a value point for higher education. Creating personalized and adaptive learning processes that account for the personal knowledge graph of a learner is, and likely will continue to be, a source of economic value for universities.

University of Texas at Arlington

elearnspace (George Siemens) - March 12, 2014 - 22:40

This is likely not news to most readers, as it has been posted in various blogs and forums and announced at the MOOC Research conference in December, but I have applied for, and received approval of, a leave of absence from Athabasca University to establish a digital learning research lab at the University of Texas at Arlington. I will be based in Arlington, but will continue to work with my AU doctoral students.

My research to date has focused on the social and technological learning, sensemaking and wayfinding activities of individuals in digital information environments and how these actions inform the design of learning, curriculum and ultimately institutions. At the core of this research is how people interact with information. When information is limited, it can be assessed and understood individually or through social interactions with peers. When information is abundant, technology extends human cognition and capacity for sensemaking. How people use technology and social methods to make the world sensible, and the types of knowledge institutions required to assist that process, is what we hope to address through the Learning Innovation & Networked Knowledge (LINK) Research Lab.

A second key goal at UTA will be the development of a digital learning research network. Just as local-only classrooms no longer make sense, neither do research institutions that work only within a small local domain. I’m particularly interested in understanding how we can connect excellent research with practical implementation. More is known about quality learning in the literature than is reflected in classrooms and online courses. The digital learning research network is expected to bring those two domains together.

Five myths about Moocs | Opinion | Times Higher Education

Jon Dron's bookmarks - February 12, 2014 - 14:10

Diana Laurillard chipping in with a perceptive set of observations, most interestingly describing education as a personal client industry, in which tutor/student ratios are remarkably consistent at around 1:25, so it is no great surprise that it doesn't scale up. It seems to me that she is quite rightly attacking a particular breed of xMOOC - the EdX, Coursera, etc. variety - but it doesn't have to be this way, and she carefully avoids discussing *why* that ratio is really needed: her own writings and her variant on conversation theory suggest there might be alternative ways of looking at this.

Her critique that xMOOCs appear to succeed only for those that already know how to be self-guided learners is an old chestnut that hits home. She is right in saying that MOOCs (xMOOCs) are pretty poor educational vehicles if the only people who benefit are those that can already drive, and it supports her point about the need for actual teachers for most people *if* we continue to teach in a skeuomorphic manner, copying the form of traditional courses without thinking why we do what we do and how courses actually work.

For me this explains clearly once again that the way MOOCs are being implemented is wrong and that we have to get away from the 'course' part of the acronym, and start thinking about what learners really need, rather than what universities want to give them. 

Address of the bookmark: http://www.timeshighereducation.co.uk/comment/opinion/five-myths-about-moocs/2010480.article

Cargo cult courses

Jon Dron's blog - February 6, 2014 - 15:03

Richard Feynman dismissed quite a lot of allegedly scientific research, notably targeting the fields of education and psychology, as 'cargo cult science'. Such research looks like science, just as the South Seas cargo cults' airports looked like airports, with 'runways', fires along the side of them to mimic landing lights, wooden control towers, and headphones made of coconut shells with bars of bamboo made to look like antennas on the heads of the 'air traffic controllers'. But this didn't make aeroplanes full of cargo land, any more than what looks like a scientific method makes something into science.

Things haven't changed much since Feynman came up with the metaphor, at least in education. My heart sinks when I receive yet another 'experimental' study that demonstrates empirically that some general kind of intervention works better than some other general kind of intervention because, with a vanishingly small number of exceptions, mostly for very trivial issues, it shows no such thing. 

But that's not what this post is about.

I'm going to be writing more soon on how LMSs are cargo cult technologies, and have a book in progress that looks at why scientific methods are risky for educational research, but my target for today is MOOCs. Specifically, I am talking about the majority of MOOCs here - the xMOOCs that look like traditional university courses and tend to be the products of EdX or Coursera and their ilk - rather than the very different and far more interesting cMOOCs that started the trend and employ very different, un-courselike methods, or those that are designed differently and do more to recognize the unique issues of the modality beyond a bit of lecture-chunking. The brush that I am about to use to tar MOOCs is aimed specifically at the mainstream varieties that typically take a conventional university taught course and make it available to the public with little more than minor cosmetic changes.

Cargo cult MOOCs

Accreditation is a huge elephant in the MOOC room. Sadly, in a traditional institution, courses exist to a large extent for accreditation far more than for what they teach. I wish it were not so, but a great many courses are driven by grades and the need for a certificate, and are often designed accordingly, with learning outcomes made to fit assessment problems in a systematic way, often separating assessment from learning and making assessment the purpose rather than a functional component of the learning process. Accreditation in MOOCs is a big problem, and most avoid offering more than a certificate of completion although, contrarily, many use the trappings of accreditation without the power to reliably provide it. They mimic the methods and processes for no good reason. Cargo cult courses.

Courses slot into programs and are part of an educational infrastructure. Courses are designed to fit into the broader systems they belong to, gaining credence from peer review and from richly interlocked processes across the wider educational landscape. Courses and their associated accreditation are as much a currency in the job market as they are a means to learn something. Courses play a ritual role in how people define themselves - courses are rites of passage, ways to share with others the fact that we have trodden on the coals. I proudly proclaim my role in the world by prefixing my name with 'Dr'. It's a sign that I have been through a formal process, leaped over a set of hurdles, slain the beast. I wear my doctoral gown and correct people who try to call me 'Mr' precisely because courses play a much bigger role than containers for learning. 'Completion' is a meaningful and important term in a traditional course because it implies something more than just an acquisition of skills or knowledge: it demonstrates that a very particular kind of obstacle has been overcome in a socially significant way. To a large extent, MOOCs bypass all of this.

Courses fit into schedules. That's why we do bizarre things like split them into uniformly sized units and make them all last for multiples of the same period. Outside an institutional context where resources need to be managed in a certain way, that makes no sense whatsoever. I'm pleased to note that this aspect of MOOCs is slowly changing as people realize they do not have to fit semesters and timetables any more, but it is still way too common a pattern.

Courses fit into rules and regulations that determine how they are developed, what they are expected to achieve, the roles teachers and students should play within them, hierarchies of control, attendance expectations, behavioural norms, commitment expectations, structural patterns, and many other things. MOOCs largely lack that surrounding set of rules, norms and assumptions, yet are typically designed as though it were still in place.

MOOCs replicate course patterns, often skeuomorphically, without reflecting on why those patterns exist. Take lectures, for instance. These made sense at some point as a relatively cost-effective compromise that distributed a rare and limited resource (the professor) to a relatively large number of people. In meatspace, that is not a terrible idea, at least in principle. You can certainly see how it made sense and why it had to happen. In cyberspace there is no reason apart from habit to do that and yet, with a small amount of lip service paid to some basic learning theory about chunking, lectures play a dominant role in most MOOCs.

MOOCs do industrialization in spades, with few concessions to customization, adaptivity or personalization. They present methods and content that are aimed at a largely non-existent average student and that simply cannot cater well for all. We build courses like that in institutions because it is a fairly cost-effective compromise: partly we know we can cater for differences in other ways such as personal tutoring or customizing lectures in classes, and partly we know students will be driven to succeed regardless of what we do because of their commitment and the investment they have made in the educational process. They are not going to deliberately fail one course when their degrees depend on it, no matter how terribly it is taught or how irrelevant it is to them. MOOCs do not have such fallbacks.

In brief, courses of the kind we build in schools and universities make little sense if the objective is to support learning rather than to run an institution - one that plays a very particular set of societal roles and that contains many codependent structures of which courses form a part. MOOCs appear to want to support learning, but continue to be designed as though they were also doing the other stuff.

It doesn't have to be this way. The approaches used in connectivist MOOCs, for example, make a lot of sense if the purpose is to gain knowledge. There have also been some very interesting experiments with tools like Venture Lab or courses using open pedagogies that are not so objectives-oriented or content driven. MOOCs are evolving and the target of my criticism is becoming smaller, but it is still way too common.

The takeaway

MOOCs (xMOOCs) look like courses, smell like courses, use the same methods as courses, and have teachers and students much like courses, but they are no more courses than the airports of the cargo cults on South Seas islands were airports. Just as airports need aircraft, schedules, and a host of organizations and processes to function, so courses are deeply and fundamentally embedded in their contexts and make little sense at all without them.

Is Athabasca University moving away from tutoring? Not exactly.

Jon Dron's blog - February 1, 2014 - 17:52

The ever wonderful and thought-provoking Tony Bates asks in an inflammatory post whether Athabasca University is moving away from a tutoring model. The short answer is 'not exactly'. The overlong and fuller answer follows...

We are not getting rid of tutoring and it's not exactly a call centre...or maybe it is...or maybe we are...

As I understand it, the Faculty of Science and Technology (FST) has no intention of changing the way that students interact with tutors. Students will continue to engage with a single tutor much as they do now, with the bonus that they will get faster, more accurate, and less frustrating responses when they seek information about the administrative or operational issues that make up a significant portion of support calls, thanks to directing some communications through a knowledge management and gatekeeper tool built on Grey Matter. Secondly, we describe this as a 'support centre', not a 'call centre', because that is a clearer description of what it does: it is about helping students to find support with their learning, via a knowledge base and from the people who know the answers, who are not always their tutors. Telephones are very rarely used nowadays, especially in the School of Computing and Information Systems that makes up a sizeable chunk of FST, so 'call centre' is a misnomer. However, under the new system, students and tutors should interact much as they have always done: directly, personally, and socially. In fact, it ought to be a more connected, supportive system, if it is done right.

At least, that's my understanding. But what is fascinating is that a lot of people who you might imagine would be well informed appear to hold many different views that conflict with mine, and with one another, in all sorts of different ways. The people who have responded to Tony's article give some quite diverse and contradictory responses, and our own AUFA mailing list is full of different interpretations. Our own VPA has shared a version of it with university staff that is not identical to the way the people charged with actually implementing it at FST have explained it. All of this might be quite surprising unless you understand how AU is constructed, of which more below. But first...

A worm inside an otherwise perfectly decent apple

There is one very sad aspect to the support centre idea that needs to be pulled before it is too late. At the moment, tutors get paid per block of students: a fixed sum that largely disregards the actual workload involved for any particular student or course, but that averages out fairly sensibly. In the new system, the plan is (allegedly - it is hard to get the facts straight) that tutors will be paid for contact time recorded in the system. This is based on the fiction that you can accurately quantify learner support by the hour or by the message sent. This fiction has two sides to it, both bad:

  1. There is a fair chance that tutors will get short-changed in this process and, at the very least, will not be able to rely on a regular income any more. The chances are that, financially speaking, some will gain and some will lose, but I'm guessing the average is not going to be in tutors' favour. 
  2. Whether tutor wages rise or fall this will be highly demotivating. It takes away control from tutors to decide how best to manage relationships with students and, worse, it makes reward hinge on an extrinsic and irrelevant measure of work that has nothing whatsoever to do with the value of that work. Performance-based pay at the best of times is a thoroughly and comprehensively discredited idea in itself, but the approach being implemented here lacks even the poor justification that it employs a meaningful measure of performance.

The solution is simple and obvious - just decouple the good idea from the bad one and get rid of the noxious worm inside it or, at the very least, delay introducing it until proper consultation and research have been conducted on its impact. The two ideas are completely independent and separable.

But the fact that anyone even entertained the idea in the first place, let alone set it in motion, is disturbing. My colleagues are smart and mean well, so it is hard to understand how they could even think of such an idea. This leads me to the main point of this post and the most significant thrust of Tony's article.

Bigger issues

I came to AU in 2007 because it had (and still has) the finest, largest and most diverse collection of world-class online and distance learning researchers in the world, together with an infrastructure and process model that can and should revolutionize distance learning. Attending meetings at AU is often like going to a top international online/distance learning conference and hanging out with the best and most interesting keynote speakers. However, at the time I arrived, I did not realize how difficult it would be to diffuse the innovations of those thought-leaders into practice, nor how entrenched the Otto Peters-style industrial model of production, geared around postal service and telephone, might be. I was astonished that, despite having many of the top online learning researchers in the world and a set of learning designers and support staff that is second to none, at the time only around half the courses at the university were actually fully online, and few of those were models of good practice: most used pedestrian if mostly functional pedagogies and failed to take full advantage of even the simple technologies they ran on. It was here that I learned the term 'text-book wraparound', which is as bad as it sounds. It is not that courses at AU were, on average, worse than elsewhere - I have seen many that are much worse, including from the Ivy League and Oxbridge as well as other online institutions. They just weren't the shining examples that I had assumed I would find. This has changed considerably for the better over the past few years, and a much larger number of AU courses are now worthy of pride and admiration, with the majority that I have seen being significantly above average for the field, and most of them online.
But it is shocking that we still have courses that are wrapped around textbooks, that rely mainly on paper or, worse, are part of a surprisingly large number that made the transition from paper to network without the slightest change in pedagogy, as though the technology made no difference and the rest of the world had not become filled with serious competitors who can also teach at a distance quite well. I am just as shocked by the still-too-common use of unseen written exams sat at physical exam centres which, again, are transitioning to an electronic format with no changes to the underpinning methods and procedures, further entrenching and automating something we should not be doing in the first place. The same thing is happening with e-books, to which we are transitioning without more than a passing thought to how they change things, what more we can do as a result, and where the pitfalls lie (apart from in dully operational areas and costs), despite a lengthy and complex analysis and consultation process to get there. Equally strange, there is remarkably little use of the advanced pedagogies and technologies that our stonkingly good researchers invent, outside their own teaching (if even that). Given the superfluity of brilliant researchers in the field at AU, this weird state of affairs needs to be examined a little to find out why it is happening.

Structural challenges

We are victims of the same rules of transactional distance that govern our teaching: the less dialogue we have, the more structure we impose to fill the void, and vice versa (incidentally, Shearer, one of those who first noticed this relationship in the early 90s, has recently done some more interesting work on clarifying how it operates). Because our workforce, as well as our students, is distributed, there is far less of the oil of informal communication that keeps less distributed institutions running despite their manifold structural failings. With relatively little non-focused informal dialogue, we consequently harden systems, building structure and thereby making them brittle and more difficult to change.

The triumph of structure is all the more inevitable because our teaching was historically largely structure-oriented rather than dialogic, so we developed habits and patterns of working together at a distance that largely ignored many-to-many dialogue. From the start we built efficient systems for print delivery that are still part of our structural DNA. Quite apart from the harmful cultural effects, such processes are highly inefficient for most online teaching methods, with processing and transactional costs that are completely unjustifiable and artificial constraints on flexibility and creativity that are positively heinous. It has often taken three years or more from a course being written to being in production. If you are a teacher it's like making someone with a red flag walk in front of your car, doubly infuriating because there are plenty of others that zoom by at whatever speed they like. Unfortunately the systems are so tightly integrated, embedded not just in processes but the skill-sets and assumptions of employees and the buildings and tools that we use, that they cannot be easily dismantled. Take away one piece and others fall, so change is painful.

It is not helped by the generally excellent fact that we have steadfastly stuck to a self-paced model of course design in our undergraduate teaching: students have a six-month contract and can choose when they learn at whatever pace suits them. This is brilliant from the perspective of freedom and choice, and is one of our great differentiators that marks us out from the crowd and that makes AU a really cool institution. But it makes it really hard to change things on the fly if you also build highly structured courses, and it makes it awfully tricky to make use of social pedagogies, because everyone is working on something different at any given time, which reduces the chances that we will destructure the process. As a tutor or coordinator, you cannot adapt and change a highly structured course while it is running because it could disadvantage students who are currently taking it - even spelling errors or referencing errors are hard to amend and have often taken months to fix as a result.

And it is not helped by processes that make communication slow, procedure-laden, and clumsy, nor by the fact that programs have to be approved by a micro-managing government (our PhD in science has taken six years and counting to get approval).

The lack of habitual contact and dialogue, and the consequent rigidity of structure, also means that the humans in the system are more likely to be treated as replaceable parts with fixed roles and value to the system, rather than as creative, active, smart people, capable of making good decisions and of adapting and changing their environments, of value in themselves. Notably, rigid systems are built that intentionally limit the potential for people to make mistakes which, at the same time, means they also prevent those people from achieving greatness and devalue their agency as human beings.

And then, of course, there is the tutor problem. Tutors would often like to be more involved in bringing about change but, when offered the opportunity, quite reasonably expect to be paid to attend meetings, normally at an hourly rate plus expenses: that's what happens when you treat people as cogs in a structure instead of people engaged in a shared endeavour. If you are trying to develop a collegial atmosphere and passionately engaged tutors, especially alongside faculty who are not paid this way, this does not help foster communication and sharing. Knowledge transfer is particularly tricky.

Another side-effect of this is that a number of very different and quite isolated cultures develop: the confused and contradictory responses to Tony's post are symptomatic of this. We do not even begin to have a single world view, with each faculty, school and centre implementing a different interpretation of what those within it think distance teaching should involve. Such diversity is a great thing and we need to cherish it dearly, but it's important that we understand one another's differences and learn from one another. Most of us see a part and imagine that the whole is much the same. It is not. Our cultural separation is further exacerbated by a concentration of administration and technical staff in one location, a tiny village with a hiring pool the size of a peanut (with some great people despite that, but not enough of them), and academic staff mainly in Edmonton or Calgary but also distributed much further afield. The fact that some meet face to face and some don't polarizes cultures further.

The good news

Happily, this sorry state of affairs is changing, thanks in part to things like this site, The Landing, and better use of other social tools, and in part to the fact that, after the best part of a decade spent trying to loosen the awful structures that bind us, progress is at last being made.

In terms of communication, a better awareness of others is developing. We are getting better at working together online, and smarter at sustaining relationships at a distance. Tools like the Landing have helped cultures to mingle a little more: Carmen's photos of Athabasca here on the Landing, for instance, have really helped me to get a richer sense of what it is like to live there, as well as a greater sense of connectedness and social presence with the people who work there. Similarly, it lets people see how others think and work in their teaching and research. Until the Landing came along, our systems had always been structured with an instrumental, functional focus, without embedding social engagement or giving any thought at all to how cultures develop. Alternatively (like email or telephone), they were so soft, flexible and ephemeral that the effort needed to manage them was too great for them to be of much use.

But now things are improving, and cultures are mingling and codeveloping more. Small groups are able to work together faster, form and unform flexibly, and share their ideas with other people easily. We can play more. Better webmeeting tools help too, particularly now that adequate bandwidth enables relatively high-fidelity audio and video, though reliability remains a big problem and it has taken some years for the use of such tools to become a widespread norm. Better online communication is becoming far more embedded, not just in places like this site but also in Moodle courses, and even in greater sophistication of interactions via more basic tools like mailing lists and email. More people are getting more comfortable and proficient at communicating online, and the tools are improving. Courses are becoming more social and more flexible: we are developing more effective and often innovative social pedagogies that focus on sharing and cooperative work rather than collaboration and guided dialogue, especially in courses that are using the Landing.
We have a long way to go yet and we need to loosen up a lot more of our structures before we can make great progress, but we are making progress.

In terms of process, things are looking up and improving quite rapidly. Within the last year, following a decade of very slow planning, we have adopted much more flexible policies and a better distribution of creative and skilled support staff. It is still rarely a particularly agile process, except in graduate teaching where more flexibility has long been allowed, but there is at last a bit of decentralization going on that means the right people get to work with the right people without the interference of process. Appropriate methods and technologies can now be chosen for different courses, rather than having to be crammed into a one-size-fits-all structure. Diverse approaches that were formerly stamped on or quietly ignored are not just allowed but encouraged. Responsibilities are being devolved to people who care deeply and can act accountably. Even for our structured courses, after years of deliberation, we are oh so slowly moving to a more adaptable framework, technology and policies that make timely changes within sensible limits possible. There is good reason to expect that we might leave the worst of our rigidity behind us soon. Social systems like the Landing are helping to catalyze and support that change.

Blossoming, but not yet blooming

We can and do innovate prodigiously in the field of learning technologies and online pedagogies, despite all the difficulties, and even despite the swingeing financial cuts that have hurt us deeply. I have highlighted the role of The Landing, my pet project that Terry Anderson and I have sweated blood to make happen. We have built this in an attempt to bring about changes to our over-prescriptive processes and rigid culture, to become a more human learning organization, to amplify our strengths and diminish our weaknesses. It's not a solution by any means, but it's an important foundation for other solutions, one that is having a perceptible impact. For at least some of the several thousand people who use the site it has been a game changer, providing control, connection and relatedness, new ways of learning and teaching, filling gaps between hard systems. It offers direct control to people, all people at AU, without hierarchies and structural constraints getting in the way. It lets us know our students better, lets them know us. It provides a space for dialogue and sharing that can overcome some of the structural brittleness of the university's systems, linking diverse cultures and allowing people to be at least aware of what other people are doing and how they are thinking, in a safe open space. It lets people learn together, from one another, with one another. One of the main reasons I know that great things are being done at AU, for example, is because I see them happening on the Landing. But this is not the end of the story. The Landing relies on passionate advocates and a lot of voluntary effort from a few individuals who swim against the stream.
The Landing or something like it (I'm not proud - there are other ways to achieve similar goals and it's the goals I am after, not the tools for getting there) should be a major pillar of the university, but it is instead something that needs to be fought for every day and that has lived on a diminishing shoestring budget for a year now, after getting some one-off external funding that sustained it for its first three years. Moreover, it is another island, not yet the pervasive glue to hold things together that we are aiming for, used by fewer than 20% of the university population, albeit including most of the staff. This is a chicken-and-egg problem: it becomes particularly useful if everyone is here, but why would you join in unless at least most people are here? And, without strong commitment from the university to its future and in troubled financial times, why invest in building content and relationships in a system that might not survive the next round of cuts? It will survive, but perceptions count.

The Landing is just one of AU's countless innovations. A Second Life island, now migrating to an OpenSIM container, has provided opportunities for innovative online learning and engagement but, again, it remains a small and largely isolated project driven by passionate people who are running uphill. Adaptive systems and analytics tools emerging from our research have made their way into some of our Moodle instances, those bastions of rigid, hierarchically managed conventionality, though they have not spread widely through the university and remain an interesting backwater sustained only by a few passionate researchers, despite extremely innovative and active research in these and related areas that draws together an exceptionally talented pool of researchers. Our e-lab, home to a range of useful innovations, struggles to survive and is, as I write this, treading water or possibly drowning, despite having implemented an e-portfolio system that is used for teaching by several courses and programs, enabling more interesting pedagogies for many people. A crowd-driven social annotation system lives only because of one or two people, with little support or sustenance from the university. A lot of fantastic research does more good for other institutions than our own - brilliant work on semantic technologies for learning from Dragan Gasevic, or mobile learning from Mohammed Ally, for example, has had more impact abroad than in our own university, and the Landing itself gets 6-8 times as many external visitors as internal users. Rory McGreal is a leading light in the OER movement but our own open courses are pitifully few.
George Siemens is justly famous for his theoretical and practical contributions to online learning, among the most significant thought-leaders in the world, and has brought in large amounts of funding and kudos to the university, yet our rigid structural processes meant that only after significant struggles were we able to offer him an assistant professor post. TEKRI, our flagship research institute for online learning that might drive adoption and coordinate change, currently limps along with no doctoral students and little funding. This is just a tiny representative handful of the many learning technology innovations and innovators that continue to blossom but not quite bloom around the university, and doesn't even begin to consider the vast numbers of great pedagogical innovations that are occurring across the board, from nursing and midwifery to astrophysics, from distance education to computing. Though I seek them out, the limited communication that causes the problems also makes it hard for me to know about more than a few of these, but I see them wherever and whenever I look.

We are extremely good at innovating. We have a pretty decent internal grant system that makes it possible to try out small learning technology and pedagogical projects fairly easily, and we are not unsuccessful in getting larger grants to implement bigger changes. But our culture of structure, our industrial teaching model and our still too-weak, instrumentally oriented communication channels make it very hard to diffuse and sustain such innovations effectively, especially when our executive leadership is perceived as being weak, secretive and untrustworthy (whether that perception is fair or not). When our focus is on courses and each course can take several years to go from planning to production, pedagogical change is hard to bring about at a fundamental level. Good ideas get lost in isolated spaces or run out of steam as people move on. I hope that the improvements in pace we are starting to see might change this, especially if we can continue to grow the Landing and tools like it to improve the flow of knowledge and culture around the university. AU has hit some hard times in the last year or so and a lot of stupid decisions have been made. Sadly, much of what remains of our slashed resources is currently being channeled into more structure that will not move us pedagogically forward, that hardens systems and that may, from a cultural perspective, make things worse, but this will pass. I think the outlook, though still cloudy, has some blue sky peeking through, as long as we can continue to build in the same directions we have started building and fight so that we don't lose what we have already worked so hard to begin.

In conclusion

Tony likens AU to the Titanic, cruising directly towards an iceberg. I don't think that is quite right. It is more like a railroad company that built its tracks to towns that few people want to visit any more, and that is struggling to retool itself for an age where everyone else has personal automobiles, buses and airplanes, while suffering the burden of having to maintain the now-redundant steam-driven rolling stock and rusting lines that once made it successful. There is still a vital role for trains. They are very efficient, fast, comfortable, reliable, predictable, and (potentially) sociable and cost-effective vehicles, if they go to the right places. But we should also be looking to diversify into other forms of transport because trains can't take us everywhere we need to go when we need to go there and are certainly not always the most cost-effective alternative, as anyone who has looked at Rocky Mountaineer prices will agree. Sticking with the transport metaphor, living in Vancouver I have no car because I don't need one: an integrated transit system with trains, buses, airplanes, subways, float planes, ferries, helicopters, water taxis, bike lanes, cycle rickshaws, shuttle buses, streetcars, gondolas, ski lifts, taxis and car co-ops works remarkably well. The key lies in valorization of diversity combined with good communication, and a concerted will to make it work. Let's hope we can figure this out at AU. If we do not continue to turn this round, we will only wind up going to ghost towns on rusty tracks in train carriages that smell weird and have seen better days.

Reflection on world’s first Online 3 Minute Thesis Contest

Terry Anderson's blog - January 29, 2014 - 10:23
Being a huge fan of succinct communications and plain language, I was drawn to the ideas behind the 3 Minute Thesis contests, developed originally (and trademarked??) at the University of Queensland in Australia and now supported internationally from http://threeminutethesis.org/.  Since its development in 2008, the idea has spread globally, with 3 minute thesis contests happening […]

Reply to Tony Bates' criticism of AU Call Centre Model

Rory McGreal's blog - January 29, 2014 - 03:26

Tony Bates' article is here

TB>>  is an excerpt from Tony's article

RMc>>> is my response

My response:

 Tony,

You have provoked me to a response:

TB>> “Some might call this the Telus or Bell system of phone service customer support, and we know how well that works.”

RMc>>> Let’s get real. Few people remember how well the previous system worked at Bell. I do. And I know that it worked very poorly. Bell NEVER provided the personal support that you are talking about. You often waited ages for answers to simple questions. At least with their call centre approach you can now get standard answers. For more unusual questions, you are transferred to someone who knows about the issue. In the past, you would eventually get a person who also would then transfer you to someone who possibly had answers. So, their call centre model is better for simple queries and at least just as good for more complex ones.

TB>>“Under the previous tutorial system, a student has direct contact with someone teaching the course, and the tutor can initiate contact.”

RMc>>> This is not exactly right. The student could try to contact the tutor and leave a message for a call back in most cases under the previous system. Sometimes, they might catch the tutor on the first call. So, to call it “direct contact” is not quite accurate. In the call centre, they will reach a professional immediately. This professional, unlike the tutor, will have training in the most common questions, queries and concerns that students have regarding administration, schedules, programme requirements, etc. If it is a subject-area question, they will be immediately directed to an appropriate tutor. Or, they can use email for more direct contact. Previously, and even now in most cases, when a student asks a non-subject-area question, the tutor directs them to a professional.

TB>> “This is clearly an attempt to save money” – Bob Barnetson.

RMc>>> Of course it is. And why is it a problem when a cash-strapped university (or even one that is ok financially) attempts to save money? Why pay more for an outdated system when a more economical one is available? As an advisor on costing, I would have thought that this would be the one aspect of the model that you would support.

TB>> “the research on this issue is clear: the earlier students receive a response to a question, the better their performance, and the less likely they are to drop out.”

RMc>>> The call centre model is especially designed to provide students with the response they need as soon as possible. The previous tutor model allowed for a reasonable call back time of 48hrs. This is no longer acceptable. Students demand the response they need when they need it.

TB>>  “many of its undergraduate programs are still mainly print or text-based, a costly and antiquated model, supported by tutoring”

RMc>>> How does this jibe with your previous comments supporting this tutor-based “antiquated model” over a more technologically advanced and adaptive one?

TB>> "Rather than undertaking a general review of its undergraduate teaching with an attempt to develop more interactive, online programs, more emphasis on social learning, and more flexible course designs, it is tinkering with what it sees as the most expensive part of its program delivery."

RMc>>> I can only agree with your call for a review and support for updating our programs. However, the call centre is NOT “tinkering”. It is a substantive and reasonable adaptation to the online environment. It has been successfully tested in one centre for many years; it is now being tested in another centre. If anything, we have been too slow in adapting.  -- Your Titanic analogy is a bit of a stretch, but yes AU has to change course.

All the best.

Rory

The challenge of coherence

elearnspace (George Siemens) - January 27, 2014 - 17:18

I’ve been thinking about coherence formation in the learning process for many years (it was a key topic of my PhD). Traditionally, coherence of knowledge is formed by the educator through her selection of readings and lectures. The assumption underpinning learning design is something like “decide what’s important and then decide how to best teach it or foster learning activities around it”. When students take a formal course, success is measured by how well they internalize (whatever that means) and repeat back to us what we told them. Most grading and evaluation happens at the intersection where what the student can demonstrate is compared with what has been taught.

As students advance through their studies, they are asked to begin contributing new knowledge. There aren’t any clear lines around when students should start contributing instead of consuming, but masters-level learning is a common demarcation point. I’m drawn more to the work of Bereiter and Scardamalia and their emphasis on knowledge building at all levels of learning, including primary/secondary levels.

I’ve found it difficult to articulate coherence provided by educators in contrast with coherence formed by learners and the growing role of the internet in fragmenting previous models of coherence. Most courses that I teach now do not rely exclusively on one or two texts. Instead, a bricolage of readings, videos, and other multimedia resources forms course content. This fragmentation, however, generates a lack of coherence. Learning is the process of creating coherence – of seeing how pieces (ideas, concepts) are connected. I found the best description of this process in a recent article about Hola (while most articles about Hola emphasize “a way to get blocked content”, a simple definition is difficult: Hola does a variety of things, including peer-to-peer content sharing, sharing idle computing capacity, VPN, and a way to circumvent blocked content). I’ll take it a step beyond and say that this is the most prescient statement regarding the future of learning that I have read in years:

Our processing power has increased so much faster than our networking speed that it’s easier to piece together stuff from all these nodes than to get a coherent piece of media from far away on the network
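The assembly idea in that quote is easy to sketch in code. The toy below is purely illustrative (it is not Hola's actual protocol, and the peer names and chunk layout are invented for the example): a piece of media is pieced together chunk by chunk from whichever nearby nodes hold each piece, falling back to a distant origin server only for chunks no peer can supply.

```python
# Toy illustration of piecing content together from many nearby nodes.
# The peers, chunk ids and "origin" server are all invented for this sketch.
from typing import Dict, List


def assemble(chunk_ids: List[int],
             peers: Dict[str, Dict[int, bytes]],
             origin: Dict[int, bytes]) -> bytes:
    """Gather each chunk from the first peer that holds it, else the origin."""
    parts = []
    for cid in chunk_ids:
        store = next((s for s in peers.values() if cid in s), origin)
        parts.append(store[cid])
    return b"".join(parts)


# Three peers each hold fragments of the whole; the origin holds everything.
origin = {0: b"coh", 1: b"er", 2: b"en", 3: b"ce"}
peers = {"peer_a": {0: b"coh", 2: b"en"},
         "peer_b": {1: b"er"},
         "peer_c": {}}

print(assemble([0, 1, 2, 3], peers, origin))  # b'coherence' (chunk 3 from origin)
```

The property the quote points at is that adding peers grows capacity without needing one fast, coherent link to a far-away source, which is a suggestive way to think about learners assembling coherence from fragments.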

How I am learning to code

Jon Dron's blog - January 13, 2014 - 12:22

During my sabbatical I am learning to program. Given that I have been programming, on and off, for about 30 years (actually, more than 40 if you count a few weeks learning machine code with a single punchcard machine in primary school before picking it up again in my early 20s) and have been known to teach the subject, this may seem like an odd thing to want to do.

The thing is, though I do know a reasonable amount about the theory and best practices of programming, when it comes to actually writing code I have always been a hacker rather than a programmer. I actually think that hacking is a more useful skill than programming, but it's a complementary skill, not a replacement, and it comes at a cost. Hacked code is seldom maintainable, seldom easy to work on with others, and seldom super-efficient, unless the hacking process is focused on efficiency, in which case it usually comes at the cost of even less maintainability. I can mostly make a program do what I want it to do but I have seldom put in much time to really learning to do it right. On a more esoteric level, which is the deeper agenda of my sabbatical research, I am extremely interested in ways to do research using programming as the research method, so it is useful to have a deeper understanding of the subject and greater experience of the ways it can be enacted. I think there are ways of programming that can be an exploration of the unknown as much as any form of research, and I would like to build a robust methodology for doing that. In fact, that is the real research agenda behind what I am doing.

The conventional approach

My initial plan was to take a few MOOCs to teach me more about some core technologies - xMOOCs, to be precise: large-scale but highly conventional courses that have a strong structure, restrictive processes, clear learning outcomes and tight scheduling, whose limited concessions to online pedagogy are mostly concerned with splitting up a lecture into smaller chunks. I also considered some of the paid varieties that offer a more self-paced but otherwise similar approach.

This was not a bright idea, for reasons that I have written and spoken about many times, so I really should have known better. In fact, my entire teaching strategy for well over a decade has been a strong implicit criticism of the methods used in xMOOCs and courses of their ilk, which is to say the majority of courses, so it was weird of me to think that I might overcome their weaknesses and find them useful.

Enrolling on MOOCs was an interesting if unfulfilling experience. In the field of programming I am a slight outlier among typical MOOC takers because I know a great deal about some things at many levels, and very little about others. This means that the bulk of courses for beginners bore me senseless but, for courses aimed at intermediate and advanced programmers, I am way out of my depth in places while finding other parts dull and obvious. Too smart to start, too dumb to progress. Naturally, after a week or two on a smattering of courses, I gave up. This was a waste of my time and highly demotivating. But I am just a slightly more exaggerated variant on the typical taker of any course, MOOC or otherwise. Any content-oriented, objectives-driven, structured course of a conventional nature has to aim at a particular kind of learner with a particular set of knowledge and skills. The chances that any actual learner is exactly like that idealised model are vanishingly slim. A good teacher and a strong learning community can take the edge off that and allow a bit of flexibility and adaptation, but it's a structural flaw in conventional courses that always raises its ugly head unless we adopt pedagogies that are very challenging to scale well to MOOC sizes, or build highly intelligent adaptivity into the course - much more intelligent than any I have seen in a decade or two of close involvement with the adaptive hypermedia community.

I have tried books, and got some value out of them. I find most classic textbook style approaches mind-numbing for much the same reasons as I don't like xMOOCs, but the ability to skip and cherry pick means they can be moderately useful, though they are seldom worth the cost, both in money and time. Random access combined with no pressure to submit assignments is a fairly acceptable combination, but it is not particularly efficient and far from ideal.

Another way

So what is my solution? I will be posting a bit more on this over the coming months, but this is what I have been doing so far.

Needs

First of all the needs: my most pressing need is to learn to program Elgg, the framework on which the Landing is based and that underpins a couple of other learning-oriented sites I am building, so that is my central focus. I already have some pretty clear ideas about some of the things I want to do with that, though, in accordance with my research focus, my assumption is that this will change as I progress and extend and/or discover the adjacent possible. The more I learn, the more the adjacent possible will expand. My intent is to soften my horizons and extend them with every new thing I discover. That is why I am learning to program and that is why I think programming could be a great research methodology, not just in things like artificial life, large-scale simulations and analytics, but as a means of exploration in itself.

Elgg is written in PHP, which I know fairly well, along with HTML, CSS, jQuery, MySQL and a few other things with which I have a passing familiarity. However, my skills are pretty rusty in places and, more importantly, knowing these things is just a prerequisite before hitting the big learning hurdle: Elgg itself is a very complex set of APIs, methods, patterns and procedures, with thousands of functions, classes and other parts of its extensive library. Simply hacking the code doesn't work if you want to produce maintainable code that plays nicely with the countless other Elgg plugins and its core. You have to know the library inside out and to have a firm grasp of the core API. Elgg is at least as complex and subtle as any programming language, but it operates at a higher level. It's the difference between building with raw materials and a decent set of tools, and making something out of Lego Technic. You start at a higher level and can achieve more, faster, albeit with some notable constraints based on the pieces available. Of course, you can make your own pieces once you hit that barrier.

Conventional solutions

Learning Elgg is a little challenging. There is only one fairly basic book on Elgg, plus a basic wiki-based (now GitHub-hosted) introductory website and a bunch of technical documentation, and there are no courses on the topic that I am aware of: the courses I enrolled on were about pieces like jQuery, PHP, etc. There are also a few bloggers who occasionally share hints and tips. However, there are a few thousand examples of plugins to learn from, so the classic approach to learning Elgg is to pull apart some plugins, modify them, and use the ideas and techniques to build your own. Unfortunately, many of the thousands of available plugins are badly written hacks, even some of those found in the core plugins, so it is quite hard to judge the quality of each solution, especially as most work as intended, so it is not obvious that they are doing things badly. Learning this way is not a mechanical process of learning a skill so much as a creative and critical interpretation of a range of possible solutions. It takes skill to gain skill, which makes it fairly painful, and not too easy to make the transition from 'hello world' to something useful.

Google

As for most Internet users, Google is my first and best friend on my learning journey. Finding stuff on the Elgg community site (itself written in Elgg) is not great and sticking to the site itself means failing to find solutions elsewhere, so Google's intelligent crowd-fed algorithms and high speed are an ideal combination.  In fact, the native Elgg search that used to be provided for the Elgg community site has been replaced with Google, albeit limited to the site itself, because it works much better. Google is the world's most successful and powerful learning technology and I love it. 

Help, advice and inspiration

Typically, Google leads me to two main classes of site, beyond the Elgg community and Elgg documentation sites. The first class of site is crowd-based. It is best represented by Stack Overflow, the world's greatest resource for programmers seeking solutions to problems or starting points for further exploration. It is great for very specific issues as well as broader tactical or strategic concerns. What I particularly like about Stack Overflow is that the best solutions bubble to the top naturally through its various rating mechanisms, but it is always possible to explore alternatives and to find (and even engage in) dialogue about why one solution is better than another. It is great for finding solutions to specific problems while at the same time allowing depth and multiple perspectives to help me learn more. It is a classic example of the wisdom of a crowd, utilizing a huge set of interested parties combined with efficient collective algorithms to draw attention to the best solutions.
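For readers curious how such "collective algorithms" can surface good answers, here is one well-known approach, offered purely as an illustration rather than as Stack Overflow's actual ranking: sort answers by the lower bound of the Wilson score confidence interval on their upvote fraction, which favours many consistent upvotes over a few lucky ones.

```python
# Rank crowd-voted answers by the lower bound of the Wilson score interval.
# Purely illustrative: not a claim about Stack Overflow's real algorithm.
import math


def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the ~95% Wilson confidence interval on the upvote fraction."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    centre = p + z * z / (2 * n)
    spread = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - spread) / (1 + z * z / n)


# Answer A: 600 up, 400 down; answer B: 5 up, 0 down. B's raw rate is higher,
# but A's much larger sample earns it the more trustworthy ranking.
answers = {"A": (600, 400), "B": (5, 0)}
ranked = sorted(answers, key=lambda a: wilson_lower_bound(*answers[a]), reverse=True)
print(ranked)  # ['A', 'B']
```

The design point is the one made above: a large crowd plus a simple statistical rule is enough to make the best solutions bubble up without any editor in the loop.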

Another crowd-based approach that offers similar value for slightly different reasons is the official PHP manual, which provides not only direct help but also copious conversations and examples provided by the community. Unless you know precisely what you are looking for it is nothing like as effective as Stack Overflow at finding good solutions to specific conundrums quickly because it lacks the collective algorithms that drive good solutions to the top. This is not to say it is less useful though - it is just different. It takes more effort to sift through the conversations but, because the manual keeps the topics very tightly delimited, it is mostly relevant and you get to learn a lot of other things along the way and to explore the command or function a conversation refers to in a fair amount of depth. Not all discussions and examples are immediately useful, but it helps to develop a mindset, to think as programmers think.  

The second class of site returned by Google is the short intentionally designed tutorial, most well represented by W3Schools. W3Schools is a venerable tutorial site for Web programming that I have been recommending to students for about a decade now. It is both a reference and a hands-on active experimentation site, that keeps to a very simple and effective pattern for teaching that just works. Knowing a little about what it is teaching is helpful because I can recognize its limitations: it does not always encourage best practices, is a poor guide on its own, and it has a fair number of legacy tutorials that pop up from time to time. But it is about as good as it gets if you know what you are looking for but you are not sure of the details.

I'm surprised that I only rarely bump into video tutorials like those most obviously represented by the Khan Academy, but this may be the result of Google learning my preferences, which tend not to be for video. Though video can sometimes be helpful when learning complex interfaces and controls, and can be good for getting inspired, unless it is incredibly short (a minute or two at most) video is a very slow way to learn most of what I need to learn: it provides nothing like enough random access to precisely what I need. At a small scale, video suffers the same kind of problems as courses do at a larger scale: too much irrelevant stuff, too little control. With some important exceptions, it typically provides the illusion of being engaged in learning without actually getting the job done, with a lot of unwanted padding. I rely heavily on video when I am learning a musical instrument, but it is not great for learning to program when you already know most of what it is telling you.

Getting it wrong

Trial and error, combined with a reflective process to ensure the lessons are learned, is an important part of the process and among the most basic and crucial of human learning strategies. Finding out how and why some things work better than others, and why many things do not work at all, takes up a great deal of my time. While I can often code 90% of a program in a couple of hours, I can and do spend days on a couple of lines or one tiny piece of functionality. Sometimes the cause of the problem, once revealed, forces a complete rewrite of everything; equally often it comes down to a single missing bit of punctuation or a misspelt variable. This has always been true of all the code I have ever written. It would be frustrating were I not learning as I go, and that learning is the main purpose of what I am doing. It's a conversation with the machine, in which the machine is an important teacher. I've found it useful to make a note of what I have discovered along the way, and/or to keep snippets of code in my IDE (integrated development environment) for future use. This is a difficult bit of discipline to maintain: it is awfully tempting to leap from one discovery to the next until a solution is reached, without working on retaining the learning that occurs along the way.

Our tools shape us

Finding the right IDE was an important step on this learning journey. After trying old open source chestnuts like Eclipse and Netbeans, as well as simpler, faster tools like Taco and TextWrangler (both of which I like and use for many things) combined with other tools for versioning, documentation, searching and source control, I eventually discovered, through a blog post from an Elgg developer whom I know and greatly respect, Coda. If you are a Mac user who is trying to program, I cannot recommend it too highly. It is worth every penny of the few tens of dollars it costs, and I will even forgive that it is not open source because it works so well. Coda provides a very elegant environment in which I can do most things I need, including searching documentation for most programming languages, handling version control, autocompleting code, debugging, running an SSH terminal, managing files locally and remotely, and a whole lot more. As much as a tool, it is a teacher, making good practice easy to accomplish and simplifying complex things to provide the least resistance. From simple things like knowing which functions and variables you have already used and offering them as suggestions, or intelligently indenting code, to more complex things like making the overall structure visible and accessible, it nudges you to work smartly. It doesn't force you to work in a particular way, but it makes it easier to do things properly. While most good IDEs offer such features, I have never found one that does it as fluidly and transparently as Coda.

Coda does everything, but it is not always the best or most efficient tool for the job. When Coda reaches its limits, other desktop tools are very useful. Simple desktop search is a big one, given that I have a lot of different code scattered in several places, and the Mac is very fast at finding pretty much any text in a wide variety of documents. I also use dedicated versioning apps (Versions and GitHub clients) and a dedicated FTP/SFTP tool, Cyberduck. These dedicated tools have some more sophisticated features that come in handy sometimes. The local command line is another vital element of the mix because, sometimes, things are many times quicker and easier that way. I also have various widgets that do things like generate lorem ipsum for testing, as well as a wide range of browser plugins that help with debugging. My favourite of these is the Firefox Web Developer Toolbar, which I have used for many years and which saves me hours of debugging effort a week. Various developer plugins for Elgg also help a lot. Again, these tools are not just about efficiency but also about learning: the things they do help to shape the way I see and understand problems. They embed expertise.

The physical environment helps too. Multiple monitors are great for programming because you often find yourself working with multiple files and tools at the same time; I normally have three or more browsers open at once for code testing, for instance. This is about not letting the mechanics of the technology get in the way of learning: sustaining a train of thought is simpler if it is all laid out in front of you. I have thought about, but not yet found a way of implementing at a reasonable price, an e-paper wall: an entire wall that is completely configurable to display anything at fine resolution. That would be roughly the perfect size for working on learning projects in general, but especially for software development. A monitor or projector is too bright, though backlighting might be useful. Similarly, it is good to have a fast computer and a fast network. I don't know exactly how much time I spend waiting for the machines, but it is a lot. Not all of it is wasted - those enforced pauses are a bit like smoking breaks used to be, a chance to compose yourself and reflect.

Planned serendipity and opportunism

My learning is opportunistic and intentionally serendipitous, partly as a result of discovering things along the way in places like Stack Overflow conversations, but mainly because of subscribing to Elgg community groups for developers and beginners. These are full of ideas and techniques: conversations that draw my attention to things I had not noticed or didn't think were relevant, ways of using Elgg to do things I didn't think of or that did not seem important. 99% of it is an irrelevant distraction, but that 1% is great. At first, this was mostly just a bunch of messages from a set of largely undifferentiated people, but now I am aware of a few smart people, am coming to recognize the idiots and the geniuses, and am being drawn gently into a network. There are many occasional visitors, strangers and outsiders here, but there is also a very perceptible and fairly consistent set of persistent contributors. It is at once a large set of people who share nothing but an interest in common, a smaller network of people who know something about one another, and a more cohesive group who collaborate to develop the core. Each social form supports the rest, all coexisting happily. It is not a single collection of people but a lot of overlapping ones, with varying degrees of social connection and/or commitment. This means that the perils of groupthink, and the problems of filter bubbles in networks of people you know, are very much softened.

Another big part of opportunistic learning will form the basis of a later post, probably a paper or two, and is a lot of the reason I am spending my sabbatical doing this. Creating something (anything) opens up opportunities to create more that would not have been possible, or simply wouldn't have occurred as something to do, before creating it. It increases the adjacent possible. To give a very simple and banal illustration, I recently created a group tag menu plugin. The initial specification was to allow group owners to provide a list of tags that would be displayed as a menu in the sidebar of a group, clickable to show all things in the group tagged with the selected tag. The reasoning was to allow group owners to do things like provide a set of topics for a course, which would be automagically filled up by people in the group tagging things. I based the idea on an old Elgg plugin, abandoned by its creator five or six years ago, which did much the same thing for blog posts. I realised very quickly that it would be useful, and fairly simple to program, to provide a default menu of the most popular tags in the group for those who did not want to create a manual list, so I added that option, in the process learning a bit more about Elgg's approach to metadata and annotation. As a result, I then realised that it would take little extra effort to link the title (group tags) to a page showing a tag cloud of all the tags used by the group, so I added that too. Along the way I discovered that the basic search functionality of Elgg that I had intended to use was awful, so I wound up reimplementing that, again discovering new things about Elgg as I went - it seemed daunting but was actually incredibly easy.
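The popular-tags fallback described above is simple enough to sketch. The plugin itself is written in PHP against Elgg's APIs; this is a language-neutral sketch in Python, with invented names, showing only the logic: count tag use across a group's items, and fall back to the most popular tags when the owner has not curated a list.

```python
from collections import Counter

def popular_tags(items, limit=10):
    """Return the most-used tags across a group's items as (tag, count) pairs.

    `items` is any iterable of tag lists, e.g. one list per post or file in
    the group. Ties keep first-encountered order (a Counter guarantee).
    """
    counts = Counter(tag for tags in items for tag in tags)
    return counts.most_common(limit)

def menu_tags(owner_tags, items, limit=10):
    """An owner-curated list wins; otherwise fall back to popular tags."""
    if owner_tags:
        return list(owner_tags)
    return [tag for tag, _ in popular_tags(items, limit)]

# With no curated list, the menu reflects what members actually tag:
group_posts = [["elgg", "php"], ["elgg"], ["css"]]
print(menu_tags([], group_posts))  # ['elgg', 'php', 'css']
```

The same split - explicit configuration with a data-driven default - is what made the extra option cheap to add once the counting code existed.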
There is no particularly good reason that I could not have specified all of this at the start, but it didn't occur to me to do so until I had created the plugin and saw what it did and how it behaved, and also what the code did and how it did it. Had I written it differently, these options might not have been easy and I might not have added them. That would have been a pity, because they actually turn out to be at least as useful as the specified functionality, and probably more so. What was happening here was an exploration of the design space that is caused by the design itself. Gould talked of exaptations: features of a design that are simply a byproduct of the way something is built or evolves, but that later turn out to have some value in themselves. Another word commonly used for these is 'pre-adaptations'. The improvements I made are not quite exaptations in the strict sense used by Gould, but they are close cousins. A design that started with one intent reveals opportunities to do more that we can take advantage of. The same thing happens when we write stories or papers - what we write creates scaffolding that leads us in different directions than we originally planned, often much more interesting and better ones. My friend and mentor Richard Mitchell once told me 'I don't know what I think until I have written it' - writing is a way of scaffolding and creating thinking as much as, if not more than, a way of writing down what we think. Programming is like that, but on steroids, because we create things that behave, and that interact with people in very tangible ways. They become part of the conversation that creates knowledge and that builds further learning. It is probably a similar process for designers and architects although, on the whole, the feedback loops for them are longer and sparser. Programming is quick by comparison, and the results are visible instantaneously.
It's maybe more like playing with clay, except that the clay we create is alive and does stuff. This echoes religious myths: programmers are gods in their tiny, virtual, isolated worlds that they create and populate with code.

Applied ignorance

Interestingly, the fact that I did not know much about the options at the start when building my group tag menu might have been an advantage, as Clay Shirky memorably opined in his essay The Bayesian Advantage of Youth. If you do not know what is possible it is equally true that you do not know what is impossible. And, of course, few things are actually impossible. Sometimes, knowing too much can be an impediment to learning more. Experts know the solutions to known problems and do not necessarily spend time exploring the problem space further. To be an expert is to see patterns, so anything that does not fit the pattern is either shoehorned into it or ignored. This is one mighty good reason to be interested in many things and to be a perpetual beginner at some things. Apart from constantly branching into new areas, I don't have a great solution to this problem yet. I am struck by the number of times I read in Elgg forums that 'x is impossible'. Of course it isn't - we are talking about Turing machines here that can be made to do anything that any other Turing machine can do. It is impossible until you approach it with a different mindset. This is another great reason for the support of crowds in learning, because diverse minds and different eyes see similar problems differently. 

The flipside of ignorance is that it is hard to know what questions to ask - framing a problem is nine tenths of the solution in most cases. And ignorance is frustrating: knowing that there is a solution but not knowing where to begin is not the most motivating thing in the world. There has to be a balance.

Github

Github is pure magic, plain and simple. It allows me to fork code by other developers; to see in detail not just the code but how they have developed it; to view multiple branches and multiple solutions; and even, if I find a solution no one has yet discovered, to feed it back with a pull request. It lets me talk with developers, open issues, and view timelines and branches with ease and sophistication. This is social networking without knowing the people involved: people working together as a team without collaboration, a remarkable and elegant way to harness a crowd to help the crowd learn while at the same time building incredibly sophisticated software. It is revolutionary; it is brilliant. It is the next stage of evolution for things like Wikipedia, which is another of my favourite resources and often my starting point when one of the other tools presents me with a new technology or idea. I wish I had thought of it. Clay Shirky agrees.

Friends 

There are a few individuals with whom I work more closely - friends, students, colleagues - with whom I can discuss ideas in detail, from whom I can gain inspiration and motivation, who can provide multiple perspectives and shared experiences. Mostly these do not tell me how to do something - they instead let me explore my own thinking and enhance it with their own. Without such people I would probably not bother doing any of this and, even if I did, the result would be poorer and less interesting.

Connectivism, distributed cognition and conditions for learning

The process I am following is most easily described as Connectivist. It is also and more fundamentally about distributed cognition. This is not simply a method for me to learn but a way of thinking about learning and knowledge that is fundamentally at odds with the structured process of education that we are familiar with. In my teaching I have tried to take advantage of such tools and methods but it is quite hard to do this properly when learning outcomes are fixed and prescribed, and assessment is aligned with and drives the course. Some of the elements that seem crucial here are:

  • learning happens best when you need to learn.
  • creation is a self-perpetuating process: the more you create, the more you are able to create.
  • people make things worthwhile - not just supporting the process, not just providing information, but creating meaning in it all.
  • problem-solving is not just about solving a problem - each solution is a scaffolding that lets you solve more problems as well as a creative engine that opens up more solutions and opportunities to you.
  • tools don't just help you to get things done - they shape the ways you do them, passing on and embedding the knowledge of their creators, and open out new avenues that would not be there without them.
  • there is no such thing as wasted time when you are actively engaged in learning and you are mindful of what you are doing and how you are learning. Efficiency is another issue: but efficient for what? 
  • diversity matters and different forms of social engagement have different value in providing that diversity.
  • crowds can teach in diverse ways.

I will return to some of these themes in later posts.


The vulnerability of learning

elearnspace (George Siemens) - January 13, 2014 - 11:17

In a meeting with a group of doctoral students last week, one individual shared her challenging, even emotionally draining, experience in taking her first doctoral course. Much of her experience was not focused on the learning or content. Instead, she shared her self-doubts, her frustrations of integrating doctoral studies into her personal and professional life, the fatigue of learning, and feeling overwhelmed. Personal reflections such as these are important but are usually not considered when discussing learning and being a successful learner.

In education, seemingly in tandem with the advancement of technology and online learning, growing emphasis is placed on making the learning process more efficient. Through a barrage of instructional techniques and technologies, researchers and administrators strive to reduce the time it takes a learner to master a topic or complete a degree. While this is a laudable goal, it is an impoverished and malnourished view of education.

Learning involves many dimensions but, prompted by my conversation with the doctoral students, two are relevant here: the epistemological and the ontological. Epistemology is concerned with knowledge. In the educational process, that means the focus is on helping students to learn the knowledge (concepts, ideas, relationships) that a teacher or designer has designated as important. Most thinking on improving education centres on the epistemological aspect of learning. While epistemology addresses “knowing”, ontology is concerned with “being” or “becoming”. For many students, this is the most substantial barrier to learning. Our education system and teaching practices largely overlook ontological principles. Instead, the focus is on knowledge development at the expense of “learner becoming”.

Learning is vulnerability. When we learn, we make ourselves vulnerable. When we engage in learning, we communicate that we want to grow, to become better, to improve ourselves. When I first started blogging, I had a sense of fear with every post (“did that sound stupid?”), loss of sleep soul-searching when a critical comment was posted, and envy when peers posted something brilliant (“wow, why didn’t I think of that?”). When a student posts an opinion in a discussion forum or when someone offers a controversial opinion – these are vulnerability-inducing expressions. On a smaller scale, posting a tweet, sharing an image, or speaking into the void can be intimidating for a new user. (I’m less clear about how being vulnerable becomes craving attention for some people as they get immersed in media!). While the learning process can’t be short-circuited, and the ambiguity and messiness can’t be eliminated, it is helpful for educators to recognize the social, identity, and emotional factors that influence learners. Often, these factors matter more than content/knowledge elements in contributing to learner success.

Classrooms may one day learn us - but not yet

Jon Dron's blog - December 30, 2013 - 17:55

Thanks to Jim and several others who have recently brought my attention to IBM's rather grandiose claim that, in a few years, classrooms will learn us. The kinds of technology described in this article are not really very new. They have been just around the corner since the 60s and have been around in quantity since the early 90s when adaptive hypermedia (AH) and intelligent tutoring systems (ITS) rose to prominence, spawning a great many systems, and copious research reported on in hundreds of conferences, books and journal articles. A fair bit of my early work in the late 90s was on applying such things to an open corpus, which is the kind of thing that has blossomed (albeit indirectly) into the recently popular learning analytics movement. Learning analytics systems are essentially very similar to AH systems but mostly leave the adaptation stage of the process up to the learner and/or teacher and tend to focus more on presenting information about the learning process in a useful way than on acting on the results. I've maintained more than a passing interest in this area but I remain a little on the edge of the field because my ambitions for such tools have never been to direct the learning process. For me, this has always been about helping people to help one another to learn, not to tell them or advise them on how to learn, because people are, at least till now, the best teachers and an often-wasted resource. This seemed intuitively obvious to me from the start and, as a design pattern, it has served me well. Of late, I have begun to understand better why it works, hence this post.

The general principle behind any adaptive system for learning is that there are learners, some kind of content, and some means of adapting the content to the learners. This implies some kind of learner model and a means of mapping that to the content, although I believe (some disagree) that the learner model can be disembodied in constituent pieces and can even happily exist outside the systems we build, in the heads of learners. Learning analytics systems are generally all about the learner model and not much else, while adaptive systems also need a content model and a means of bringing the two together.  
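The three parts named above - a learner model, a content model, and a means of mapping one to the other - can be caricatured in a few lines. This is an illustrative sketch with invented names, not any real AH or ITS design: the learner model is reduced to a set of mastered concepts, the content model to prerequisites, and the adaptation step to a simple filter.

```python
from dataclasses import dataclass, field

@dataclass
class Learner:
    """Learner model, reduced to the concepts judged already mastered."""
    mastered: set = field(default_factory=set)

@dataclass
class Content:
    """Content model: one concept taught, and the concepts it assumes."""
    concept: str
    prerequisites: set

def next_items(learner, catalogue):
    """Adaptation step: content whose prerequisites are met but whose
    own concept is not yet mastered."""
    return [c for c in catalogue
            if c.prerequisites <= learner.mastered
            and c.concept not in learner.mastered]

catalogue = [Content("variables", set()), Content("loops", {"variables"})]
print([c.concept for c in next_items(Learner({"variables"}), catalogue)])
```

Everything hard about the real problem - where the mastery judgements come from, how prerequisites are authored, whether the model is even the right shape - is hidden inside these three toy components, which is rather the point of the paragraphs that follow.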

Beyond some dedicated closed-corpus systems, there are some big obstacles to building effective adaptive systems for learning, or systems that support the learning process by tracking what we are doing. It's not that these are bad ideas in principle - far from it. The problem is more to do with how they are automated and what they automate. Automation is a great idea when it works. If the tasks are very well defined and can be converted into algorithms that won't need to be changed too much over time, then it can save a lot of effort and let us do things we could not do before, with greater efficiency. If we automate the wrong things, use the wrong data, or get the automation a little wrong, we create at least as many problems as we solve. Learning management systems are a simple case in point: they automated abstracted versions of existing teaching practice, thus making it more likely that existing practices would be continued in an online setting, even though they had in many cases emerged for pragmatic rather than pedagogic reasons that made little sense in an online environment. In fact, the very process of abstraction made this more likely to happen. Worse, we make it very much harder to back out when we automate, because we tend to harden a system, making it less flexible and less resilient. We set in stone what used to be flexible and open. It's worse still if we centralize that, because then whole systems depend on what we have set in stone and you cannot implement big changes in any area without scrapping the whole thing. If the way we teach is wrong, then it is crazy to try to automate it. Again, learning management systems show this in spades, as do many of the more popular xMOOC systems. They automate at least some of the wrong things (e.g. courses, grading, etc). So we had better be mighty sure about what we are automating and why we are doing it. And this is where things begin to look a bit worrying for IBM's 'vision'.
At the heart of it is the assumption that classrooms, courses, grades and the other paraphernalia of educational systems are all good ideas that are worth preserving. The problem is that these evolved in an ecosystem that made them a sensible set of technologies at the time, but they have very little to do with best practice or research into learning. This is not about learning - it is about propping up a poorly adapted system.

If we ignore the surrounding systems and start with a clean slate, then this should be a set of problems about learning. The first problem for learning analytics is to identify what we should be analyzing; the second is to understand what the data mean and how to process them; the third is to decide what to do about that. Our knowledge of all three stages is intermediate at best. There are issues concerning what to capture, what we can discover about learners through the information we capture, and how we should use that knowledge to help them learn better. Central to all of this is what we actually know about education and what we have discovered works best - not just statistically or anecdotally, but for any and all individuals. Unfortunately, in education, the empirical knowledge we have to base this on is very weak indeed.

So far, the best we can come up with that is fairly generalizable (my favourite example being spaced learning) is typically only relevant to small and trivial learning tasks like memorization or simple skill acquisition. We're pretty good at figuring out how to teach simple things well, and ITS and AH systems have done a pretty fair job under such circumstances, where goals (seldom learning goals - more often proxies like marks on tests or retention rates) are very clear and/or learning outcomes very simple. As soon as we aim for more complex learning tasks, the vast majority of studies of education are either specific, qualitative and anecdotal, or broad and statistical, or (more often than should be the case) both. Neither is of much value when trying to create an algorithmic teacher, which is the explicit goal of AH and ITS, and is implied in the teaching/learning support systems provided by learning analytics.  
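To make the spaced-learning example concrete, here is a minimal Leitner-style scheduler, purely illustrative: each successful recall moves a card up a box and doubles the gap before the next review, while a lapse sends it back to the start. Real spaced-repetition systems, and the research behind the spacing effect, are considerably subtler; this is exactly the kind of small, well-defined memorization task that automates well.

```python
def review(box, correct):
    """Return a card's new Leitner box after one review:
    up one on success, back to box 0 on failure."""
    return box + 1 if correct else 0

def interval_days(box):
    """Days before the next review for a given box (1, 2, 4, 8, ...)."""
    return 2 ** box

def schedule(history):
    """Replay a sequence of True/False recall results for one card;
    return its final box and the next review interval in days."""
    box = 0
    for correct in history:
        box = review(box, correct)
    return box, interval_days(box)

print(schedule([True, True, True]))  # (3, 8): three successes, wait 8 days
```

Note how little the algorithm needs to know: a binary recall result per item. The complex learning tasks discussed next offer no such tidy signal.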

There are many patterns that we do know a lot about, though they don't help much here.  We know, for example, that one-to-one mastery teaching on average works really brilliantly - Bloom's 2-sigma challenge still stands, about 30 years after it was first made. One-to-one teaching is not a process that can be replicated algorithmically: it is simply a configuration of people that allows the participants to adapt, interact and exchange or co-develop knowledge with each other more effectively than configurations where there is less direct contact between people.  It lets learners express confusion or enthusiasm as directly as possible, and for the teacher to provide tailored responses, giving full and undistracted attention. It allows teachers to directly care both for the subject and for the student, and to express that caring effectively. It allows targeted teaching to occur, however that teaching might be enacted. It is great for motivation because it ticks all the boxes on what makes us self-motivated. But it is not a process and tells us nothing at all about how best to teach nor how best to learn in any way that can be automated, save that people can, on the whole, be pretty good at both, at least on average.

We also know that social constructivist models can, on average, be effective, probably for related reasons. They can also be a complete disaster. But fans of such approaches wilfully ignore the rather obvious fact that lots of people often learn very well indeed without them - the throwaway 'on average' covers a massive range of differences between real people, teachers and learners, and between the same people at different times in different contexts. This shouldn't come as a surprise, because a lot of teaching leads to some learning, and most teaching is neither one-to-one nor inspired by social constructivist thinking. Personally, I have learned phenomenal amounts, been inspired and discovered many things through pretty dreadful teaching technologies and processes, including books and lectures and even examined quizzes. Why does it work? Partly because how we are taught is not the same thing at all as how we learn: how you and I learn from the same book is probably completely different in myriad ways. Partly it is because it ain't what you do to teach but how you do it that makes the biggest difference. We do not yet have an effective algorithmic way of making, or even identifying, creative and meaningful decisions about what will help people to learn best - it is something that people, and only people, do well. Teachers can follow an identical course design with identical subject matter and turn it into a pile of junk or a work of art, depending on how they do it, how enthusiastic they are about it, how much eye contact they make, how they phrase it, how they pace it, their intonation, whether they turn to the wall, whether they remembered to shave, whether they stammer, etc, etc, and the same differentiators may work sometimes and not others, may work for some people and not for others.
Sometimes, even awful teaching can lead to great learning, if the learners are interested and learn despite rather than because of the teacher, taking things into their own hands because the teaching is so awful. Teaching and learning, beyond simple memory and training tasks, are arts and not sciences. True, some techniques appear to work more often than not (but not always), but there is always a lot of mysterious stuff that is not replicable from one context to the next, save in general patterns and paradigms that are mostly not easily reduced to algorithms. It is over-ambitious to think that we can automate in software something we do not understand well enough to turn into an algorithm. Sure, we learn tricks and techniques, just like any artist, and it is possible to learn to be a good teacher just as it is possible to learn to be a good sculptor, painter or designer. We can learn much of what doesn't work, and methods for dealing with tricky situations, and even a few rules of thumb to help us to do it better and processes for learning from our mistakes. But, when it comes down to basics, it is a creative process that can be done well, badly or with inspiration, whether we follow rules of thumb or not, and it takes very little training to become proficient. Some of the best teachers I've ever known have used the worst techniques. I quite like the emphasis that Alexandra Cristea and others have put on designing good authoring environments for adaptive systems because they then become creative tools rather than ends in themselves, but a good authoring tool has, to date, proved elusive and far too few people are working on this problem.

The proponents of learning analytics reckon they have an answer to this problem, by simply providing more information, better aggregated and more easily analyzed. It is still a creative and responsive teacher doing the teaching and/or a learner doing learning, so none of the craft or art is lost,  but now they have more information, more complete, more timely, better presented, to help them with the task so that they can do it better. The trouble is that, if the information is about the wrong things, it will be worse than useless. We have very little idea what works in education from a process point of view so we do not know what to collect or how to represent it, unless all we are doing is relying on proxies that are based on an underlying model that we know with absolute certainty is at least partly incorrect or, at best, is massively incomplete. Unless we can get a clearer idea of how education works, we are inevitably going to be making a system that we know to be flawed to be more efficient than it was. Unfortunately, it is not entirely clear where the flaws lie especially as what may be a flaw for one may not be for another, and a flaw in one context may be a positive benefit in another.  When performing analytics or building adaptive systems of any kind, we focus on proxies like grades, attention, time-on-task, and so on - things that we unthinkingly value in the broken system and that mean different things to different people in different contexts.  Peter Drucker made an important observation about this kind of thing:

'Nothing is less productive than to make more efficient what should not be done at all'.

A lot of systems of this nature improve the efficiency of bad ideas. Maybe they valorize behaviourist learning models and/or mediaeval or industrial forms of teaching. Maybe they increase the focus on grading. Maybe they rely on task-focused criteria that ignore deeper connective discoveries. Maybe they contain an implied knowledge model that is based on experts' views of a subject area, which does not normally equate to the best way to come by that knowledge. Maybe they assume that time on task matters or, just as bad, that less time spent learning means the system is working better (both and neither are true). Maybe they track progress through a system that, at its most basic level, is anti-educational. I have seen all these flaws and then some. The vast majority of tools are doing education-process analytics, not learning analytics. Even those systems that use a more open form of analytics which makes fewer assumptions about what should be measured, using data mining techniques to uncover hidden patterns, typically have risky systemic effects: they afford plentiful opportunities for filter bubbles, path dependencies, Matthew Effects and harmful feedback loops, for example. But there is a more fundamental difficulty for these systems.  Whenever you make a model it is, of necessity, a simplification, and the rules for simplification make a difference. Models are innately biased, but we need them, so the models have to be good. If we don't know what it is that works in the first place then we cannot have any idea whether the patterns we pick out and use to help people guide their learning journeys are a cause, an effect or a by-product of something else entirely. If we lack an explicit and accurate or useful model in the first place, we could just again be making something more efficient that should never be done at all. 
This is not to suggest that we should abandon the effort, because it might be a step to finding a better model, but it does suggest we should treat all findings gathered this way with extreme scepticism and care, as steps towards a model rather than an end in themselves.
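The feedback-loop worry is easy to make concrete with a toy simulation (everything here is invented for illustration, and is not a model of any real analytics product): a recommender that keeps promoting whatever is already most viewed turns an essentially arbitrary early advantage into a runaway Matthew Effect.

```python
import random

# Toy simulation (all data invented): a recommender that always
# promotes the currently most-viewed resource. Early, essentially
# random advantages get locked in by the feedback loop.
random.seed(1)
views = {item: 1 for item in "ABCDEFGH"}  # hypothetical learning resources

for _ in range(2000):
    recommended = max(views, key=views.get)   # "most popular" proxy
    if random.random() < 0.9:                 # learners usually comply
        views[recommended] += 1
    else:                                     # occasional free browsing
        views[random.choice(list(views))] += 1

ranked = sorted(views.values(), reverse=True)
top_share = ranked[0] / sum(ranked)
print(f"top resource's share of all views: {top_share:.0%}")
```

Lowering the compliance probability slows the lock-in, but any positive feedback on popularity tends toward the same concentration: the pattern the system "discovers" is largely one it created.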

In conclusion, from a computing perspective, we don't really know much about what to measure, we don't have great grounds for deciding how to process what we have measured, and we don't know much at all about how to respond to what we have processed. Real teachers and learners know this kind of thing and can make sense of the complexity because we don't just rely on algorithms to think. Well, OK, that's not necessarily entirely true, but the algorithms operate at a neural-network level as well as an abstract level and are probably combinatorially complex in ways we are not likely to understand for quite a while yet. It's thus a little early to be predicting a new generation of education. But it's a fascinating area to research, full of opportunities to improve things, albeit with one important proviso: we should not be entrusting a significant amount of our learning to such systems just yet, at least not on a massive scale. If we do use them, it should be piecemeal, and we should try diverse systems rather than centralizing or standardizing in the ways that the likes of Knewton are trying to do. It's a bit like putting a computer in charge of the decision whether or not to launch nuclear missiles. If the computer were amazingly smart, reliable and bug-free, in a way that no existing computer even approaches, it might make sense. If not, if we do not understand all the processes and ramifications of the decisions that have to be made along the way, including ways to avoid mistakes, accidents and errors, it might be better to wait. If we cannot wait, then using a lot of different systems and judging their different outputs carefully might be a decent compromise. Either way, adaptive teaching and learning systems are undoubtedly a great idea, but they are, have long been, and should remain on the fringes until we have a much clearer idea of what they are supposed to be doing.

McDonald's as a learning technology

Jon Dron's Landing blog - August 2, 2012 - 14:49

Whenever I visit a new country, region or city, I head to McDonald's as soon as I can to have a Big Mac and an orange juice. Actually, in Delhi that turns into a Big Raj (no beef on the menu) and in some places I substitute a wine or a beer for the orange juice, but the food is not really important. There are local differences, but it's pretty much as horrible wherever you go.

I inflict this on myself because The McDonald's Experience should, on the whole, be a pretty consistent thing the world over: that's how it is designed. Except that it isn't the same. The differences, compared with the differences between one whole country or city and another, are relatively slight, and that's precisely the point. The small differences make it much easier to spot them, to focus on them, and to understand their context and meaning. Differences in attitudes to cleaning, attitudes to serving, washroom etiquette, behaviour of customers, decor, menu, ambiance, care taken preparing or keeping the food and so on are much easier to absorb and reflect upon than they would be out on the street or in more culturally diverse cafes, because they are more firmly anchored in what I already know. Tatty decor in McDonald's restaurants in otherwise shiny cities speaks volumes about expectations and attitudes; open smiles or polite nods help to clarify social expectations and communication norms. Whether people clear their own tables; whether the dominant clientele are fat, or families, or writers; whether it's a proletarian crowd, or full of intelligentsia, or a place where youth hang out. Whether people smoke, whether they drink. How loud the music (if any) is playing. The layout of the seating. How people greet their friends, how customers are greeted, how staff interact. How parents treat their children. There's a wide range of more or less subtle clues that tell me more about the culture in 20 minutes than days spent engaging more directly with a new place would. Like the use of the Big Mac Index to compare economies, the research McDonald's puts into making sure it fits in also provides a useful barometer to compare cultures.

McDonald's thus serves as a tool to make it easier to learn. This is about distributed cognition. McDonald's channels my learning, organises an otherwise disorganised world for me. It provides me with learning that is within my zone of proximal development. It helps me to make connections and comparisons that would otherwise be far more complex. It provides an abstract, simplified model of a complex subject.

It's a learning technology. 

Of course, if it were the only technology I used then there would be huge risks of drawing biased conclusions based on an outlier, or of misconstruing something as a cultural feature when it is simply the result of a policy that is misguidedly handed down from a different culture. However, it's a good start, a bit of scaffolding that lets me begin to make sense of confusion, that makes it easier to approach the maelstrom outside more easily, with a framework to understand it.

There are many lessons to be drawn from this when we turn our attention to intentionally designed learning technologies like schools, classrooms, playgrounds,  university websites, learning management systems, or this site, the Landing. Viewed as a learning technology about foreign culture, McDonald's is extraordinarily fit for purpose. It naturally simplifies and abstracts salient features of a culture, letting me connect my own conceptions and beliefs with something new, allowing me to concentrate on the unfamiliar in the context of the familiar. Something similar happens when we move from one familiar learning setting to the next. When we create a course space in, say, Moodle or Blackboard, we are using the same building blocks (in Blackboard's case, quite literally) as others using the same system, but we are infusing it with our own differences, our own beliefs, our own expectations. Done right, these can channel learners to think and behave differently, providing cues, expectations, implied beliefs, implied norms, to ease them from one familiar way of thinking into another. It can encourage ways of thinking that are useful, metacognitive strategies that are embedded in the space. Unfortunately, like McDonald's, the cognitive embodiment of the designed space is seldom what learning designers think about. Their focus tends to be on content and activities or, for more enlightened designers, on bending the tools to fit a predetermined pedagogy. Like McDonald's, the end result can be rather different from the intended message. I don't think that McDonald's is trying to teach me the wealth of lessons that I gain from visiting their outlets and, likewise, I don't think most learning designers are trying to tell me:

  • that learning discussions should be done in private places between consenting adults;
  • that it is such a social norm to cheat that it's worth highlighting on the first page of the study guide;
  • that teachers are not important enough to warrant an image or even an email link on the front page;
  • that students are expected to have so little control that, instead of informative links to study guide sections, they are simply provided with a unit number to guide their progress;
  • that the prescribed learning outcomes are more important than how they will be learned, the growth, and the change in understanding that will occur along the way.

And yet, too many times, that's what the environment is saying: in fact, it is often a result of the implied pedagogies of the technology itself that many such messages are sent and reinforced. The segregation of discussion into a separate space from content is among the worst offenders in this respect as that blocks one of the few escape routes for careful designers. Unless multi-way communication is embedded deeply into everything, as it is here on the Landing, then there is not even the saving grace of being able to see emergent cultural behaviours to soften and refine the hegemonies of a teacher-dominated system.

Like McDonald's, all of this makes it far more likely that you'll get a bland salty burger than haute cuisine or healthy food.

Collective values

Jon Dron's Landing blog - May 25, 2012 - 16:45

Terry Anderson and I have written a fair bit about the different social forms that apply in (at least) an educational context. We reckon that they fall fairly neatly into physically overlapping but conceptually distinct categories of groups, nets and sets. In the past, we used the term 'collectives' instead of 'sets' but we have come to realise that collectives are something else entirely. This post starts with an overview of the distinctions and then drifts into vaguer territory in an attempt to uncover what it might be like for something to have meaning for a social entity. That's a rather bizarre concept at first glance: is there any sense at all in which a collection of people, not the people within that collection but the collection itself, can feel or think anything and, if not, how can anything be said to have meaning to it? And yet, oddly, we do ascribe human attributes to collections of people all the time in our everyday speech - 'Apple is a creative company', 'Canada got another gold medal', 'We came top of the league', 'the crowd is angry', 'this is the most enthusiastic class I've ever taught', 'Google beat Oracle in the court case', 'Athabasca University is committed to open learning' and so on. While this is often just a shorthand notation for something else or a poetic metaphor, the ubiquity of such language makes it worth examining further.

Groups, nets, sets and collectives 

Groups are the stuff of conventional teaching and learning: they are distinct and intentional entities that people join, knowing that they are members. You are in a group or out of it: you might be more or less engaged, but there is no real in-between state. Groups are generally characterised by things like purposes, collaboration, hierarchies, roles and exclusion. We know a lot about groups and their effects on learning, and the whole field of social constructivist models of teaching and learning is based on them.

Networks are more tenuous entities. To join a network you connect with one or more of its nodes. You might intentionally wish to make connections with particular people or kinds of people, but a network has no formal constitution, no innate roles and hierarchies, no innate exclusion: it's about individuals and their connections with one another. It is composed of nothing but connections and ties and has no formal boundaries. Networks are traversable and offer ways of linking and connecting to others and their knowledge. Learning in networks tends to be informal, connected and undirected by any individual. Networks are great for on-demand and serendipitous learning, combining social ties with unbounded knowledge.

Sets are about categories and topics. Set-based learning is about finding people and knowledge based on shared characteristics, typically a topic about which one wishes to learn. Wikipedia, YouTube, and Google Search epitomise the nature and value of sets in learning, with rising social-interest sites like Pinterest or Quora beginning to enter the fray. However, libraries and bookshops are also primarily set-oriented, so this is nothing new. Unlike networks, there may be no direct connection with others and certainly no expectation of sustained interaction (though it may occur and develop into other social forms). Unlike groups, there is no formal constitution of a collection of individuals. It is just a bunch of people joined (in a set-theory sense) by a shared interest.
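One loose way to picture the three forms (all names and interests here are invented, and this is a sketch of the distinction rather than the authors' formal definitions): a group is an explicit membership list, a net is a graph of connections, and a set is simply everyone who happens to share an attribute.

```python
# Purely illustrative: invented people, connections and interests.
group = {"Alice", "Bob", "Carol"}     # explicit, bounded membership: in or out

net = {                               # a graph: nothing but connections
    "Alice": {"Bob"},
    "Bob": {"Alice", "Dave"},
    "Dave": {"Bob"},
}

interests = {                         # attributes people happen to have
    "Alice": {"photography", "python"},
    "Dave": {"python"},
    "Erin": {"python", "cooking"},
}

# A set in the set-theory sense: everyone sharing a characteristic,
# with no membership list and no need for any connection at all.
python_learners = {p for p, topics in interests.items() if "python" in topics}
print(sorted(python_learners))
```

Note that Erin belongs to the set without belonging to the group or appearing in the net at all, which is exactly the point: set membership requires nothing but the shared attribute.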

When social forms act together as a single entity, they become collectives - not a social form, as such, but the result of social forms and the interactions of individuals within them. A collective may be the result of direct or indirect interactions of individual autonomous agents, such as may be found in natural social forms like ant or termite nests, herds, flocks or shoals or, in human systems, in the operations of money markets, mobs, stock exchanges, group-think and forest path formation. The 'invisible hand' is a collective in action, the result of myriad local interactions rather than a deliberate global plan. The environment plays a strong role in this: things like the availability of resources, sight-lines, weather patterns, topology and more help determine how such dynamics play out.

In computer-based systems, the combination that leads to a collective is not just the emergent result of individual agents but may be effected, and consequently notably shaped, by a machine: Amazon recommendations, Google Search, PayPal reputations and so on all combine the intelligent and independent actions of humans, using algorithms run in a machine, in order to affect further human action. The computer system extends what is possible through direct/indirect interaction alone, but it is still powered by individual intelligent beings making intelligent choices. It leads to a cyborg entity whose collective emergence is part-human, part-machine. This makes such systems very powerful and flexible as a means to create collective intelligence that is directed to some end, rather than being simply an emergent feature of a complex system that happens to have value. Not only does the environment itself play a role in shaping behaviour, as in 'natural' systems, but it actually creates some of the rules of interaction. In effect, it bends and sometimes creates the rules of social physics.
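A minimal sketch of such a machine-mediated collective, assuming invented purchase histories and an Amazon-style "people who chose this also chose" rule: the algorithm aggregates many independent choices into guidance that no single individual produced, or even intended.

```python
from collections import Counter
from itertools import combinations

# Invented purchase histories: each set is one person's independent choices.
histories = [
    {"kettle", "teapot", "mug"},
    {"teapot", "mug"},
    {"kettle", "toaster"},
    {"teapot", "mug", "tray"},
]

# The machine's contribution: count how often items co-occur across everyone.
co_counts = Counter()
for basket in histories:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def also_chose(item, k=2):
    """'People who chose this also chose...': a collective judgement
    that no single individual made."""
    pairs = [(other, n) for (i, other), n in co_counts.items() if i == item]
    return [other for other, n in sorted(pairs, key=lambda p: -p[1])[:k]]

print(also_chose("teapot"))
```

The output then feeds back into the next round of human choices, which is what makes the whole thing a part-human, part-machine loop rather than a mere summary.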

Values in collections of people

In some sense, groups, sets and nets are all identifiable entities in the world that achieve some kind of action or purpose distinct from the individual actions or purposes of the people of which they are comprised. Clay Shirky talked of them as first-class objects - things in themselves. But are these entities, these first-class objects, anything like people? Are there values we can ascribe to them? Do they have intentions and purposes analogous to those of individuals? Do they have attitudes that are separable or different from the attitudes of those that comprise them? This is a problem that my student Eric Von Stackelberg has been exploring in his master's thesis, and he has made some very interesting progress by using categories that psychology uses to describe individual values as a means of describing group values ('group' used here in the generic sense of a collection of people of some identifiable sort). I've been challenging him to clarify what it would mean for that to be true. Can a bunch of people (not the individuals, the bunch itself) be kind, or hedonistic, or happy, or avaricious, or whatever, in a manner that is meaningfully different from saying that the individuals themselves, or even a majority of them, have those attitudes? A corollary seems to be that we might ascribe to them something akin to emotion. Could a bunch of people (the bunch, not the people in the bunch) feel happiness, amusement, tiredness, anger, pain, hate or love? I find this a difficult concept to get my head around. And yet...

It seems intuitively obvious that there is something organism-like in a social cluster. It is certainly normal to speak of organizational values, national values, group beliefs, group norms and so on. Athabasca University, for example, treats itself as a unified entity in its mission statement, which talks of values, purposes and intentions as though it were (almost) a human being. Corporations are treated in the law of some countries almost exactly like people (albeit odd ones, given that all would be diagnosed, on analysis, as having serious psychopathic disorders). Nations are very similar - we can talk of America invading Afghanistan without batting an eyelid, even though it is very clearly not something that is literally or physically the case in the way it would be were, say, a bully to pick on someone in a playground. A far more worrisome phrase like 'the French have always despised the English' sounds like it plays on the same notion but suggests something rather different. When we say that a country has invaded another we are talking about a group activity, something organized and intentional, whereas when we suggest that a whole population of people thinks in a certain way we are talking about a set: people with the shared attribute of nationality (the same applies to race, or gender, or physical attribute, etc. - that way bigotry lies). There are interesting hybrids: it is normal to say 'we won' when a hockey team wins, even though 'we' had negligible input or nothing to do with it at all. We identify at a set level (we, the supporters of the team) in a manner that encompasses the team (a distinct group). It is harder to find examples of networks being treated in quite the same way, though the flow of memes so easily facilitated through social networking sites may be an example of values of a sort being a feature of networks. However, the innately diffuse nature of a network means it is significantly less likely to have values of its own.
It may be predicated on individuals' values (e.g. a network of religious believers) but a network itself does not seem to have any, at least at first glance. Networks are primarily about individuals and their connections to other individuals, each seeing their part of the network from their own unique perspective. This is not promising territory in which to find anything apart from emergent patterns of value.

There are natural parallels, though, that suggest an alternative view. It makes no sense to think of an ant colony as just a load of autonomous ants - the colony itself is undoubtedly a super-organism, and an ant from such a colony is, on its own, not a meaningful entity: it is constituted only in its relation to others, as part of a single network. We can use teleological language about the colony, and even ascribe to it wants, desires and intentions. It is also absolutely reasonable to think of an organism like a human being as a group/network/set of tightly coupled cells that are behaving, together, as a single unified entity not dissimilar to an ant colony in its complexity and interdependence. An individual cell may live on its own, but its meaning only becomes apparent in the presence of others. Even at a cellular level, our cells are a community of different symbiotic organisms. The vast majority of the cells in our bodies don't even have human DNA (that still staggers me - what are we?) but we still cannot think of ourselves as anything other than individuals that have values, intentions, meaning and - well - an autonomous life of their own. Are social forms so very different? It seems that at least one contained network that constitutes an entity may well have values because, well, we have values and we can be viewed as networks. In fact, we can also be thought of as sets and, in some senses, as groups.

While chatting about this kind of thing, a friend recently remarked that perhaps the most crucial value that we can ascribe to an individual is the value of survival: the will to survive. An arbitrary collection of entities does not have this. If we are thinking in terms of organisms, then I guess we might more properly think of it in evolutionary terms as a bunch of genes seeking to survive, but that's a layer of abstraction higher than needed here.

At the individual organism level it is the organism that tries to survive. This is one obvious reason that it is logical to think of an ant, termite or bee colony as a single organism: individuals will readily sacrifice themselves for the colony, exactly as the cells in our own bodies continuously sacrifice themselves in order to protect and sustain the entity that we recognise as a person. We can easily see this survival imperative in intentionally created groups, from small departments to sewing circles, from gangs and teams to companies and countries (groups). If a group exists, it will typically try to preserve itself, and individual members may often be seen as expendable in meeting that need: think of countries at war, political parties, hockey teams and so on. We can also see it in less rigidly defined entities such as cultures (sets/nets) and institutions (sets/groups). Even though individuals may have no formal connections with one another - at most, tenuous networks and no unifying constitution - the simple fact of observable similarities and shared features creates a self-reinforcing crowd effect that promotes survival. Often, intentional groups will be formed to support these, but the interesting thing is that they are not groups defending their own 'lives' but a kind of collective antibody formed to protect the broader, sometimes barely tangible, set. People who form organizations to defend society against some challenge to what they see as its central cultural, aesthetic, ethical or social values are doing just that. The set of which they feel a part is somehow greater than the group that they form to protect it.

It is harder to see this in human networks. Although there do appear to be emergent and dynamically stable features in many networks, that's just it: they are emergent features, like a soliton wave in a river, the rhythmic dripping of a tap, or a whorl of clouds in a storm. It makes far less sense to talk of a cloud formation as trying to survive than it does of an ant colony. We do, however, see moods and trends spread through networks - if you know people who are getting fatter then you are far more likely to become fat yourself, for instance, and depression is contagious. It is reasonable to surmise that values spread in much the same way: indeed, if we look at extremes such as the spread of Nazism or the growth of fundamentalist religions, there is a very strong sense in which networks act as conduits for value. But I think that's it: they are conduits, not containers, of value. Whatever has values may consist of networks that facilitate the spread or even the formation of those values, but it is the thing, not the network, that is what we care about here.

All of this leads me to suspect that the social forms that Terry and I identified as different in their pedagogical uses and affordances have some fundamental characteristics that go quite a way beyond that and relate to and intersect with one another in quite distinctively different ways. When we picture them as a Venn diagram it homogenizes these differences and makes it seem as though there are simply overlaps between vaguely similar entities, but there is more to it. Networks provide conduits for the spread of value between and within sets and groups. They are not the only conduits by any means: for example, if the human race were attacked by an alien civilization then I think it unlikely that a network would be needed to spread a range of values that would surface fairly ubiquitously (as a set characteristic), though it might help spread attitudes to how we should respond to such a threat. The same is true of many things in the more mundane realms of broadcast media, city planning and publication, not to mention the effects of natural features of the environment. Part of the reason for the distinctive culture and values in Canada, for example, is surely related to its dangerously cold climate that makes assistance to and from others a very strong necessity, plus a million other things like the opportunities afforded by its abundant natural resources and its proximity to other places. Prairie people are not quite the same as mountain people for reasons that go beyond historical happenstance and path dependencies. This is all about sets: shared characteristics and features. Sets can help to generate values: the fact that shorter-than-average people have to interact differently with the environment than taller-than-average people in many different ways leads to (at least) greater tendencies to share some values. 
The fact that people are collocated in a region, quite apart from network and group facets that emerge, means they are likely to share some attitudes and tendencies. It's simple evolutionary theory. It's why the finches in the Galapagos Islands have evolved differently: they have to interact with their different environments, and those environments have varied constraints and affordances. Other factors like path dependencies play an enormous role. Networks have a crucial part here too as co-evolution occurs not only in response to the environment but in response to the interconnections between agents in the system. In human systems, groups are both containers of networks and are themselves nodes in networks, so there are layers of scale that make this quite a complex thing.

The complexity becomes much more manageable if, instead of focusing on the social forms of aggregation, we think of values as being attached not to the aggregations themselves but to the collectives that emerge from them. Collectives are, by definition, behaviours that emerge from multiple interactions and are different from those interactions. A human can be viewed as a net, a set or even a group (there are hierarchies of organisation in which the brain might be seen as a controller) but it is the collective, the emergent entity that arises out of sets, nets and groups that is recognisably an individual, that has values. In the development of nationalist or religious values, it is the operation of algorithms that makes the set, net or group of which it is comprised into something distinct and potentially able to embody values, typically resulting from a mix of interactions combined with intentional categorisation by individuals - a collective.

I don't see any of this as suggesting even a glimmer of consciousness but it does seem at least possible that collectives can, at least sometimes, be described as having tropisms and to talk, perhaps loosely, in terms of intentionality. Whether this is enough to ascribe values to them is another matter, but it is not entirely absurd. We sometimes talk of plants as 'liking the sun' or 'liking the shade' in ways that probably have more to do with metaphor than beliefs about plant feelings, but there is a sense that it is true. It is even more obviously true in animals: even single-celled organisms are slightly more than just billiard balls bouncing round in reaction to their surroundings. They have purposes, aversions, likes and dislikes. Some exhibit fascinatingly complex behaviours - slime moulds, for example. It is not a great stretch from there to talking about human collectives in similar terms. Financial markets, for instance, are archetypal examples of human collectives that in principle need little or no machine mediation, yet move in complex ways that are not simply the sum of their parts. And, interestingly, we talk blithely of bull and bear markets as though they were in some way alive and, in some sense, imbued with feelings and even emotions. And maybe, in some sense, they are. 

Downes on Connectivism and Connective Knowledge

Connectivism blog (George Siemens) - May 21, 2012 - 09:08

Stephen Downes is a prolific writer. If you follow his work at OLDaily or on Half an Hour, you’re well aware of this. He covers an extremely broad territory: technology, learning, society, politics (sometimes a bit veiled, but generally not far below the surface), and philosophy.

Late last week, he posted an ebook on Connectivism and Connective Knowledge: Essays on meaning and learning networks (.pdf). It weighs in at an impressive 600+ pages. The work is basically a curation of his writings and presentations over the past decade. From the introduction:

Learning is the creation and removal of connections between the entities, or the adjustment of the strengths of those connections. A learning theory is, literally, a theory describing how these connections are created or adjusted. In this book I describe four major mechanisms: similarity, contiguity, feedback, and harmony. There may be other mechanisms, these and others may work together, and the precise mechanism for any given person may be irreducibly complex.
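Downes's "adjustment of the strengths of those connections" can be pictured with a simple bounded weight update. This is a loose sketch under my own assumptions (invented entities, an arbitrary update rule), not his formalism:

```python
# Loose sketch, not Downes's own formalism: learning as bounded
# strengthening of connections between entities.
weights = {("tea", "kettle"): 0.2, ("tea", "cup"): 0.5}  # invented strengths

def reinforce(a, b, rate=0.1):
    # Co-activation (roughly, contiguity plus feedback) nudges the
    # connection strength toward an upper bound of 1.0.
    w = weights.get((a, b), 0.0)
    weights[(a, b)] = w + rate * (1.0 - w)

for _ in range(5):                    # five co-occurrences of tea and kettle
    reinforce("tea", "kettle")

print(round(weights[("tea", "kettle")], 3))
```

Removing connections or adjusting them by other mechanisms (similarity, harmony) would need different rules; the point is only that "a theory describing how these connections are created or adjusted" can be stated this concretely.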

Stephen doesn’t make any apologies for the length of the ebook, stating that a formally structured book “would be sterile, however, and it [the ebook he has posted] feels more true to the actual enquiry to stay true to the original blog posts, essays and presentations that constitute this work”.

I personally would like to see Stephen produce a succinct text. Until he does so, students (and others) have a valuable resource in tracking and citing his work in networks, MOOCs, meaning, groups & networks, semantics, and more. Simply being able to point to and cite a particular page will be helpful for students…Thanks Stephen!

Change MOOC: Sensemaking and Analytics

Connectivism blog (George Siemens) - April 24, 2012 - 03:50

The Change MOOC has been running since September of 2011. We’ve had the pleasure, in the past 30+ weeks, of many outstanding discussions. The archives of activity/readings/weeks are available on the main MOOC site.

Each week, different facilitators share readings and resources that they deem to be most reflective of their work and their passion.

My week is on sensemaking and analytics.

At first glance, sensemaking and analytics seem antagonistic. Sensemaking involves social processes…whereas analytics are algorithmically driven. MOOCs are distributed systems of interaction and content. The traditional approach to courses – pre-packaged before learners arrive – is upended in a MOOC. The hyper-fragmentation of content and interaction presents problems for educators and learners: How do we make sense of what’s happening? How do we develop a coherent view of the many, many topics that comprise a MOOC? How do we re-create a centre that shares the bounding elements of a course, but is based on the networked, centre-less structure of the internet?

Sensemaking

Sensemaking is an activity that individuals engage in daily in response to uncertainty, complex topics, or in changing settings. Much like with the earlier discussion of the term “information”, sensemaking is a term in common use but with limited agreement on what it precisely means. Researchers argue that “[n]o single, unambiguous answer can be given…for sense-making theory has several meanings depending on the disciplinary or paradigmatic position of the speaker” (Kari 1998: 1).

In contrast to decision-making models in crisis situations, Weick, Sutcliffe, and Obstfeld (2005: 415) promote a narrative model of sensemaking. They argue that sensemaking is “not about truth and getting it right. Instead it is about continued redrafting of an emerging story so that it becomes comprehensible.” Weick’s sensemaking model emphasizes non-linearity and pattern recognition. The importance of pattern recognition is consequential in that it integrates the expertise of individuals with narratives of coherence. Sensemaking is an effort “to create order and make retrospective sense of what occurs” (Weick 1993: 635).

Nowhere is the emphasis on dialogue more precise than in the work of Brenda Dervin (2003). The Dervin Sense-Making Methodology, dating back to the early 1970s, “is proposed as an alternative to approaches based on traditional transmission models of communication” (Dervin 2003: 6). Dervin (2003: 238) uses the metaphors of “situation”, “outcomes”, and “gaps”: “moving across time and space, facing a gap, building a bridge across the gap, and then constructing and evaluating the uses of the bridge.”

Sensemaking and the process of learning are related, but each has distinct constructs (Schwandt 2005). Learning emphasizes time for consideration, reflection, and integration, whereas sensemaking is “swift and hasty as opposed to reflective” (Schwandt 2005: 189). In sensemaking, individuals understand a problem that “they face only after they have faced it and only after their actions have become inextricably wound into it” (Weick 1988: 306). In contrast, formal learning is often framed as increasing an individual’s general capacity to act, rather than as situation-specific sensemaking activity.

Given the breadth of the topic of sensemaking, and its intuitive feel and common use, it is unsurprising that numerous definitions exist. A sampling of definitions includes:

- “Sensemaking is finding a representation that organizes information to reduce the cost of an operation in an information task” (Russell et al. 1993: 272).
- “[S]ensemaking is a motivated, continuous effort to understand connections . . . in order to anticipate their trajectories and act effectively” (Klein et al. 2006: 71).
- “Sensemaking is about labeling and categorizing to stabilize the streaming of experience” (Weick et al. 2005: 411) and differs from decision making in its focus on “contextual rationality” (Weick 1993: 636).
- Sensemaking involves individuals attempting to “negotiate strangeness” (Weick 1993: 645). Failures in these settings occur when “[f]rameworks and meanings [destroy] rather than [construct] one another” (Weick 1993: 645).

Sensemaking, then, is essentially the creation of an architecture of concept relatedness, such as placing “items into frameworks” (Weick 1995: 6) and continually seeking “to understand connections” (Klein et al. 2006: 71). Sensemaking occurs in many facets of personal and organizational life, including crisis situations, routine information seeking, research, and learning. Individuals engage daily in nebulous problem solving without a clear path: a parent raising a child, an employee starting a new job, a doctor without a clear diagnosis for a patient, a master’s research student, and so on.

Analytics

My interest in analytics is driven by my views on learning as a connection-making process. Through analytics we are able to trace connections, understand how they are formed, the nature of exchanges between people, and the impact of those connections. The data-trails that are created in our daily interactions online and with others form the basis of analytics in learning. The field, however, is still developing and new approaches to analysis, algorithms, and tools are quickly emerging.
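The idea of tracing connections from data trails can be made concrete with a small sketch. The snippet below (all names and the log format are hypothetical, for illustration only) turns a log of learner-to-learner interactions, such as forum replies, into an undirected graph and ranks participants by degree centrality, one simple measure of who is most connected:

```python
# A minimal sketch of connection-tracing analytics: build an interaction
# graph from a (hypothetical) log of exchanges, then compute normalized
# degree centrality for each participant.
from collections import defaultdict

def build_graph(interactions):
    """Map each person to the set of people they exchanged messages with."""
    graph = defaultdict(set)
    for sender, receiver in interactions:
        graph[sender].add(receiver)
        graph[receiver].add(sender)
    return graph

def degree_centrality(graph):
    """Each node's degree, normalized by the maximum possible degree (n - 1)."""
    n = len(graph)
    return {node: len(peers) / (n - 1) for node, peers in graph.items()}

# Illustrative interaction log: (sender, receiver) pairs.
log = [("alice", "bob"), ("bob", "carol"), ("alice", "carol"), ("dave", "alice")]
graph = build_graph(log)
centrality = degree_centrality(graph)
# alice exchanged messages with all three others, so her centrality is 1.0.
```

Real learning-analytics work would of course use richer models (weighted edges, temporal dynamics, content analysis), but even this toy example shows how interaction traces reveal structure that is invisible in raw logs.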

Readings for the week:

- Howard Rheingold interview with me
- Learning analytics as a research and practitioner domain

Slideshare presentation:

Eli 2012 Sensemaking Analytics

Sources:

Dervin, B. (2003) ed. by Foreman-Wernet, L., & Lauterbach, E. Sense-making methodology reader: selected writings of Brenda Dervin. New York: Hampton Press

Kari, J. (1998) Making sense of sense-making: from metatheory to substantive theory in the context of paranormal information seeking. Paper presented at Nordis-Net workshop (Meta)theoretical stands in studying library and information institutions: individual, organizational and societal aspects, November 12-15 1998, Oslo, Norway

Klein, G., Moon, B., and Hoffman, R. R. (2006) ‘Making sense of sensemaking 1: Alternative perspectives.’ IEEE Intelligent Systems 21, (4) 70–73. doi:10.1109/MIS.2006.75

Russell, D. M., Stefik, M. J., Pirolli, P., and Card, S. K. (1993) ‘The cost structure of sensemaking.’ In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York: Association for Computing Machinery: 269−276. doi:10.1145/169059.169209

Schwandt, D. R. (2005) ‘When managers become philosophers: Integrating learning with sensemaking.’ Academy of Management Learning & Education 4, (2) 176–192

Weick, K. E. (1988) ‘Enacted sensemaking in crisis situations.’ Journal of Management Studies 25, (4) 305-317

Weick, K. E. (1993) ‘The collapse of sensemaking in organizations: The Mann Gulch disaster.’ Administrative Science Quarterly 38, (4) 628-652

Weick, K. E., Sutcliffe, K. M., and Obstfeld, D. (2005) ‘Organizing and the process of sensemaking.’ Organization Science 16, (4) 409-421
