
Technology Enhanced Knowledge Research Institute (TEKRI)

TEKRI blogs

Why so many questions?

Jon Dron's blog - May 28, 2015 - 19:59

At Athabasca University, our proposed multi-million dollar investment in a student relationship management system, dubbed the 'Student Success Centre' (SSC), is causing quite a flood of discussion and debate among faculty and tutors at the moment. Though I do see some opportunities in this if (and only if) it is very intelligently and sensitively designed, there are massive and potentially fatal dangers in creating such a thing.  See a previous post of mine for some of my worries. I have many thoughts on the matter, but one thing strikes me as interesting enough to share more widely and, though it has a lot to do with the SSC, it also has broader implications.

Part of the justification for the SSC is that an alleged 80% of current interactions with students are about administrative rather than academic issues. I say 'alleged' because such things are notoriously hard to measure with any accuracy. But let's assume that it actually is accurate.

How weird is that?

Why is it that our students (apparently) need to contact us for admin support in overwhelming numbers but actually hardly talk at all about the complicated subjects they are taking? Assuming that these 80% of interactions are not mostly to complain about things that have gone wrong (if so, an SSC is not the answer!) then it seems, on the face of it, more than a bit topsy-turvy. One reasonable explanation might be that our course materials are so utterly brilliant that they require little further interaction, but I am not convinced that this sufficiently explains the disparity. Students are mostly spending 100+ hours on academic work for each course whereas (I hope) at most a couple of hours are spent on administrivia. No matter how amazing our courses might be, the difference is remarkable. It is doubly remarkable when you consider that a fair number of our courses do involve at least some required level of interaction which, alone, should easily account for most if not more than all of that remaining 20%. In my own courses it is a lot more than that, and I am aware of many others with very active Landing groups, Moodle forums, webinar sessions, and even the occasional visit to an immersive world.

It is also possible that our administrative processes are extremely opaque and ill-explained. This certainly accords with my own experience of trying to work out something as simple as how much a course would cost or the process needed to submit project work. But, if that is the case, and assuming our distance, human-free teaching works as well as we believe it does, then why can we not a) simplify the processes and b) provide equally high-quality learning materials for our admin processes so that students don’t need to bother our admin staff so much? If our course materials are so great then that would seem, on the face of it, very much more cost-effective than spending millions on a system that is at least as likely to have a negative as a positive impact and that actually increases our ongoing costs considerably. It is also quite within the capabilities of our existing skillset.

Even so, it seems very odd to me that students can come to terms with inordinately complex subjects from philosophy to biochemistry, but that they are foiled by a simple bit of bureaucracy and need to seek human assistance. It may be hard, but it is not beyond the means of a motivated learner to work out, especially given that we are specialists in producing high-quality learning materials that should make such things very clear. And in motivation, I think, lies the key.

Other people matter

Other people are wonderful things when you need to learn something, pretty much across the board. Above all, they matter when there is no obvious reason to be interested in something or care about it for its own merits, and bureaucratic procedures are seldom very interesting. I have known only one person in my whole life that actually likes filling in forms (I think it is a meditative pursuit - my father felt much the same way about dishwashing and log sawing) but, for most people, it is not an exciting activity. I hypothesize that our students tend to need less academic than bureaucratic help at least partly because, by and large, for the coursework they are very self-motivated people learning things that interest them, whereas our bureaucracy is at best a means to an end, at worst a demotivating barrier. It would not help much to provide great teaching materials for bureaucratic procedures because 99% of students would have no intrinsic interest in learning about them, and it would have zero value to them in any future activity. Why would they bother? It is far easier to ask someone.

Our students actually like the challenge of facing and solving problems in their chosen subjects - in fact, that's one of the great joys of learning. They don't turn to tutors to discuss things because there are plenty of other ways of getting the help they need, both in course materials and elsewhere, and it is fun to overcome obstacles. The more successful ones tend to have supportive friends, families or colleagues, or are otherwise very single-minded. They tend to know why they are doing what they are doing. We don't get many students that are not like this, at least on our self-paced courses, because either they don't bother coming in the first place or they are among the scarily large percentage that drop out before starting (we don't count them in our stats though, in fairness, neither do face-to-face universities).

But, of course, that only applies to students that do really like the process of learning and most of what they are learning, that know how to do it and/or that have existing support networks. It does not apply to those that hit very difficult or boring spots, that give up before they start, that hit busy times that mean they cannot devote the energy to the work, that need a helping hand with the process but cannot find it elsewhere, or that don't bother even looking at a distance option at all because they do not like the isolation it (apparently) entails. For those students, other people can help a lot. Even for our own students, over half (when asked) claim that they would appreciate more human interaction. And those are the ones that have knowingly self-selected a largely isolated process and that have not already dropped out.

Perhaps more worryingly, it raises concerns about the quality of the learning experience. Doing things alone means that you miss out on all the benefits of a supportive learning community. You don't get to argue, to explain, to question, save in your own head or in formal, largely one-way, assignments. You don't get multiple perspectives, different ways of seeing, opportunities to challenge and be challenged. You don't get the motivation of writing for an audience of people that you care about. You don't get people that care about you and the learning community providing support when times are hard, nor the pleasure of helping when others are in difficulty. You don't get to compare yourself with others, or the chance to reflect on how you differ and whether that is a good or bad thing. You don't get to model behaviours or see those behaviours being modelled. These are just some of the notable benefits of traditional university systems that are relatively hard to come by in Athabasca's traditional self-paced model (not in all courses, but in many). It's not at all about asking questions and getting solutions. It's about engaging in a knowledge creation process with other people. There are distinct benefits of being alone, notably in the high degree of control it brings, but a bit of interaction goes a long, long way. It takes a very special kind of person to get by without that, and the vast majority of our successful students (at least in undergraduate self-paced courses) are exactly that special kind of person.

If it is true that only 20% of interactions are currently concerned with academic issues, that is a big reason for concern, because it means our students are missing out on an incredibly rich set of opportunities in which they can help one another as well as interact with tutors. Creating an SSC system that supports what is therefore, for those that are not happy alone (i.e. the ones we lose or never get in the first place), an impoverished experience seems simply to ossify a process that should at least be questioned. It is not a solution to the problem - it is an exacerbation of it, further entrenching a set of approaches and methods that are inadequate for most students (the ones we don't get or keep) in the first place.

A sustainable future?

As a university seeking sustainability we could simply continue to concentrate on addressing the needs of self-motivated, solitary students that will succeed almost no matter what we do to them, and just make the process more cost-efficient with the SSC. If we have enough of those students, then we will thrive for some time to come, though I can’t say it fits well with our open mission and I worry greatly about those we fail to help. If we want to get more of those self-guided students then there are lots of other things we should probably do too, like dropping the whole notion of fixed-length courses (smaller chunks mean the chances of hitting the motivation sweet-spot are higher) and disaggregating assessment from learning (because extrinsic motivation kills intrinsic motivation). But, if we are sticking with the idea of traditional courses, the trouble is that we are no longer almost alone in offering such things, and there is a finite market of self-motivated, truly independent learners who (if they have any sense) will find cheaper alternatives that offer the same or greater value. If all we are offering is the opportunity to learn independently and a bit of credible certification at the end of it, we will wind up competing on price with institutions and businesses that have deeper coffers, cheaper staff, and fewer constraints. In a cut-throat price war with better-funded peers, we are doomed.

If we are to be successful in the future then we need to make more of the human side of our teaching, not less, and that means creating richer, more direct channels to other people in this learning community, not automating methods that were designed for the era of correspondence learning. This is something that, not coincidentally, the Landing is supposed to help with, though it is just an exemplar and at most a piece of the puzzle - we ideally want connection to be far more deeply embedded everywhere rather than in a separate site. It is also something that current pilot implementations of the SSC are antagonistic towards, thanks mainly to equating time and effort, focusing on solving specific problems rather than human connection, failing to support technological diversity, and standing as an obstacle between people that just need to talk. It doesn't have to be built that way. It could almost as easily vanish into the background, be seamlessly hooked into our social environments like email, Moodle and the Landing, and be an admin tool that gives support when needed but disappears when not. And there is no reason whatsoever that it needs to be used to pay tutors by the recorded minute, a bad idea that has been slung on the back of it and that has no place in our culture. Though not what the pilot systems do at all, a well-designed system like this could step in or be called upon when needed, could support analytics that would be genuinely helpful, and could improve management information, all without getting in the way of interaction. In fact, it could easily be used to enhance it, because it could make patterns of dialogue more visible and comprehensible.

In conclusion

At Athabasca we have some of the greatest distance educators and researchers on the planet, and that greatness rubs off on those around them. As a learning community, knowledge spreads among us and we are all elevated by it. We talk about such things in person, in meetings, via Skype, in webinars, on mailing lists, on the Landing, in pubs, in cafes, etc. And, as a result, ideas, methods and values get created, transformed and flow through our network. This makes us unique - as all learning communities are unique - and creates the distinctive culture and values of our university that no other university can replicate. Even when people leave, they leave traces of their ideas and values in those that remain, traces that get passed along long after they have gone and become part of the rich cultural identity that defines us. It's not mainly about our structures, processes and procedures: except when they support greater interaction, those actually get in the way much of the time. It's about a culture and community of learning. It's about the knowledge that flows in and through this shifting but identifiable crowd. This is a large part of what gives us our identity. It's exactly the same kind of thing that means we can talk about (say) the Vancouver Canucks or Apple Inc. as a meaningful persistent entity, even though not one of the people in the organization is the same as when it began and virtually all of its processes, locations, strategies and goals beyond the most basic have changed, likely many times.

The thing is, if we hide those people behind machines and processes, separate them through opaque hierarchies, and reduce the tools and opportunities for them to connect, we lose almost all of the value. The face of the organization becomes essentially the face of the designer of the machine or the process and the people are simply cogs implementing it. That's not a good way forward, especially as there are likely quite a few better machine and process designers out there. Our people - staff and students - are the gold we need to mine, and they are also the reason we are worth saving. We need to be a university that takes the distance out of distance learning, that connects, inspires, supports and nurtures both its staff and its students. Only then will we justly be able to claim to have a success centre.

 

Digital Learning Research Network Conference

elearnspace (George Siemens) - May 21, 2015 - 06:37

I’ve been working with several colleagues on arranging the upcoming Digital Learning Research Network (dLRN) conference at Stanford, October 16-17, 2015. The call for papers is now open. We are looking for short abstracts – 250 words – on topics of digital learning. The deadline is May 31. Our interest is to raise the nuance and calibre of the discussion about education in a digital era; one where hype and over-promising of the power of technology have replaced structured interrogation of the meaning of the changes that we are experiencing. We have a great lineup of speakers confirmed and are expanding the list rapidly. The conference will include social scientists, activists, philosophers, researchers, and rabble-rousers. It will be an intentionally eclectic mix of people, institutions, and ideas as we explore the nodes that are weaving the network of education’s future. Representation from the following research organizations has already been confirmed: Stanford, Smithsonian, University of Michigan, University of Edinburgh, Columbia University, CMU, state systems (Georgia, California, Texas, and Arkansas), and SRI.

Join us for what will be a small (max 150 people) and exciting exploration of a) what education is becoming, b) who we (as learners, activists, and academics) are, and c) where these two intersect in forming the type of learning system that will enable us to create the type of society that we want for future generations.

For a more thoughtful analysis of the conference and our call for submissions, see Bonnie Stewart, Kate Bowles, and Kristen Eshleman.

From the call:

Learning introduces students to practices of sensemaking, wayfinding, and managing uncertainty. Higher education institutions confront the same experiences as they navigate changing contexts for the delivery of services. Digital technologies and networks have created a new sense of scale and opportunity within global higher education, while fostering new partnerships focused on digital innovation as a source of sustainability in volatile circumstances. At the same time, these opportunities have introduced risks in relation to the ethics of experimentation and exploitation, emphasizing disruption and novelty and failing to recognise universities’ long-standing investment in educational research and development.

Scientists: Earth Endangered by New Strain of Fact-Resistant Humans

Jon Dron's bookmarks - May 13, 2015 - 09:28

"The research, conducted by the University of Minnesota, identifies a virulent strain of humans who are virtually immune to any form of verifiable knowledge, leaving scientists at a loss as to how to combat them."

Marvellous.

Address of the bookmark: http://www.newyorker.com/humor/borowitz-report/scientists-earth-endangered-by-new-strain-of-fact-resistant-humans

Retirement

Terry Anderson's blog - May 3, 2015 - 23:44
This month I turn 65 and of course had to try out the Howoldbot to confirm it. Much to my amazement, it got my age correct (minus 10 days). Well, the picture was taken a couple of years ago, so I guess I am an early maturer!   Reaching this milestone has triggered my long standing […]

BusinessTown

Jon Dron's bookmarks - May 3, 2015 - 11:01

Richard Scarry meets Silicon Valley. Wonderful and true.

Address of the bookmark: http://welcometobusinesstown.tumblr.com/

The Linearity of Stephen Downes. Or a tale of two Stephens

elearnspace (George Siemens) - May 3, 2015 - 10:22

Stephen Downes responds to my previous post: “I said, “the absence of a background in the field is glaring and obvious.” In this I refer not only to specific arguments advanced in the study, which to me seem empty and obvious, but also the focus and methodology, which seem to me to be hopelessly naïve.”

Stephen makes the following points:
1. George has recanted his previous work and is now playing the academic game
2. Research as is done in the academy today is poor
3. Our paper is bad.

Firstly, before I respond to these three points, I want to foreground an interesting aspect of Stephen’s dialogue in this post. I’m going to call it “academic pick-up artist” strategy (i.e. tactics to distract from the real point of engagement or to bring your target into some type of state of emotional response). I first encountered this approach from the talented Catherine Fitzpatrick (Prokofy Neva) during CCK08. Here’s how it works: employ strategies that are intended to elicit an emotional response but don’t quite cross over into ad hominem attacks. The language is at times dismissive, humorous, and aggressive. In Stephen’s case, he uses terms such as: hopelessly naïve, recant his previous work, a load of crap, a shell game, a con game, trivial, muddled mess, nonsense. These flamboyant terms have an emotional impact that is not about the research and don’t advance the conversation toward resolution or even shared understanding. I’ll try to avoid responding in a similar spirit, but I’ll admit that it is not an easy temptation to resist.

Secondly, Stephen makes some statements about me personally. He is complimentary in his assessment of me as a person. I have known Stephen since he did a keynote in Regina in 2001. I’ve followed his work since and have greatly valued his contributions to our field and his directness. I count him as a friend and close collaborator. I enjoy differences of opinion and genuinely appreciate and learn from his criticism. (do a “George Siemens” search on OLDaily – he has provided many learning opportunities for me).

Stephen says a few things about my motivations that require some clarification, specifically that I am trying to make an academic name for myself and that I am recanting previous work. I honestly don’t care about making an academic name for myself. I am motivated by doing interesting things that have an impact on access to learning and quality of learning for all members of society. I am a first-in-family degree completer – an immigrant, and from a low socio-economic background. There are barriers that exist for individuals in this position: psychologically, emotionally, and economically. Higher education provides a critical opportunity for people to move between the economic-social strata of society. When access is denied, society becomes less equitable and hope dims. My interest in preparing for digital universities is to ensure that opportunities exist, that equity is fostered, and that a democratic and engaged citizenry is nurtured. The corporatization of higher education is to be resisted, as values of “profit making” are often in conflict with values of “equity and fairness”. I want my children to inherit a world that is more fair and more just than what my generation experienced.

I will return later to Stephen’s assertion that I am recanting previous work.

1. George has recanted his previous work and is now playing the academic game

With academic pickup artistry and my motivations foregrounded, I’ll turn to Stephen’s assertions.

It has in recent years been engaged in a sustained attack on the very idea of the MOOC and alternative forms of learning not dependent on the traditional model of the professor, the classroom, and the academic degree. It is resisting, for good reason, incursions from the commercial sector into its space, but as a consequence, clinging to antiquated models and approaches to research.

This gets at the heart of views that Stephen and I have discussed on numerous occasions. I believe in the value of the professoriate. In this instance, he is Illich to my Freire. As I interpret Stephen’s work, he would like to see all learning opportunities and control shift to the individual, and sees limited value in a higher education system that is as much about preserving faculty positions as it is about preserving the academy. Stephen and I both resist the commercialization of education but differ in how we see the university of the future. Stephen wants a university model without universities. This comes, I believe, from his unfortunate experiences in doing his PhD, where his supervisory panel played a heavy hand in determining what was and wasn’t research that they valued. I’m sure his experience isn’t unique.

Faculty can be stunning idiots when it comes to preserving and perpetuating their egos. The pursuit of knowledge and advocacy for equity often takes a back seat to ego and the goal of building a faculty “mini me” who is expected to pick up a research stream done by a panel or department and toe the line. In contrast to Stephen’s views, I love universities. I want a future of more, not fewer, universities. Universities are not perfect, but they are the best model that we currently have to enable individuals to improve their position in life, and a power structure that exists to counter and comment on the corporate and government power structures. Can these goals be realized by networks of individuals (i.e. the second superpower)? If the world were populated primarily with Stephens, then it might be possible. For many people, however, education is not a goal in itself, but rather a means to employment.

Systems are needed to preserve and perpetuate the highest ideals of society. If left to chance, the views of the most aggressive will become the norm. While society slept, many of the wealthiest were busy creating a tax system that preserved their resources and created inequity. In the past, unions existed to serve as an organizing structure to advocate for the rights of individual workers. Stephen would argue that we could today do this organizing and democracy-preserving work through networks. I agree that networks are important, but argue that institutions are a type of network that has been configured to better meet these needs. Some structure is needed. Perhaps not as much as we see today in universities, but a minimum level of organization is required in order to provide learning opportunities to society’s disenfranchised. Simply giving people access is not enough. Social, scaffolded, and structured support is needed.

Perhaps as a result, part of what Siemens has had to do in order to adapt to that world has been to recant his previous work… This recantation saddens me for a variety of reasons. For one this, we – Siemens and myself and others who were involved in the development of the MOOC – made no such statements. In the years between 2008, when the MOOC was created, and 2011, when the first MOOC emerged from a major U.S. university, the focus was on innovation and experimentation in a cautious though typically exuberant attitude.

I haven’t recanted my previous work. Stephen displays a linearity of thought, of cause/effect, that confuses me. I see the world in networked structures. Learning is about network-making at neuronal, conceptual, and external levels. Knowledge is networked. The history of ideas is networked. I don’t see a “one or the other” approach to research, to corporate involvement in education, or to learning in general. Instead, I see a 3D lattice-like network structure that has multiple dimensions and connections between those dimensions.

Siemens has moved over to that camp, now working with EdX rather than the connectivist model we started with… Again, these rash and foolish statements [from Agarwal] are coming from a respected university professor, a scion of the academy, part of this system Siemens is now attempting to join.

I disagree with this statement, largely because I have privileged access to my own thinking. In this instance, and in at least one prior instance – when I did a talk at Online Educa many years ago and he stated that I had become fully corporate – Stephen is putting me in a box. Nobody puts George in a box! I am part of the academy in terms of employment. I am part of the academy by nature of grant writing and research. I am part of the academy in terms of publishing with my peers. But I am not a one-dimensional entity. I did not take a traditional academic route. My publication history is not typical. Many of my citations come from open public works rather than traditional publications. To say that I have recanted prior work is simply not true. I am bringing my previous work into a different context – one that allows for networks and university structures to exist. Stephen is doing something similar with his work with LPSS. Has he sold out to the corporate oil and gas sector?

The inclusion of the Chronicle article as part of Stephen’s comments makes this a more complex discussion. We are now not only looking at what Stephen feels is a bad report, but also at how my professional ambitions are being interpreted through a Chronicle piece. My criticism here, and something that was not clear in the Chronicle article, is about the academy’s embrace of MOOCs. Stephen takes the “we” personally, whereas he was never the intended target of the “we”. I would love to see all media interviews and recordings posted fully with articles such as this. My use of “we” in the above quote is problematic. By “we”, I was speaking about education/hypesters/corporate entities like Udacity/Coursera. This is something that Rolin Moe also asks about.

And what is key here is that he [George, over here, still in a box] does not believe our work was based in research and evidence… He says nice things about us. But he does not believe we emphasize research and evidence.

I was making an argument that didn’t come off clearly. This is perhaps a similar failing to Stephen’s previous assertions that his work is about “making”, not only reporting. I don’t believe he meant it in the way that others interpreted it. What Stephen was saying there, and I’m saying here, is that there is an approach to work (in my case research and in his case writing software) that produces hope for desirable outcomes rather than despair at seeing a seemingly inevitable techno-solutionist outcome. I’m not denying that Stephen does research. But he has placed himself in a difficult position: he doesn’t want the institution of higher education but he wants to be seen by people in the academy as someone who does the same type of work as they do. Stephen defines himself as a philosopher. His papers reflect this spirit. He doesn’t frequently subject his ideas to the traditional peer review that defines academic research (for obvious reasons – he doesn’t trust that process or feel it has much value). His writing is open and transparent, however, so anyone could engage and critique if they were so inclined.

2. Research as is done in the academy today is poor

The comments above aren’t a direct engagement yet with our paper. In the second half of this post, Stephen expands on his primary concerns which are about educational research in general.

He says:

Why is this evidence bad? The sample sizes are too small for quantificational results (and the studies are themselves are inconsistent so you can’t simply sum the results). The sample is biased in favour of people who have already had success in traditional lecture-based courses, and consists of only that one teaching method. A very narrow definition of ‘outcomes’ is employed. And other unknown factors may have contaminated the results. And all these criticisms apply if you think this is the appropriate sort of study to measure educational effectiveness, which I do not.

Educational research is often poorly done. Research in social systems is difficult to reduce to a set of variables and relationships between those variables. Where we have large amounts of data, learning analytics can provide insight, but often require greater contextual and qualitative data. Where studies, such as Bonnie Stewart’s recent PhD, are qualitative, criticism about sample size can be levelled. These are both unfair in that no single node represents the whole knowledge network. Research is a networked process of weaving together results, validating results, refuting results, and so on. It is essentially a conversation that happens through results and citations. The appeal to evidence is essentially to state that opinions alone are not sufficient. The US Department of Education has a clear articulation of what they will count as evidence for grants. It’s a bit depressing, actually, a utopia for RCTs. While Stephen says our evidence is poor, he doesn’t provide what he feels is better evidence. Where, outside of peer-reviewed articles and meta-studies, can academics, administrators, and policy makers find support and confidence to make decisions (the stated intent in the introduction of our report)? What is our foundation for making decisions? If the foundation is opinions and ideas without evidence, then any edtech startup’s claim is equally valid to researchers, bloggers, and reformers. Where is the “real research being performed outside academia”, and what are the criteria for calling that activity research while dismissing what’s going on in the academy – funded by NSF, JISC, OLT, and SSHRC – as largely trivial?

Stephen then makes an important point, and one that needs to be considered: that the meta-studies that we used are “hopelessly biased in favour of the traditional model of education as practiced in the classrooms where the original studies took place.” This is a significant challenge. How do we prepare for digital universities when we are largely duplicating classrooms? Where is the actual innovation? (I’d argue much of it can be found in things like cMOOCs and other technologies that we address in chapter 5 of the report). Jon Dron largely agrees with Stephen and suggests that a core problem exists in the report in that it is a “view from the inside, not from above.”

I need to reflect more on Jon’s and Stephen’s insight about research rooted in traditional classrooms and the suitability of assessing that against a networked model of education and society.

3. Our paper is bad

At this stage, Stephen turns to the paper itself. Short answer: he doesn’t like it and it’s a trivial paper. The list of what he doesn’t like is rather small actually.

At this stage of reviewing his post, I’m left with the impression that much of Stephen’s complaint about our paper is actually a discussion with himself: the Stephen that disagreed with his PhD supervisory committee and the Stephen that today has exceeded the impact of members of that committee through blogging, his newsletter, presentations, and software writing. Our paper appears to be more of a “tool to think with”, enabling Stephen to hold that discussion with his two selves – effectively, the Stephen of today affirming that the Stephen in front of the PhD committee made the right decision: that there are multiple paths to research, that institutions can be circumvented and that individuals, in a networked age, have control and autonomy.

Stephen’s next statement is wrong: “With a couple of exceptions, these are exactly the people and the projects that are the “edtech vendors” vendors Siemens says he is trying to distance himself from. He has not done this; instead he has taken their money and put them on the committee selecting the papers that will be ‘representative’ of academic research taking place in MOOCs.”

The names listed were advisors on the MOOC Research Initiative – i.e. they provided comments and feedback on the timelines and methods. They didn’t select the papers. The actual peer review process included a much broader list, some from within the academy and some from the outside.

They do not have a background in learning technology and learning theory (except to observe that it’s a good thing).

In my previous post, I stated that we didn’t add to citations. We analyzed those that were listed in the papers that others submitted to MRI. Our analysis indicated that popular media influenced the MOOC conversation and the citations used by those who submitted to the grant. Many had a background in education. George Veletsianos shares his recent research:

Our tests showed that the MOOC literature published in 2013-2015 differed significantly from the MRI submissions: our corpus had a greater representation of authors from Computer Science and the Gašević et al., corpus had a greater representation of authors from Education and Industry. In other words, our corpus was less dominated by authors from the field of education than were the MRI submissions. One of Downes criticisms is the following: “the studies are conducted by people without a background in education.” This finding lends some support to his claim, though a lot of the research on MOOCs is from people affiliated with education, but to support that claim further one could examine the content of this papers and identify whether an educational theory is guiding their investigations.

He goes on to say that the MOOC conversation has changed and that greater interdisciplinarity now exists in research.

Final thoughts

Stephen and I have had variations of the conversation above many times. Sometimes it has centred on views of what is acceptable knowledge. At other times, on the role of academics and knowledge institutions in networks. Some discussions have been more political. At the core, however, is a common ground: an equitable society with opportunities for all individuals to make the lives that they want without institutions (and faculty in this case) blocking the realization of those dreams. We differ in how to go about achieving this. I value the legacy of universities and desire a future where they continue to play a valuable role. Stephen imagines a future of greater individual control, fewer boundaries, and no universities. Fundamentally, it’s a difference of how to achieve a vision that we both share.

The cost of time

Jon Dron's blog - May 1, 2015 - 10:49
A few days back, an email was sent to our ‘allstaff’ mailing list inviting us to join in a bocce tournament. This took me a bit of time to digest, not least because I felt impelled to look up what ‘bocce’ means (it’s an Italian variant of pétanque, if you are interested). I guess this took a couple of minutes of my time in total. And then I realized I was probably not alone in this - that over a thousand people had also been reading it and, perhaps, wondering the same thing. So I started thinking about how we measure costs. 

The cost of reading an email

A single allstaff email at Athabasca will likely be read by about 1200 people, give or take. If such an email takes one minute to read, that's 1200 minutes - 20 hours - of the institution’s time being taken up with a single message. This is not, however, counting the disruption costs of interrupting someone's train of thought, which may be quite substantial. For example, this study from 2002 reckons that, not counting the time taken to read email, it takes an average of 64 seconds to return to previous levels of productivity after reading one. Other estimates based on different studies are much higher - some studies suggest the real recovery time from interruptions to tasks could be as high as 15-20 minutes. Conservatively, though, it is probably safe to assume that, taking interruption costs into account, an average allstaff email that is read but not acted upon consumes an average of two minutes of a person's time: in total, that's about 40 hours of the institution's time, for every message sent. Put another way, we could hire another member of staff for a week for the time taken to deal with a single allstaff message, not counting the work entailed by those that do act on the message, nor the effort of writing it. It would therefore take roughly 48 such messages to account for a whole year of staff time. We get hundreds of such messages each year.

But it’s not just about such tangible interruptions. Accessing emails can take a lot of time before we even get so far as reading them. Page rendering just to view a list of messages on our web front end for our email system is an admirably efficient 2 seconds (i.e. 40 minutes of the organization’s time for everyone to be able to see a page of emails, not even to read their titles). Let’s say we all did that an average of 12 times a day - that's 8 hours, or more than a day of the institution's time, taken up with waiting for that page to render each day. Put another way, as we measure such things, if it took four seconds, we would have to fire someone to pay for it. As it happens, for another university for which I have an account, using MS Exchange, simply getting to the login screen of its web front end takes 4 seconds. Once logged in (a further few seconds thanks to Exchange's insistence on forcing you to tell it that your computer is not shared even though you have told it that a thousand times before), loading the page containing the list of emails takes a further 17 seconds. If AU were using the same system, using the same metric of 12 visits each day, that could equate to around 68 hours of the institution's time every single day, simply to view a list of emails, not including a myriad of other delays and inefficiencies when it comes to reading, responding to and organizing such messages.

Of course, we could just teach people to use a proper email client and reduce the delay to one that is imperceptible, because it occurs in the background - webmail is a truly terrible idea for daily use - or simply remind them not to close their web browsers so often, or to read their emails less regularly. There are many solutions to this problem. Like all technologies, especially softer ones that can be used in millions of ways, it ain't what you do it's the way that you do it.
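For anyone who wants to check or adjust the arithmetic, here is a minimal sketch in Python; the headcount, reading time, interruption allowance and page-render timings are the illustrative figures quoted above, not measurements.

```python
# Back-of-envelope institutional cost of small, repeated time sinks,
# using the illustrative figures from the post (not measurements).

STAFF = 1200  # approximate number of people who read an 'allstaff' message

def institution_hours(seconds_per_person, people=STAFF):
    """Hours of institutional time consumed if each person spends
    seconds_per_person seconds on something once."""
    return seconds_per_person * people / 3600

# One allstaff email, one minute each to read
print(institution_hours(60))            # 20.0 hours

# Reading plus recovering from the interruption: ~2 minutes each
per_message = institution_hours(120)    # 40.0 hours, roughly one work week
print(per_message)

# How many such messages add up to a working year (~48 weeks x 40 hours)?
print(48 * 40 / per_message)            # 48.0 messages

# Rendering the webmail message list: 2 seconds, 12 times a day, per person
print(institution_hours(2 * 12))        # 8.0 hours of waiting, every day

# A slower front end: ~17 seconds per visit, 12 visits a day
print(institution_hours(17 * 12))       # 68.0 hours, every day
```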

But wait - there's more

Email is just a small part of the problem, though: we use a lot of other websites each day. Let’s conservatively assume that, on average, everyone at AU visits, say, 24 pages in a working day (for me that figure is always vastly higher) and that each page averages out at about 5 seconds to load. That’s two minutes per person. Multiplied by 1200, it's another week of the institution’s time ‘gone' every day, simply waiting for pages to load.

And then there are the madly inefficient bureaucratized processes that are dictated and mediated by poorly tailored software. When I need to log into our CRM system I reckon that simply reading my tasks takes a good five minutes. Our leave reporting system typically eats 15 minutes of my time each time I request leave (it replaces one that took 2-3 minutes). Our finance system used to take me about half an hour to add in expenses for a conference but, since downgrading to a baseline version, now takes me several hours, and it takes even more time from others that have to give approvals along the way. Ironically, the main intent behind implementing this was to save money spent on staffing. I could go on, but I think you see where this is heading. Bear in mind, though, that I am just scratching the surface.
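The same sums apply here. A minimal sketch, using the illustrative page counts and timings above (the ten-requests-a-year rate for leave is a purely hypothetical figure, added only to make the example concrete):

```python
# Institutional cost of everyday page loads and clunky admin systems,
# again using illustrative figures rather than measurements.

STAFF = 1200
WORK_WEEK_HOURS = 40

def institution_hours(seconds_per_person, people=STAFF):
    return seconds_per_person * people / 3600

# 24 pages a day at roughly 5 seconds each = 2 minutes per person per day
browsing = institution_hours(24 * 5)
print(browsing)                          # 40.0 hours: about a work week, every day
print(browsing / WORK_WEEK_HOURS)        # 1.0 person-weeks per day

# Leave requests: ~15 minutes each instead of the 2-3 minutes they used to take,
# i.e. roughly 12 extra minutes per request. Assume (hypothetically) 10 requests
# per person per year:
extra_seconds_per_year = (15 - 3) * 60 * 10
print(institution_hours(extra_seconds_per_year))  # 2400.0 hours a year - more than a full-time year
```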

Time and work

My point in writing this is not to ask for more efficient computer and admin systems, though that would indeed likely be beneficial. Much more to the point, I hope that you are feeling uncomfortable or even highly sceptical about how I am measuring this. Not with the figures: it doesn’t much matter whether I am wrong with the detailed timings or even the math. It is indisputable that we spend a lot of time dealing with computer systems and the processes that surround them every day, and small inefficiencies add up. There's nothing particularly peculiar to ICTs about this either - for instance, think of the time taken to walk from one office to another, to visit the mailroom, to read a noticeboard, to chat with a colleague, and so on. But is that actually time lost or does it even equate precisely to time spent?  I hope you are wondering about the complex issues with equating time and dollars, how we learn, why and how we account for project costs in time, the nature of technologies, the cost vs value of ICTs, the true value of bocce tournament messages to people that have no conceivable chance of participating in them (much greater than you might at first imagine), and a whole lot more. I know I am. If there is even a shred of truth in my analysis, it does not automatically lead to the conclusion that the solution is simply more efficient computer systems and organizational procedures. It certainly does bring into question how we account for such things, though, and, more interestingly, it highlights even bigger intangibles: the nature and value of work itself, the nature and value of communities of practice, the role of computers in distributed intelligence, and the meaning, identity and purpose of organizations. I will get to that in another post, because it demands more time than I have to spend right now (perhaps because I receive around 100 emails a day, on average). 

On Research and Academic Diversity

elearnspace (George Siemens) - April 30, 2015 - 12:15

In my previous post, I mentioned the release of our report Preparing for the Digital University. Stephen Downes responds by saying “this is a really bad study”. He may be right, but I don’t think it is for the reasons that he suggests: “What it succeeds in doing, mostly, is to offer a very narrow look at a small spectrum of academic literature far removed from actual practice”. This resulted in a Twitter exchange about missing citations and forgotten elearning history. Rolin Moe responded by saying that the history that we included in our citation analysis of MOOCs was actually the one that most non-elearning folks follow: “depending on lens, Friedman Pappano & Young are more representative of who’s driving EdTech conversation”.

We took two approaches in the report: one a broad citation analysis of meta-studies in distance, online, and blended learning. This forms the first three chapters. While we no doubt missed some sources, we addressed many of the most prominent (and yes, prominence is not a statement of quality or even impact). In the fifth chapter, we evaluated the citations based on the MOOC Research Initiative, which received close to 300 submissions. We only analyzed the citations – we didn’t add to them or comment on their suitability. Instead, our analysis reflects the nature of the dialogue in academic communities. In this regard, Stephen’s criticism is accurate: the narrative missed many important figures and many important developments.

The heart of the discussion for me is about the nature of the educational technology narrative. At least three strands of discourse exist: the edtech hypesters, the research literature in peer-reviewed publications, and the practitioner space. These are not exclusive spaces as there is often overlap. Stephen is the most significant figure in elearning. His OLDaily is read by tens of thousands of readers daily – academics, students, companies. His work is influential not only in practice but also in research, as his Google Scholar profile indicates. Compare his citations with those of many academics in the field and it’s clear that he has an impact on both practice and research.

Today’s exchange comes against the backdrop of many conversations that I’ve had over the past few weeks with individuals in the alt-ac community. This community, certainly in the blogs of folks like Bonnie Stewart, Jim Groom, D’Arcy Norman, Alan Levine, Stephen Downes, Kate Bowles, and many others, is the most vibrant knowledge space in educational technology. In many ways, it is five years ahead of mainstream edtech offerings. Before blogs were called web 2.0, there were Stephen, David Wiley, Brian Lamb, and Alan Levine. Before networks in education were cool enough to attract the MacArthur Foundation, there were open online courses and people writing about connectivism and networked knowledge. Want to know what’s going to happen in edtech in the next five years? This is the space where you’ll find it, today.

What I’ve been grappling with lately is “how do we take back education from edtech vendors?”. The jubilant rhetoric and general nonsense cause me mild rashes. I recognize that higher education is moving from an integrated end-to-end system to more of an ecosystem with numerous providers and corporate partners. We have gotten to this state on auto-pilot, not through intentional vision.

When technology drives education, a number of unwelcome passengers are included: a focus on efficacy over impact, metrics of management, reductionist thinking, etc. To sit at the table with academics and corporate players is essentially to acquiesce to capital as a driving and motivating factor. Educators have largely been outmaneuvered, as indicated by the media's tendency to frame any resistance by faculty and teachers as luddite. We can’t compete through capital at this table. So instead we have to find an additional lever for influence.

One approach is to emphasize loosely coupled networks organized by ideals through social media. This is certainly a growing area of societal impact on a number of fronts including racism, sexism, and inequality in general. In education, alt-ac and bloggers occupy this space.

Another approach, and one that I see as complementary and not competitive, is to emphasize research and evidence. At the decision-making table in universities and schools, research is the only lever that I see as having comparable capacity to capital in shaping how decisions are made and how values are preserved. This isn’t to discount social networked organization or alt-ac. It is to say, however, that in my part of the world and where I am currently in my career/life, this is the most fruitful and potentially influential approach that I can adopt.

Preparing for the Digital University

elearnspace (George Siemens) - April 30, 2015 - 06:19

We’ve released a new report: Preparing for the Digital University: a review of the history and current state of distance, blended, and online learning (.pdf).

The report is an attempt to reposition the narrative of digital learning away from “look, my cool new technology does this” to something more like “here’s what we know from research and here’s what we can extrapolate”. Innovation is a bunnies and kittens type of concept – who could possibly oppose it? Sometimes new is not better, especially when it impacts the lives of people. Remember the failure of the Udacity and San Jose State University project? Even passing familiarity with research in the learning sciences could have anticipated the need for scaffolded social support. Instead, a large number of at-risk students had yet another blow delivered to their confidence as learners, further entrenching negative views of their capability to succeed in university. This is bad innovation. It hurts people while it gains media accolades and draws VC funding. With our report, we are hoping to address exactly this type of failure by providing a research lens on how technology and learning are related in various contexts.

Five articles are included in the report and provide an overview of the research literature, while a final article looks at future technology infrastructure:
- Distance education
- Blended learning
- Online learning
- Credentialing
- MOOC research
- Future learning technology infrastructures

From the introduction:

It is our intent that these reports will serve to introduce academics, administrators, and students to the rich history of technology in education with a particular emphasis of the importance of the human factors: social interaction, well-designed learning experiences, participatory pedagogy, supportive teaching presence, and effective techniques for using technology to support learning.

The world is digitizing and higher education is not immune to this transition. The trend is well underway and seems to be accelerating as top universities create departments and senior leadership positions to explore processes of innovation within the academy. It is our somewhat axiomatic assessment that in order to understand how we should design and develop learning for the future, we need to first take a look at what we already know. Any scientific enterprise that runs forward on only new technology, ignoring the landscape of existing knowledge, will be sub-optimal and likely fail. To build a strong future of digital learning in the academy, we must first take stock of what we know and what has been well researched.

Can Behavioral Tools Improve Online Student Outcomes? Experimental Evidence from a Massive Open Online Course

Jon Dron's bookmarks - April 28, 2015 - 11:35

Well-written and intelligently argued paper from Richard W. Patterson, using an experimental (well, nearly) approach to discover the effects of a commitment device, a reminder tool, and a focusing tool on course completion and performance in a MOOC. It seems that providing a tool that supports students in pre-committing to limits on 'distracting Internet time' (and that both measures and controls this) has a striking positive effect, though largely on those that appear to be extrinsically motivated: they want to successfully complete the course, rather than to enjoy the process of learning. Reminders are pretty useless for anyone (I concur - personally I find them irritating and, after a while, guilt-inducing and thus more liable to cause procrastination) and blocking distracting websites has very little if any effect - unsurprising really, because they don't really block distractions at all: if you want to be distracted, you will simply find another way. This is good information.

It seems to me that those who have learned to be extrinsically motivated might benefit from this, though it will reinforce their dangerous predilection, encourage bad habits, and benefit most those that have already figured out how to work within a traditional university system and that are focused on the end point rather than the journey. While I can see some superficially attractive merit in providing tools that help you to achieve your goals by managing the process, it reminds me a little of diet plans and techniques that, though sometimes successful in the short term, are positively harmful in the long term. This is the pattern that underlies all behaviourist models - it sort-of works up to a point (the course-setter's goals are complied with), but the long-term impact on the learner is generally counter-productive. This approach will lead to more people completing the course, not more people learning to love the subject and hungry to apply that knowledge and learn more. In fact, it opposes such a goal. This is not about inculcating habits of mind but about making people do things that, though they want to reach some further end as a result, they do not actually want to do and, once the stimulus is taken away, will likely never want to do again. It is far better to concentrate on supporting intrinsic motivation and to build learning activities that people will actually want to do - challenges that they feel impelled to solve, activities that support social needs and over which they feel some control. For that, the instructivist course format is ill-suited to the needs of most.

Abstract

Online education is an increasingly popular alternative to traditional classroom-based courses. However, completion rates in online courses are often very low. One explanation for poor performance in online courses is that aspects of the online environment lead students to procrastinate, forget about, or be distracted from coursework. To address student time-management issues, I leverage insights from behavioral economics to design three software tools including (1) a commitment device that allows students to pre-commit to time limits on distracting Internet activities, (2) a reminder tool that is triggered by time spent on distracting websites, and (3) a focusing tool that allows students to block distracting sites when they go to the course website. I test the impact of these tools in a large-scale randomized experiment (n=657) conducted in a massive open online course (MOOC) hosted by Stanford University. Relative to students in the control group, students in the commitment device treatment spend 24% more time working on the course, receive course grades that are 0.29 standard deviations higher, and are 40% more likely to complete the course. In contrast, outcomes for students in the reminder and focusing treatments are not statistically distinguishable from the control. These results suggest that tools designed to address procrastination can have a significant impact on online student performance.

Address of the bookmark: http://www.human.cornell.edu/pam/academics/phd/upload/PattersonJMP11_18.pdf

Assessing teachers’ digital competencies (Virtual Canuck)

Jon Dron's bookmarks - April 27, 2015 - 13:40

Terry Anderson on an Estonian approach to assessing teacher competences (and other projects) using Elgg - the same framework that underpins the Landing. I've downloaded the tool they have developed, Digimina, and will be trying it out, not just for exactly the purposes for which it was developed, but as the foundation for a more generalized toolset for sharing the process of assessment. May spark some ideas, I think.

A nice approach to methodology: Terry prefers the development of design principles as the 'ultimate' aim of design-based research (DBR), but I like the notion of software as a hypothesis that is used here. It's essentially a 'sciency' way of describing the notion of trying out an idea to see whether it works, one that makes no particular claims to generality but that both derives from and feeds a model of what can be done, what needs to be done, and why it should be done. The generalizable part is not the final stage, but the penultimate stage of design in this DBR model. In this sense, it formalizes the very informal notion of bricolage, capturing some of its iterative nature. It's not quite enough, I think, any more than other models of DBR quite capture the process in all its richness. This is because the activity of formulating that hypothesis itself follows a very similar pattern, at a much finer-grained scale, to that of the bigger model. When building code, you try out ideas, see where they take you, and that inspires new ideas through the process of writing as much as of designing and specifying. Shovelling that into a large-scale process model hides where a significant amount of the innovation actually happens, perhaps over-emphasizing the importance of explicit evaluation phases and underplaying the role of construction itself.

Address of the bookmark: http://terrya.edublogs.org/2015/04/24/assessing-teachers-digital-competencies/

Assessing teachers’ digital competencies

Terry Anderson's blog - April 24, 2015 - 22:37
I had the pleasure to spend a couple of days with faculty and students at the Centre for Educational Technology at Tallinn University here in Estonia. My host, Mart Laanpere, showed me a number of very interesting projects. Driven by similar motives to our work on the Athabasca Landing, they have developed the LePress system […]

Nothing new here: Arizona State and edX partnership

elearnspace (George Siemens) - April 23, 2015 - 06:22

I’m learning that if you call something that already exists by a new name, or if you get some press, you can discover well-defined concepts and claim them as your own. Today’s example: Arizona State and edX Will Offer an Online Freshman Year, Open to All

The project, called the Global Freshman Academy, will offer a set of eight courses designed to fulfill the general-education requirements of a freshman year at Arizona State at a fraction of the cost students typically pay, and students can begin taking courses without going through the traditional application process… Students who pass a final examination in a course will have the option of paying a fee of no more than $200 per credit hour to get college credit for it.

So, for $200 a credit hour ($600 for a 3-credit course), you may well pay more than you would at a small college. The fees charged, then, are not innovative or game-changing. The idea of open access? Oh, well, the OU started that in the 1960s: Brief History of OU.

The only innovation here? Marketing & PR.

Once systems like ASU, which have launched some innovative ideas over the past decade, start looking at what has been done in education and what is known about learning, and then launch a legitimately new idea rather than playing a PR game, we may have the prospect of substantial educational change.

Another attempt at Flexible Provision of courses

Terry Anderson's blog - March 31, 2015 - 08:58
Our friends from the Open University of the Netherlands (OUNL) have just had a very interesting article published that seems to be a first step towards helping education and training institutions repurpose their content for multiple audiences. This is an important, yet very challenging, task that requires that courses be created without a single […]

PISA and irony: The 2015 Brown Center Report on American Education

Jon Dron's bookmarks - March 27, 2015 - 08:51

It probably comes as no surprise that I have an extremely low opinion of PISA, the well-intentioned but operationally horrific international testing framework used to compare schooling (I use the word advisedly) in different countries. PISA matters to governments because it gives an apparently objective measure of the 'effectiveness' of education and it matters to the rest of us because governments' desire to score highly in PISA league tables has a massive (and catastrophic) effect on systems of education. This is 'teaching to the test' at a gargantuan scale, with all the awful consequences that entails. The laudable desire to improve literacy and basic knowledge leads to the consequence that, internationally, education becomes primarily concerned with compliance, standardization and the ability to perform to someone else's criteria on command. I'd like to think that there is a bit more to it than that. The cost of literacy does not have to be dehumanization or an extrinsically driven populace, and I am quite sure that is not what the OECD intends, but that is the systemic effect of these interventions. And so to this report...

This report is interesting on many levels but I would like to draw your attention to section 3, in which it is shown that there is, at an inter-country level, quite a strong negative correlation between intrinsic motivation and the ability to perform well on PISA-oriented math tests. In other words, countries reporting lower levels of intrinsic motivation tend to report higher levels of attainment (i.e. test compliance). Within a given country there is a very modest positive correlation - for instance, American kids who like math tend to do slightly better on the tests than those who do not, but it is not enough of a difference to make a difference.

The authors seem puzzled by this! I leave you to draw your own conclusions about standardized tests, grades, schools, education and government interventions. Paulo Freire and Ivan Illich would have had a field day.
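
For anyone wondering how both findings can be true at once, here is a minimal sketch - using made-up numbers purely for illustration, not the report's data - of how within-country and between-country correlations can point in opposite directions:

import pandas as pd

# Hypothetical student-level data: one row per (imaginary) student.
df = pd.DataFrame({
    "country":    ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "motivation": [0.7, 0.8, 0.9, 0.2, 0.3, 0.4, 0.45, 0.55, 0.65],
    "score":      [470, 480, 475, 575, 560, 590, 520, 515, 525],
})

# Within each country, students who like the subject do a little better...
within = df.groupby("country").apply(lambda g: g["motivation"].corr(g["score"]))
print(within)  # modestly positive in every country

# ...yet the countries with the highest average motivation have the lowest average scores.
means = df.groupby("country")[["motivation", "score"]].mean()
print(means["motivation"].corr(means["score"]))  # strongly negative

The same aggregation effect is why a country-level league table can tell a very different story from the experience of any individual classroom.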

Address of the bookmark: http://www.ewa.org/sites/main/files/file-attachments/brown_ctr_2015_v2.pdf

Differences between students using PLE and LMS systems

Terry Anderson's blog - March 26, 2015 - 08:03
I don’t usually comment on articles in “closed” journals, but I am making an exception in this case. I hope you can find it in a library database, or that one of the authors uploads it to a public site, or that you can “rent” it from Wiley for 48 hours for $6! The article: Casquero, O., Ovelar, R., Romo, […]

24 inches worth

Terry Anderson's blog - March 24, 2015 - 08:10
This week I am in the process of moving my office  from Athabasca University to home. It was a lot of work sorting, selecting and shifting.  Most of the books that I THINK I still want are now on the bookshelves here at home. However, I have doubts as to their usefulness, as the texts (of […]

Top 20 in Educational Technology to Connect with through Social Media - AACE

Jon Dron's bookmarks - March 23, 2015 - 11:59

Well this is nice - I (only just) made the top 20! Nice to be counted among the notably more luminous folk even if my feeble contributions are orders of magnitude smaller. Of course, this is just about people who write about educational technology using a particular subset of social media, who have some connection with AACE, whether as committee members or keynotes, and it is an informally garnered list so it is, at best, only partially representative of the broader field.  Even so, though they would knock me and a few others off the list, I think the following would mostly qualify too (and I avidly read most of what they write):

Donald Clark

Erik Duval

David Wiley

Dave Cormier

Audrey Watters

Johannes Cronje

There are many others who have not yet done the AACE conference circuit or who have slipped off it, but who are well worth following on social media. I did start writing a list of some but realized early on that, even if I listed 100 or more, I would still miss people that really matter. You know who you are. The interesting thing though, I think, is that if you followed and interacted with just a few of these people on social media (not just what they write but what they curate and share) you would likely learn a great deal more about learning technologies, e-learning and education as a whole than you would by wading through a dozen courses or hand-curated textbooks. Crowds teach.

Still - thanks for the boost, AACE folk!

Address of the bookmark: http://blog.aace.org/2015/03/22/top-20-in-educational-technology-to-connect-with-through-social-media/

Beyond the group: how education is changing and why institutions need to catch up

Jon Dron's blog - March 21, 2015 - 21:51

Understanding the ways people interact in an online context matters if we are interested in deliberate learning, because learning is almost always with and/or from other people: people inform us, inspire us, challenge us, motivate us, organize us, help us, engage with us. In the process, we learn. Intentional learning - whether informal, non-formal or formal - is now, more than ever, an activity that occurs outside a formal physical classroom. We are no longer limited to what our schools, universities, teachers and libraries in our immediate area provide for us, nor do we need to travel and pay the costs of getting to the experts in teaching and subject matter that we need. We are not limited to classes and courses any more. We don't even need books. Anyone and everyone can be our teachers. This matters.

Traditional university education

Traditional university education is all about groups, from classes to courses to committees to cohorts (Dron & Anderson, 2014). I use the word 'group' in a distinctive and specific way here, following a pattern set by Wellman, Downes and others before and since. Groups have names, owners, members, roles and hierarchies. Groups have purposes and deliberate boundaries. Groups have rules and structures. Groups embody a large set of highly evolved mechanisms that have developed over millennia to deal with the problems of coordinating large numbers of people in physical spaces and, in the context in which they have evolved, they are a pretty effective solution.

But there are two big problems with using groups in their current form in online learning. The first is that the online context changes group dynamics. In the past, professors were able to effectively trap students in a room for an hour or more, and to closely control their activities throughout that time. That is the context in which our most common pedagogies evolved. Even in the closest simulations of a face-to-face context (immersive worlds or webmeetings) this is no longer possible.

The second problem is more significant and follows from the first: group technologies, from committees to classrooms, were developed in response to the constraints and affordances of physical contexts that do not exist in an online and connected world. For example, it has been a long time since the ability to be in hearing range of a speaker has mattered if we wish to understand what he or she says. Teachers needed to control such groups because, apart from anything else, in a physical context, it would have been impossible to otherwise be heard without disruption. It was necessary to avoid such disruption and to coordinate behaviour because there was no other easy way to gain the efficiencies of one person teaching many (books notwithstanding). We also had to be disciplined enough to be in the same place at the same time - this involved a lot of technologies like timetables, courses, and classroom furniture. We needed to pay close attention because there was no persistence of content. The whole thing was shaped by the need to solve problems of access to rival resources in a physical space. 

We do not all have to be together in one place at one time any more. It is no longer necessary for the teacher to have to control a group because that group does not (always or in the same way) need to be controlled.

Classrooms used to be the only way to make efficient use of a single teacher with a lot of learners to cater for, but compromises had to be made: a need for discipline, a need to teach to the norm, a need to schedule and coordinate activities (not necessarily when learners needed or wanted to learn), a need to demand silence while the teacher spoke, a need to manage interactions, a perceived need to guide unwilling learners, brought on by the need to teach things guaranteed to be boring or confusing to a large segment of a class at any given time. We therefore had to invent ways to keep people engaged, either by force or by intentional processes designed to artificially enthuse. This is more than a little odd when you think about it. Given that there is hardly anything more basically and intrinsically motivating than to learn something you actually want to learn when you want to learn it, the fact that we had to figure out ways to motivate people to learn suggests something went very wrong with the process. It did not go wonderfully. A whole load of teaching had worse than no effect and little resulted in persistent and useful learning - at least, little of what was intentionally taught. It was a compromise that had to be made, though. The educational system was a technology designed to make best use of limited resources and the limitations imposed by physics, without which the spread of knowledge and skills would have been (and used to be and, in pockets where education is unavailable, still is) very limited.

Online learning

For those of us who are online (you and me) we don't need to make all of those compromises any more. There are millions of other ways to learn online with great efficiency and relevance that do not involve groups at all, from YouTube to Facebook to Reddit to StackExchange, to this post. These are under the control of the learners, each at the centre of his or her own network and in control of the flow, each able to choose which sets of people to engage with, and to what attention should be paid.

Networks have no boundaries, names, roles or rules - they are just people we know.

Sets have no ties, no rituals of joining, no allegiances or social connections - they are just collections of people temporarily occupying a virtual or physical space who share similar interests without even a social network to bind them.

Sets and networks are everywhere; they are the fundamental social forms from which anyone with online access learns, and they are all driven by people or crowds of people, not by designed processes and formal patterns of interaction.

Many years ago, John Chambers, then CEO of Cisco, was ridiculed for suggesting that e-learning would make email look like a rounding error. He was absolutely right, though, if not in quite the way he meant it: how many people reading this do not turn first to Google, Wikipedia or some other online, crowd-driven tool when needing or wanting to learn something? Who does not learn significant amounts from their friends, colleagues or people they follow through social networks or email? We are swimming in a sea of billions of teachers: those who inform, those with whom we disagree, those who act as role models, those who act as anti-models, those who inspire, those who affirm, those who support, those we doubt, those we trust. If there was ever a battle for supremacy between face-to-face and e-learning (an entirely artificial boundary) then e-learning has won hands down, many times over. Not so's you'd know it if you look at our universities. Very oddly, even an online university like Athabasca is largely trapped in the same constrained and contingent pattern of teaching, with its origins in the limitations of physical space, as its physical counterparts. It is largely as though the fact of the Internet has had no significant impact beyond making things slightly more convenient. Odd.

Replicating the wrong things

Those of us who teach entirely online are still, on the whole, making use of the single social form of the group, with all of its inherent restrictions, hierarchies and limitations inherited from its physical ancestors. Athabasca is at least a little revolutionary in providing self-paced courses at undergraduate level (albeit rarely with much social engagement at all - its inspiration is as much the book as the classroom), but it still typically keeps the rest of the trappings, and it uses groups like all the rest in most of its graduate-level courses. Rather than maintaining discipline in classrooms through conventional means, we instead make extensive use of assessments which have become, in the absence of the traditional disciplinary hierarchies that give us power in physical spaces, our primary form of control as well as the perceived primary purpose of at least higher education (the one follows from the other). It has become a transaction: if you do what I say and learn how I tell you to learn then, if you succeed, I will give you a credential that you can use as currency towards getting a job. If not, no deal. Learning, and the entire process of education, has become secondary to the credential, and focused upon it. We do this to replicate a need that was only there in the first place thanks to physics, not because it made sense for learning.

As alternative forms of accreditation become more commonplace and more reliable, it is hard to see us sustaining this for much longer. Badges, social recommendations, commercial credits, online portfolios, direct learning record storage, and much much more are gaining credence and value.

It is hard to see what useful role a university might play when it is not the best way to learn what you want to learn and it is not the best way to gain accreditation for your skills and knowledge.

Will universities become irrelevant? Maybe not. A university education has always been about a lot more than what is taught. It is about learning ways of thinking, habits of mind, ways of building knowledge with and learning from others. It is about being with others that are learning, talking with them, socializing with them, bumping serendipitously into new ideas and ways of being. All of this is possible when you throw a bunch of smart people together in a shared space, and universities are a good gravitational force of attraction for that. It is, and has always been, about networks and sets as much as if not more than groups. The people we meet and get to know are not just networks of friends but of knowledge. The sets of people around us, explicit and implicit, provide both knowledge and direction. And such sets and nets have to form somewhere - they are not mere abstractions. Universities are good catalysts. But that is only true as long as we actually do play this role. Universities like Athabasca focus on isolated individuals or groups in boundaried courses. Only in odd spaces like here, on the Landing, or in external social sites like Twitter, Facebook or RateMyProfessor, is there a semblance of those other roles a university plays, a chance to extend beyond the closed group and credential-focused course process.

Moving on

We can still work within the old constraints, if we think it worthwhile - I am not suggesting we should suddenly drop all the highly evolved methods that worked in the past. Like a horse and cart or a mechanical watch, education still does the job it always did, in ways that more evolved methods will never quite replicate, any more than folios entirely displaced scrolls or cars entirely displaced horses. There will be both gains and losses as things shift. Like all technologies (Kelly, 2010), the old ways of teaching will never go away completely and will still have value for some. Indeed, they might retain quite a large niche for many years to come.

But now we can do a whole lot more, as well and instead, and the new ways work better, on the whole. In a competitive ecosystem, alternatives that work better will normally come to dominate. All the pieces are in place for this to happen: it is just taking us a little while to collectively realize that we don't need the trainer-wheels any more. Last-gasp attempts to revamp the model, like first-generation xMOOCs, merely serve to illustrate the flaws in the existing model, highlighting in sharp relief the absurdities of adopting group-based forms at Internet scale. Imposing structural forms designed to keep learners on track in physical classrooms makes no sense when applied to a voluntary, uncredentialled, interest-driven course. I think we can do better than that.

The key steps are to disaggregate learning and assessment, and to do away with uniform courses with fixed schedules and pre-determined processes and outcomes. Outsiders, from MOOC providers (they are adapting fast) to publishers, are beginning to realize this, as are a few universities like WGU.

It is time to surf the adjacent possible (Kauffman, 2000), to discover ways of learning with others that take advantage of the new horizons, that are not trapped like horseless carriages replicating the limitations of a bygone era. Furthermore, we need to learn to build new virtual environments and learning ecosystems in ways that do not just mimic patterns of the past, but that help people to learn in more flexible, richer ways that take advantage of the freedoms they enable - not personalized (with all the power assertion that implies) but both personal and social. If we build tools like learning management systems or first-generation xMOOC environments such as edX, which are trapped into replicating traditional classroom-bound forms, we not only fail to take advantage of the wealth of the network, but we actually reinforce and ossify the very things we are reacting against, rather than opening up new vistas of pedagogical opportunity. If we sustain power structures by linking learning and formal assessment, we hobble our capacity to teach. If we enclose learning in groups that are defined as much by who they exclude as by who they encompass (Shirky, 2003) then we actively prevent the spread of knowledge. If we design outcome-based courses on fixed schedules, we limit the potential for individual control, and artificially constrain what need not be constrained.

Not revolution but recognition of what we already do

Any and all of this can change. There have long been methods for dealing with the issues of uniformity in course design and structure and/or tight integration of summative assessment to fixed norms, even within educational institutions. European-style PhDs (the ones without courses), portfolio-based accreditation (PLAR, APEL, etc.), challenge exams, competency-based 'courses', open courses with negotiable outcomes, assessments and processes (we have several at AU), whole degrees by negotiated learning outcomes, all provide different and accepted ways to do this and have been around for at least decades, if not hundreds of years. Till recently these have mostly been hard to scale and expensive to maintain. Not any more. With the growth of technologies like OpenBadges, Caliper and xAPI, there are many ways to record and accredit learning that do not rely on fixed courses, pre-designed outcomes-based learning designs and restrictive groups. Toolsets like the Landing, Mahara or LPSS provide learner-controlled ways to aggregate and assemble both the process and evidence of learning, and to facilitate the social construction of knowledge - to allow the crowd to teach - without demanding the roles and embodied power structures of traditional learning environments. By either separating learning and accreditation or by aligning accreditation with individual learning and competences, it would be fairly easy to make this change and, whether we like it or not, it will happen: if universities don't do it, someone else will.
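
To make the xAPI part of that a little more concrete, here is a minimal sketch of what such a learning record might look like; the learner, the Learning Record Store URL and the credentials below are placeholders rather than any real person or service:

import requests  # any HTTP client would do

# A minimal xAPI statement: "Jane completed a photo essay on group dynamics".
# Names, IDs and the LRS endpoint are invented for illustration.
statement = {
    "actor": {"name": "Jane Learner", "mbox": "mailto:jane@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/evidence/photo-essay-on-group-dynamics",
        "definition": {"name": {"en-US": "Photo essay on group dynamics"}},
    },
}

# Statements are POSTed to the /statements resource of a Learning Record Store.
requests.post(
    "https://lrs.example.com/xAPI/statements",
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_key", "lrs_secret"),
)

The record describes evidence of learning that the learner (or anyone she permits) can store and share anywhere, quite independently of any course, timetable or grade.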

All of traditional education is bound by historical constraint and path dependencies. It has led to a vast range of technologies to cope, such as terms and semesters, libraries, classrooms, courses, lessons, exams, grading, timetables, curricula, learning objectives, campuses, academic forms and norms in writing, disciplinary divisions and subdivisions, textbooks, rules and disciplinary procedures, avoidance of plagiarism, homework, degrees, award ceremonies and a massive range of other big and small inventions and technologies that have nothing whatsoever to do with learning.

Nothing at all.

All are contingent. They are simply a reaction to barriers and limitations that made good sense while those barriers existed. Every one of them is up for question. We need to imagine a world in which any or all of these constraints can be torn down. That is why we need to think about different social forms, that is why we continue to build the Landing, that is why we continue to explore the ways that learning is evolving outside the ivory tower, that is why we are trying to increase learner control in our courses (even if we cannot yet rid ourselves of all their constraints), that is why we are exploring alternative and open forms of accreditation. It is not just about doing what we have always done in slightly better, more efficient ways. Ultimately, it is about expanding the horizons of education itself. Education is not about courses, awards, classes and power hierarchies. Education is about learning. More accurately, it is about technologies of learning - methods, tools, processes, procedures and techniques. These are all inventions, and inventions can be superseded and improved. Outside formal institutions, this has already begun to happen. It is time we in universities caught up.

References

Dron, J., & Anderson, T. (2014). Teaching crowds: social media and distance learning. Athabasca: AU Press. 

Kauffman, S. (2000). Investigations (Kindle ed.). New York: Oxford University Press. 

Kelly, K. (2010). What Technology Wants (Kindle ed.). New York: Viking. 

Shirky, C. (2003). A Group Is Its Own Worst Enemy. Retrieved from http://www.shirky.com/writings/group_enemy.html

Time to change education again: let's not make the same mistakes this time round

Jon Dron's blog - March 21, 2015 - 20:56

We might as well start with exams

In case anyone missed it, one of countless examples of mass cheating in exams is being reported quite widely, such as at http://www.ctvnews.ca/world/hundreds-expelled-in-india-for-cheating-on-pressure-packed-exams-1.2289032.

The videos are stunning (Chrome and Firefox users - look for the little shield or similar icon somewhere in or near your browser's address field to unblock the video. IE users will probably have a bar appearing in the browser asking if you want to trust the site - you do. Opera, Konqueror and Safari users should be able to see the video right away), e.g.:

https://www.youtube.com/watch?v=L7iMgRPJYnQ&spfreload=10

As my regular readers will know, my opinions of traditional sit-down, invigilated, written exams could not be much lower. Sitting in a high-stress environment, unable to communicate with anyone else, unable to refer to books or the Internet, with enormous pressure to perform in a fixed period to do someone else's bidding, in an atmosphere of intense powerlessness, typically using a technology you rarely encounter anywhere else (pencil and paper), knowing your whole future depends on what you do in the next 3 hours, is a relatively unusual situation to find yourself in outside an exam hall. It is fair enough for some skills - journalism, for example, very occasionally leaves you in similar conditions. But, if it actually is an authentic skill needed for a particular field, then it should be explicitly taught and, if we are serious about it, it should probably be examined under truly authentic conditions (e.g. for a journalist, in a hotel room, cafe, press room, or trench). This is seldom done. It is not surprising, therefore, that exams are an extremely poor indicator of competence and an even worse indicator of teaching effectiveness. By and large, they assess things that we do not teach.

If that were all, I might not be so upset with the idea - it would just be weird and ineffective. However, exams are not just inefficient in a system designed to teach, they are positively antagonistic to learning. This is an incredibly wasteful tragedy of the highest order. Among the most notable of the many ways that they oppose teaching are that:

  • they shift the locus of control from the learner to the examiner
  • they shift the focus of attention from the activity to the accreditation
  • they typically punish cooperation and collaboration
  • they typically focus on content rather than performance
  • they typically reward conformity and punish creativity
  • they make punishments or rewards the reasons for performing, rather than the love of the subject
  • they are unfair - they reward exam skills more than subject skills.

In short, the vast majority of unseen written exams are deeply demotivating (naysayers, see footnote), distract attention away from learning, and fail to discriminate effectively or fairly. They make the whole process of learning inefficient, not just in the wasted time and energy surrounding the examination itself, but in (at the very least) doubling the teaching effort needed just to overcome their ill effects. Moreover, especially in the sciences and technologies, they have a strong tendency to reinforce and encourage ridiculous content-oriented ways of teaching that map some abstract notion of what a subject is concerned with onto exercises that relate to that abstract model, rather than to applied practices, problem solving and creative synthesis - i.e. the things that really matter. The shortest path for an exam-oriented course is usually bad teaching, and it takes real creativity and a strong act of will to do otherwise. Professional bodies are at least partly culpable for such atrocities.

There is one and only one justification for 99% of unseen written exams that makes any sense at all, which is that it allows us relatively easily, and with some degree of assurance (if very expensively, especially given the harmful effects on learning), to determine that the learner receiving accreditation is the one who has done the learning. It's not the only way, but it is one of them. That sounds reasonable enough. However, as examples like this show in very sharp relief, exams are not particularly good at that either. If you create a technology that has the single purpose of preventing cheating, then cheats (bearing in mind that the only thing we have deliberately and single-mindedly taught them from start to finish is that the single purpose of everything they do is to pass an exam) will simply find better ways to cheat - and they do so, in spades. There is a whole industry dedicated to helping people to cheat in exams, and it evolves at least as fast as the technologies that we use to prevent it. At least twenty percent of students in North America admit to having cheated in exams at some point in the last year. Some studies show much higher rates overall - 58% of high school students in Canada, for example. It is hard to think of a more damning indictment of a broken system than this. The problem is likely even worse in other regions of the world. For instance, Davis et al. (2009) reckon a whopping 83% of Chinese and 70% of Russian schoolkids cheat on exams. Let me repeat that: only 17% of Chinese schoolkids claim never to have cheated in an exam. See a previous post of mine for some intriguing examples of how that happens. When something that most people believe to be wrong is so deeply endemic, it is time to rethink the whole thing. No amount of patching over and tweaking at the edges is going to fix this.

But it's not just exams

This is part of a much broader problem, and it is a really simple and obvious one: if you teach people that accreditation rather than learning is the purpose of education, especially if such accreditation makes a massive difference to the kind and quality of life they might have as a result of having or not having it, then it is perfectly reasonable that they should find better ways of achieving accreditation, rather than better ways of learning. Even most of our 'best' students, the ones who put in some of the hardest work, tend to be focused on the grades first and foremost, because that is our implicit and/or explicit subtext. To my shame, I'm as guilty as anyone of having used grades to coerce: I have been known to annoy my students with a little song that includes the lines 'If a good mark is what you seek, blog, blog, blog, every week'. Even if we assume that students will not cheat (and, on the whole, mature students like those who predominate at Athabasca U do not cheat, putting the lie to the nonsense some have tried to promote about distance education leading to more cheating), teachers are still left with the challenge of constructively aligning assessment and learning, so that assessment actually contributes to rather than detracts from learning. With skill and ingenuity, it can be done, but it is hard work and an uphill struggle. We really shouldn't have to be doing that in the first place because learning is something that all humans do naturally and extremely willingly when not pressured to do so. We don't need to be forced to do what we love to do. We love the challenge, the social value, the control it brings. In fact, forcing us to do things that we love always takes away some or all of the love we feel for them. That's really sad. Educational systems make the rods that beat themselves.

Moving forwards a little

We can start with the simple things. I think that there are ways to make exams much less harmful. My friend and colleague Richard Huntrods, for example, simply asks students to reflect on what they have done on his (open, flexible and learner-centred) course. The students know exactly what they will be asked to do in advance, so there is no fear of the unknown, and there is no need for frantic revising because, if they have done the work, they can be quite assured of knowing everything they need to know already. It is a bit odd not to be able to talk with others or refer to notes or the Web, but that's about all that is inauthentic. This is a low-stress approach that demands nothing more than coming to an exam centre and writing about what they have done, which is an activity that actually contributes substantially to effective learning rather than detracting from it. It is constructively aligned in a quite exemplary way and would be part of any effective learning process anyway, albeit not at an exam centre. It is still expensive, and it still creates a bit more stress for students who have learned to fear exams, but it makes sense if we feel we don't know our students well enough, or do not trust them enough, to credit them for the work they have done. Of course, it demands a problem- or enquiry-based, student-centred pedagogy in the first place. This would not be effective for a textbook wraparound or other content-centric course. But then, we should not be writing those anyway, as little is more certain to discourage a love of learning, a love of the subject, or a satisfying learning experience.

There are plenty of exam-like things that can make sense, in the right kind of context, when approached with care: laboratory exercises, driving tests, and other experiences that closely resemble those of the practice being examined, for example, are quite sensible approaches to accreditation that are aligned with, and can even be supportive of, the learning process. There are also ways of doing exams that can markedly reduce the problems associated with them, such as allowing conversation and the use of the Internet, open-book papers that allow students to come and go as needed, questions that challenge students to creatively solve problems, exams that use questions created by the students themselves, oral exams that allow examiners to have a useful learning dialogue with examinees, and so on. There are different shades of grey, and not all are as awful as the worst, by any means. There are other ways that tend to work better - for instance, badges, portfolios, and many other approaches that allow us to demonstrate competence rather than compliance, that rely on us coming to know our students, and that allow multiple approaches and different skills to be celebrated.

And, of course, if we avoid exams altogether then we can do much more useful things, like involving students in creating the assignments; giving feedback instead of grades for work done; making the work relevant to student needs; allowing multiple paths and different forms of evidence; giving badges for achievement, not to goad it; and so on. There's a book or two in what we can do to limit the problems. Ultimately, though, this can only take us so far because, looming at the end of every learning path at an institution, is the accreditation. And therein lies the rub.

Moving forwards a lot

The central problem that we have to solve is not so much the exam itself as the unbreakable linkage of teaching and accreditation. Exams are just a symptom of a flawed system taken to its obvious and most absurd conclusion. But all forms of accreditation that become the purpose of learning are carts driving horses. I recognize and celebrate the value of authentic and meaningful accreditation, but there is no reason whatsoever that learning and accreditation should be two parts of the same system, let alone of the same process. If it were entirely clear that the purpose of taking a course (or any other learning activity - courses are another demon we need to think carefully about) were to learn, rather than to succeed in a test, then education would work a great deal better. We would actually be able to do things that support learning, rather than things that support credit scores; to give feedback that leads to improvement, rather than as a form of punishment or reward; to allow students to expand and explore pathways that diverge rather than converge; to get away from our needs and to concentrate on those of our students; to support people's growth rather than to stunt it by setting false goals; to valorize creativity and ingenuity; to allow people to gain the skills they actually need rather than those we choose to teach; to empower them, rather than to become petty despots ourselves. And, in an entirely separate process of assessment that teachers may have little or nothing to do with at all, we could enable multiple ways to demonstrate learning that are entirely dissociated from the teaching process. Students might use evidence from learning activities we help them with as something to prove their competence, but our teaching would not be focused on that proof. It's a crucial distinction that makes all the difference in the world. This is not a revolutionary idea about credentialling - it's exactly what many of the more successful and enlightened companies already do when hiring or promoting people: they look at the whole picture presented, take evidence from multiple sources, look at the things that matter in the context of application, and treat each individual as a human being with unique strengths, skills and weaknesses, given the evidence available. Credentials from institutions may be part of that right now, but there is no reason for that idea to persist, and there are plenty of alternative ways of showing skills and knowledge that are becoming increasingly popular and significant, from social network recommendations to open badges to portfolios. In fact, we even have pockets of such processes well entrenched within universities. Traditional British PhDs, for example, while they are examined through the thesis and an oral exam (a challenging but flexible process), are assessed on evidence that is completely unique to the individual student. Students may target the final assessment a bit, but the teaching itself is not much focused on that. Instead, it is on helping them to do what they want to do. And, of course, there are no grades involved at all - only feedback.

Conclusion

It's going to be a long slow struggle to change the whole of the educational system across most of the world, especially as there's a good portion of the world that would be delighted to have these kinds of problems in the first place. We need education before we can have cheating. But we do need to change this, and exams are a good place to start. It changed once before, with far less research to support the change, and far weaker technologies and communication to enable it. And it changed recently. In the grand scheme of things, the first ever university exam of the kind we now recognize as almost universal appeared the blink of an eye ago. The first ever written exam of the kind we use now (not counting a separate branch for the Chinese Civil Service that began a millennium before) was at the end of the 18th Century (the Cambridge Tripos) and it was only near the end of the 19th Century that written exams began to gain a serious foothold. This was within the lifetime of my grandparents. This is not a tradition steeped in history - it's an invention that appeared long after the steam engine and only became significant as the internal combustion engine was born. I just hope institutions like ours are not heading back down the tunnel or standing still, because those heading into the light are going to succeed while those that stay in the shadows will at best become the laughing stock of the world.

On the subject of which, do watch the video. It is kind-of funny in a way, but the humour is very dark and deeply tragic. The absurdity makes me want to laugh but the reality of how this crazy system is wrecking people's lives makes me want to cry. On balance, I am much more saddened and angered by it than amused. These are not bad people: this is a bad system. 

Reference

Davis, S., Drinan, P., & Gallant, T. (2009). Cheating in School: What We Know and What We Can Do. West Sussex, UK: Wiley-Blackwell.

Footnote

I know some people will want to respond that the threat or reward of assessment is somehow motivating. If you are one of those, this postscript is for you. 

I understand what you are saying. That is what many of us were taught to believe and it is one way we justify persisting despite the evidence that it doesn't work very well. I agree that it is motivating, after a fashion, very much like paying someone to do something you want them to do, or hitting them if they don't. Very much indeed. You can create an association between a reward/punishment and some other activity that you want your subject to perform and, as long as that association persists, you might actually make them do it. Personally speaking, I find that quite offensive, not to mention only mildly effective at achieving its own limited ends, but each to their own. But notice how you have replaced the interest in the activity with an interest in the reward and/or the desire to avoid punishment. Countless research studies from several fields have pretty conclusively shown that both reward and punishment are strongly antagonistic to intrinsic motivation and, in many cases, actually destroy it altogether. So, you can make someone do something by destroying their love of doing it - good job. But that doesn't make a lot of sense to me, especially as what they have learned is presumably meant to be of ongoing value and interest, to help them in their lives. It is my belief that, if you want to teach effectively, you should never make people learn anything - you should support them in doing so if that is what they want to do. It is good to encourage and enthuse them so that they want to do it and can see the value - that's a useful teacher role - but it's a different ballgame altogether to coerce them. Alas, it is very hard to avoid coercion entirely until we change education, and that's one good reason (I hope you agree) we need to do that.

For further information, you could do worse than to read pretty much anything by Alfie Kohn. If you are seeking a broader range of in-depth academic work, try the Self Determination Theory site.