
Technology Enhanced Knowledge Research Institute (TEKRI)

TEKRI blogs

Guest Post on Learning Analytics

Terry Anderson's blog - July 22, 2015 - 06:57
My friends at the Open University of Catalonia asked me to do a guest blog on learning analytics for their Open Thought 2015 series. As always, a deadline got a little creativity and some energy going and, though I am no expert on analytics, I gathered a few thoughts for the post at http://openthoughts-analytics.blogs.uoc.edu/big-data-learning-analytics-and-distance-education/

Employability and quality of life

elearnspace (George Siemens) - July 11, 2015 - 15:40

The employability narrative for higher education is overpowering. While I certainly agree that work is important, I think the framework of “getting a job” is too limiting for the role that higher education (can and should) play in society. I had the privilege recently to deliver a talk to a group of folks at HERDSA in Australia on this topic. My argument: employability is important, but quality of life is more critical as a long-term focus. Slides are below.

Exploiting emerging technologies to enable quality of life from gsiemens

Boundaries and Hierarchies in Complex Systems

Jon Dron's bookmarks - July 9, 2015 - 10:25

This rather elderly paper by Paul Cilliers peters out in an unsatisfyingly vague and obvious conclusion, but it does have some quite useful clarifications and observations about the nature of boundaries as they relate to hierarchies, networks and complex systems in general. I particularly like:

"We often fall into the trap of thinking of a boundary as something that separates one thing from another. We should rather think of a boundary as something that constitutes that which is bounded. "

This simple observation leads to further thoughts on how we choose those boundaries and the (necessary) ways we create models that make use of them. The thing is, we are the creators of those boundaries, at least in any complex system - Cilliers mentions neural networks as a good example - so what we choose to model is always determined by us and, like any model, it is and must be a partial representation, not an analogue, of the impossible complexities of the world it models. In a very real sense, we shape our understanding of the world through the boundaries that we choose to (or are hard-wired to) consider significant and there are always other places to draw those boundaries that change the meaning of what we are observing. It makes the analysis of complex systems quite hard, because we can seldom see beyond the boundaries we create that simplify the complexity in them and we have a tendency to over-simplify: as he points out, even apparently clear hierarchies shift and interpenetrate one another. This is more than, though related to, categories and metaphors of the sort examined by the likes of Hofstadter or Lakoff.

Since this paper was written, John Holland has done some mind-bending and deeply thought-provoking work on signals and boundaries in complex systems that delves far deeper and begins to address the problem head-on, but which I have been struggling to understand properly for many months: I'm pretty certain that Holland is onto something of staggering importance, if I could only grasp precisely what that might be! He is not the clearest of writers and he tends to leave a lot unsaid and assumed, leaving the reader to fill in the gaps. It's also complicated stuff - suffice to say, stochastic urns play a significant role. This paper by Cilliers is a good stab at the issue from a high-altitude philosophical perspective that makes a few of the wicked and profound issues quite clear.

Address of the bookmark: http://blogs.cim.warwick.ac.uk/complexity/wp-content/uploads/sites/11/2014/02/Cilliers-2001-Boundaries-Hierarchies-and-Networks.pdf

The LMS as a paywall

Jon Dron's blog - July 3, 2015 - 09:52

I was writing about openness in education in a chapter I am struggling with today, and had just read Tony Bates's comments on iQualify, an awful cloud rental service offering a monolithic locked-in throwback that just makes me exclaim, in horror, 'Oh good grief! Seriously?' And it got me thinking.

Learning management systems, as implemented in academia, are basically paywalls. You don't get in unless you pay your fees. So why not pick up on what publishers infamously already do and allow people to pay per use? In a self-paced model like that used at Athabasca it makes perfect sense: most of the infrastructure (role-based, time-based access, etc.) and of course the content already exists. Not every student needs 6 months of access or the trimmings of a whole course but, especially for those taking a challenge route (just the assessment), it would often be useful to have access to a course for a little while in order to get a sense of what the expectations might be, the scope of the content, and the norms and standards employed. On occasion, it might even be a good idea to interact with others. Perhaps we could sell daily, weekly or monthly passes. Or we could do it at a finer level of granularity too (or instead): a different pass for different topics, or for different components like forums, quizzes or assignment marking. Following the publishers' lead, such passes might together cost 10 or 20 times the cost of simply subscribing to a whole course if every option were purchased, but students could strategically pick the parts they actually need, so reducing their own overall costs.
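To make the arithmetic concrete, here is a minimal sketch with entirely invented pass names and prices (the post proposes no actual figures), just to show how the summed price of every option could dwarf the full-course fee while a strategic subset still comes in cheaper:

```python
# Hypothetical numbers only: illustrates how a la carte passes could sum to
# 10-20x a full course fee while still letting a strategic student pay less.

full_course_fee = 800  # assumed fee for a whole six-month course

passes = {                      # invented pass names and prices
    "one_week_content": 90,
    "one_month_content": 250,
    "forums_per_month": 120,
    "quizzes": 150,
    "assignment_marking": 400,
    "challenge_assessment": 350,
}

# Buying every option, every month, for a six-month term
everything = sum(passes.values()) * 6
print(f"All options: ${everything} ({everything / full_course_fee:.1f}x the course fee)")

# A challenge-route student: a week's look at the materials plus the assessment
strategic = passes["one_week_content"] + passes["challenge_assessment"]
print(f"Strategic route: ${strategic} ({strategic / full_course_fee:.2f}x the course fee)")
```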

This idea is, of course, stupid. This is not because it doesn't make economic and practical sense: it totally does, notwithstanding the management, technical and administrative complexity it entails. It is stupid because it flips education on its head. It makes chunks of learning into profit centres rather than the stuff of life. It makes education into a product rather than celebrating its role as an agent of personal and societal growth. It reduces the rich, intricately interwoven fabric of the educational experience to a set of instrumentally-driven isolated events and activities. It draws attention to accreditation as the be-all and end-all of the process. It is aggressively antisocial, purpose-built to reduce the chances of forming a vibrant learning community. This is beginning to sound eerily familiar. Is that not exactly what, in too high a percentage of our courses, we are doing already?

If we and other universities are to survive and thrive, the solution is not to treat courses and accreditation as products or services. The ongoing value of a university is to catalyze the production and preservation of knowledge: that is what we are here for, that is what makes us worth having. Courses are just tools that support that process, though they are far from the only ones, while accreditation is not even that: it's just a byproduct, effluent from the educational process that happens to have some practical societal value (albeit at enormous cost to learning). In physical universities there are vast numbers of alternatives that support the richer purpose of creating and sustaining knowledge: cafes, quads, hallways, common rooms, societies, clubs, open lectures, libraries, smoking areas, student accommodation, sports centres, theatres, workshops, studios, research labs and so on. Everywhere you go you are confronted with learning opportunities and people to learn with and from, and the taught courses are just part of the mix, often only a small part. At least, that is true in a slightly idealized world - sadly, the vast majority of physical universities are as stupidly focused on the tools as we are, so those benefits are an afterthought rather than the main thing to celebrate, and are often the first things to suffer when cuts come along. Online, such beyond-the-course opportunities are few and far between: the Landing is (of course) built with exactly that concern in mind, but there's precious little sign of it anywhere else at AU, one of the most advanced online universities in the world. The nearest thing most students get to it is the odd Facebook group or Twitter interaction, which seems an awful waste to me, though a fascinating phenomenon that blurs the lines between the institution and the broader community.

It is already possible to take a high quality course for free in almost any subject that interests you and, more damagingly, there will soon be sources of accreditation that are as prestigious as those awarded by universities but orders of magnitude cheaper, not to mention compellingly cut-price options from universities that can leverage their size and economies of scale (and, perhaps, cheap labour) to out-price the rest of us. Competing on these grounds makes no sense for a publicly funded institution whose role is not to be an accreditation mill but to preserve, critique, observe, transform and support society as a whole. We need to celebrate and cultivate the iceberg, not just its visible tip. Our true value is not in our courses but in our people (staff and students) and the learning community that they create.

Personal Learning Graphs (PLeG)

elearnspace (George Siemens) - July 2, 2015 - 07:15

Personalized and adaptive learning has been described as the holy grail of education. The idea is not new, though its technological instantiation is getting increased attention. In a well-funded education system, personalized instruction happens when guided by a teacher, as each student’s strengths, weaknesses, and knowledge gaps are known. However, when classrooms grow beyond 20 or so students, some type of mediating agent is needed in order to address knowledge gaps, as it becomes impossible for a teacher to be aware of what is happening with each learner. So, while the human educator is the original (and best) personalized learning system, current funding constraints and other resource challenges have raised the need for alternative approaches to make sure that each learner is receiving support reflective of her needs.

Many of the personalized learning systems now available begin with an articulation of the knowledge space – i.e. what the learner needs to know. What the learner knows is somewhat peripheral and is only a focal point after the learner has started interacting with content. Additionally, the data that is built around learner profiles is owned by either the educational institution or the software company. This isn’t a good idea. Learners should own the representation of what they know.

Last year, I posted on personalized learner knowledge graphs. Since then, I’ve been working with several colleagues to refine and develop this idea. Embedded below is a summary of our recent thinking on what this would look like in practice. Personal Learning Graph (PLeG – pronounced ‘pledge’ (acronyms are hard)) is intended as a response to how work and life are changing due to technology and the importance of individuals owning their own learning representation.

Personal Learning Graph (PLeG)
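Purely as an illustration of the idea described above (PLeG is a concept, not a published specification), a minimal sketch of a learner-owned knowledge graph might look like this; all names and structures here are invented for the example:

```python
# Illustrative only: the core idea is that the learner, not the institution
# or a vendor, holds the graph of what they know and can take it anywhere.
import json

class PersonalLearningGraph:
    def __init__(self, owner):
        self.owner = owner
        self.concepts = {}  # concept name -> list of evidence strings
        self.edges = []     # (prerequisite, concept) pairs

    def add_concept(self, name, evidence=None):
        self.concepts.setdefault(name, [])
        if evidence:
            self.concepts[name].append(evidence)

    def add_prerequisite(self, prerequisite, concept):
        self.edges.append((prerequisite, concept))

    def export(self):
        # Learner-owned: serialize it and carry it between institutions
        return json.dumps({"owner": self.owner,
                           "concepts": self.concepts,
                           "edges": self.edges}, indent=2)

pleg = PersonalLearningGraph("learner@example.com")
pleg.add_concept("descriptive statistics", "course badge, 2014")
pleg.add_concept("regression", "workplace project write-up")
pleg.add_prerequisite("descriptive statistics", "regression")
print(pleg.export())
```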

Niggles about NGDLEs - lessons from ELF

Jon Dron's blog - June 28, 2015 - 12:05

Malcolm Brown has responded to Tony Bates and me in an Educause guest post in which he defends the concept of the NGDLE and expands a bit on the purposes behind it. This does help to clarify the intent although, as I mentioned in my earlier post, I am quite firmly in favour of the idea, so I am already converted on the main points. I don't mind the Lego metaphor if it works, but I do think we should concentrate more on the connections than the pieces. I also see that it is fairly agnostic to pedagogy, at least in principle. And I totally agree that we desperately need to build more flexible, assemblable systems along these lines if we are to enable effective teaching, management of the learning process and, much, much more importantly, if we are to support effective learning. Something like the proposed environment (more of an ecosystem, I'd say) is crucial if we want to move on.

But...

It has been done before, over ten years ago in the form of ELF, in much more depth and detail and with large government and standards bodies supporting it, and it is important to learn the lessons of what was ultimately a failed initiative. Well - maybe not failed, but certainly severely stalled. Parts persist and have become absorbed, but the real value of it was as a model for building tools for learning, and that model is still not as widespread as it should be. The fact that the Educause initiative describes itself as 'next generation' is perhaps the most damning evidence of its failure.

Why ELF 'failed'

I was not part of, nor close to, the ELF project but, as an outsider, I suspect that it suffered from four major and interconnected problems:

  1. It was very technically driven and framed in the language of ICTs, not that of educators or learners. Requirements from educators were gathered in many ways, with workshops, working groups and a highly distributed team of experts in the UK, Australia, the US, Canada, the Netherlands and New Zealand (it was a very large project). Some of the central players had a very deep understanding of the pedagogical and organizational needs of not just learners but the organizations that support them, and several were pioneers in personal learning environments (PLEs) that went way beyond the institution. But the focus was always on building the technical infrastructure - indeed, it had to be, in order to operationalize it. For those outside the field, who had not reflected deeply on the reasons this was necessary, it likely just seemed like a bunch of techies playing with computers. It was hard to get the message across.
  2. It was far too ambitious, perhaps bolstered by the large amounts of funding and support from several governments and large professional bodies. The e-learning framework was just one of several strands, like e-science, e-libraries and so on, that went to make up the e-framework. After a while, it simply became the e-framework and, though conceptually wonderful, in practical terms it was attempting far too much in one fell swoop. It became so broad, complex and fuzzy that it collapsed under its own weight. It was not helped by commercial interests that were keen to keep things as proprietary and closed as they could get away with. Big players were not really on board with the idea of letting thousands of small players enter their locked-in markets, which was one of the avowed intents behind it. So, when government funding fizzled out, there was no one to take up such a huge banner. A few small flags might have been far more successful.
  3. It was too centralized (oddly, given its aggressively decentralized intent and the care taken to attempt to avoid that). With the best of intentions, developers built over-engineered standards relying on web service architectures that the rest of the world was abandoning because they were too clunky, insufficiently agile and much too troublesome to implement. I am reminded, when reading many of the documents that were produced at the time, of the ISO OSI network standards of the late 80s: they took decades to reach maturity through ornate webs of committees and working groups, were beautifully and carefully engineered, and were thoroughly trounced by the lighter, looser, more evolved, more distributed TCP/IP standards that are now pretty much ubiquitous. For large complex systems, evolution beats carefully designed engineering every single time.
  4. The fact that it was created by educators whose framing was entirely within the existing system meant that most of the pieces that claimed to relate to e-learning (as opposed to generic services) had nothing to do with learning at all, but were representative of institutional roles and structures: marking, grading, tracking, course management, resource management, course validation, curriculum, reporting and so on. None of this has anything to do with learning and, as I have argued on many occasions elsewhere, may often be antagonistic to learning. While there were also components that were actually about learning, they tended to be framed in the context of existing educational systems (writing lessons, creating formal portfolios, sequencing of course content, etc). Though very much built to support things like PLEs as well as institutional environments, the focus was the institution far more than the learner.

As far as I can tell, any implementation of the proposed NGDLE is going to run into exactly the same problems. Though the components described are contemporary and the vocabulary has evolved a little, all of them can be found in the original ELF model, and the approach to achieving it seems pretty much the same. Moreover, though the proposed architecture is flexible enough to support pretty much anything - as was ELF - there is a tacit assumption that this is about education as we know it, updated to support the processes and methods that have been developed since (and often in response to) the heinous mistakes we made when we designed the LMSs that dominate education today. This is not surprising - if you ask a bunch of experts for ideas you will get their expertise, but you will not get much in the way of invention or new ideas. The methodology is therefore almost guaranteed to miss the next big thing. Those ideas may come up, but they will be smoothed out in an averaging process, and dissenting models will not become part of the creed. This is what I mean when I criticize it as a view from the inside.

Much better than the LMS

If implemented, a NGDLE will undoubtedly be better than any LMS, with which there are manifold problems. In the first place, LMSs are uniformly patterned on mediaeval educational systems, with all their ecclesiastic origins, power structures and rituals intact. This is crazy, and actually reinforces a lot of things we should not be doing in the first place, like courses, intimately bound assessment and accreditation, and laughably absurd attempts to exert teacher control, without the slightest consideration of the fact that pedagogies determined by the physics of spaces in which we lock doors and keep learners controlled for an hour or two at a time make no sense whatsoever in online learning. In the second place, centralized systems have to maintain an uneasy and seldom great balance between catering to every need and remaining usably simple. This inevitably leads to compromises, from the small (e.g. minor formatting annoyances in discussion forums) to the large (e.g. embedded roles or units of granularity that make everything a course). While customization options can soften this a little, centralized systems are structurally flawed by their very nature. I have discussed such things in some depth elsewhere, including in both my published books. Suffice to say, the LMS shapes us in its own image, and its own image is authoritarian, teacher-controlled and archaic. So, a system that componentizes things so that we can disaggregate any or all of it, provide local control (for teachers and other learners as well as institutions and administrators) and allow creative assemblies is devoutly to be wished for. Such a system architecture can support everything from the traditional authoritarian model to the loosest of personal learning environments, and much in between.

Conclusion

NGDLE is a misnomer. We have already seen that generation come and go. But, as a broad blueprint for where we should be going and what we should be doing now, both ELF and NGDLE provide patterns that we should be using and thinking about whenever we implement online learning tools and content and, for that, I welcome it. I am particularly appreciative that NGDLE provides reinvigorated support for approaches that I have been pushing for over a decade but that ICT departments and even faculty resist implacably. It's great to be able to point to the product of so many experts and say 'look, I am not a crank: this is a mainstream idea'. We need a sea-change in how we think of learning technologies and such initiatives are an important part of creating the culture and ethos that lets this happen. For that I totally applaud this initiative.

In practical terms, I don't think much of this will come from the top down, apart from in the development of lightweight, non-prescriptive standards and the norming of the concepts behind it. Of current standards, I think TinCan is hopeful, though I am a bit concerned that it is becoming over-ornate as it develops. LTI is a good idea, sufficiently mature, and light enough to be usable but, again, in its new iteration it is aiming higher than might be wise. Caliper is OK but also showing signs of excessive ambition. Open Badges are great, but I gather they are becoming less lightweight in the latest incarnation. We need more of such things, not more elaborate versions of them. Unfortunately, the nature of technology is that it always evolves towards increasing complexity. It would be much better if we stuck with small, working pieces and assembled those together rather than constantly embellishing good working tools. Unix provides a good model for that, with tools that have worked more or less identically for decades but that constantly gain new value in recombination.
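To show what the lightweight end of that spectrum looks like, here is a minimal sketch of a TinCan (xAPI) statement: just an actor-verb-object triple sent to a learning record store. The LRS endpoint and credentials below are placeholders; the verb URI is one of the standard ADL verbs.

```python
# Minimal xAPI statement: actor, verb, object. Endpoint and auth are
# placeholders for whatever LRS you happen to be using.
import requests

statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/activities/complexity-101"},
}

response = requests.post(
    "https://lrs.example.com/xapi/statements",      # hypothetical LRS
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},  # version header required by the spec
    auth=("key", "secret"),                         # placeholder credentials
)
print(response.status_code)
```

That small, recombinable core is precisely the Unix-like quality worth preserving as these standards grow.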

Footnote: what became of ELF?

It is quite hard to find information about ELF today. It seems (as an outsider) that the project just ground to a halt rather than being deliberately killed. There were lots of exemplar projects, lots of hooks and plenty of small systems built that applied the idea and the standards, many of which are still in use today, but it never achieved traction. If you want to find out more, here is a small reading list:

http://www.elframework.org/ - the main site (the link to the later e-framework site leads to a broken page)

http://www.elframework.org/projects.html - some of the relevant projects ELF incorporated.

https://web.archive.org/web/20061112235250/http://www.jisc.ac.uk/uploaded_documents/Altilab04-ELF.pdf - good, brief overview from 2004 of what it involved and how it fitted together

 https://web.archive.org/web/20110522062036/http://www.jisc.ac.uk/uploaded_documents/AltilabServiceOrientedFrameworks.pdf - spooky: this is about 'Next Generation E-Learning Environments' rather than digital ones. But, though framed in more technical language, the ideas are the same as NGDLE.

http://www.webarchive.org.uk/wayback/archive/20110621221935/http://www.elearning.ac.uk/features/nontechguide2 - a slightly less technical variant (links to part 1, which explains web services for non-technical people)

See also https://web.archive.org/web/20090330220421/http://www.elframework.org/general/requirements/scenarios/Scenario%20Apparatus%20UK%205%20(manchester%20lipsig).doc and https://web.archive.org/web/20090330220553/http://www.elframework.org/general/requirements/use_cases/EcSIGusecases.zip, a set of scenarios and use cases that are eerily similar to those proposed for NGDLE.

If anyone has any information about what became of ELF, or documents that describe its demise, or details of any ongoing work, I'd be delighted to learn more!

My Retirement Week

Terry Anderson's blog - June 16, 2015 - 20:05
I am just recovering this week from a busy, celebration-filled week that I want to share with my blog friends. The week started with a quick trip to Barcelona where, besides being able to watch FC Barcelona win the Champions League final, I was honoured to be made a Senior Fellow in the […]

The death of Athabasca University has been greatly exaggerated

elearnspace (George Siemens) - June 11, 2015 - 04:19

I keep hearing rumours about Athabasca University dying or at least being on its deathbed. I guess stories like this don’t help: AU taskforce releases sustainability report. This article was picked up by Tony Bates, who states: “So Athabasca University is now in the same position as the Greek government, except it doesn’t have the EU, the IMF, or the Germans to look to for help – just the Alberta government, which itself has been fiscally devastated by the collapse of oil prices.”

I’m conflicted by Tony’s response. He has forgotten more about digital learning than most of us will ever know. He has a global view of the sector and has been in the trenches as a leader. His diagnosis of how AU’s problems came about resonates with the discussions I have heard. Unfortunately, Tony also adds needless rhetoric to a situation that has qualitatively changed with the new government in Alberta. His comments don’t reflect the new Alberta context.

I don’t know all the behind-the-scenes discussions relating to this report. I don’t know the specific intent of the sustainability report. My impression from what I’ve read (see: Athabasca University’s Hostage Crisis and Athabasca University facing insolvency, Alberta government may have to step in) is that this is a political game (a request for more funding) that is being played in the public sphere.

It’s amply clear that governments are divesting from public education. The defining challenge of our time is inequality. Any sufficiently advanced and civilized society should ensure that a) healthcare is available to all and b) an education is available to all. Education is not the goal. The goal is a populace that is able to improve its position in life and to live life on the terms of success that it defines for itself. Education is the best way that we have of doing this today.

The education system we are building today is failing to enable opportunities and is instead hardening power structures and socio-economic positions. President Truman anticipated this, nearly 70 years ago:

If the ladder of educational opportunity rises high at the doors of some youth and scarcely rises at the doors of others, while at the same time formal education is made a prerequisite to occupational and social advance, then education may become the means, not of eliminating race and class distinctions, but of deepening and solidifying them.

Publicly funded online universities (such as AU, OU, OUNL) have to date been the most successful systems in enabling educational access to learners who do not fit the traditional learner profile. While a number of traditional universities have recently started using the rhetoric of targeting and supporting under-represented students, open universities have been doing it since the 1960s (and some systems prior to that).

I hope that the new Alberta provincial leaders, and their counterparts globally, will recognize and support the revolutionary and real-life impact that open universities have had on the quality of life of many learners. It’s discouraging to see that, at the exact point where many state and provincial leaders around the world are starting to recognize the need for improving education access, the systems that have been serving this mission for 50 years risk being cut off at the knees by limited vision and a lack of appropriate government support. Fortunately, early indications suggest that the government in Alberta is starting to listen: “I take this situation seriously,” Sigurdson [Advanced Education Minister] told Metro. “The Alberta government is ready to work with the university and help it become more sustainable.”

Providing audio feedback to students: Review of a review

Terry Anderson's blog - May 30, 2015 - 11:37
I’ve always been interested in studies that help us differentiate both pedagogies and educational technology use based upon time requirements. These studies of course should include all the actors – too often student time is taken as a free given. Thus, a recent publication by Gusman Edouard piqued my interest. Edouard, G. (2015). Effectiveness of […]

Why so many questions?

Jon Dron's blog - May 28, 2015 - 19:59

At Athabasca University, our proposed multi-million dollar investment in a student relationship management system, dubbed the 'Student Success Centre' (SSC), is causing quite a flood of discussion and debate among faculty and tutors at the moment. Though I do see some opportunities in this if (and only if) it is very intelligently and sensitively designed, there are massive and potentially fatal dangers in creating such a thing. See a previous post of mine for some of my worries. I have many thoughts on the matter, but one thing strikes me as interesting enough to share more widely and, though it has a lot to do with the SSC, it also has broader implications.

Part of the justification for the SSC is that an alleged 80% of current interactions with students are about administrative rather than academic issues. I say 'alleged' because such things are notoriously hard to measure with any accuracy. But let's assume that it actually is accurate.

How weird is that?

Why is it that our students (apparently) need to contact us for admin support in overwhelming numbers but actually hardly talk at all about the complicated subjects they are taking? Assuming that these 80% of interactions are not mostly to complain about things that have gone wrong (if so, an SSC is not the answer!) then it seems, on the face of it, more than a bit topsy-turvy. One reasonable explanation might be that our course materials are so utterly brilliant that they require little further interaction, but I am not convinced that this sufficiently explains the disparity. Students are mostly spending 100+ hours on academic work for each course whereas (I hope) at most a couple of hours are spent on administrivia. No matter how amazing our courses might be, the difference is remarkable (a quick back-of-envelope calculation below shows just how remarkable). It is doubly remarkable when you consider that a fair number of our courses do involve at least some required level of interaction which, alone, should easily account for most if not more than all of that remaining 20%. In my own courses it is a lot more than that, and I am aware of many others with very active Landing groups, Moodle forums, webinar sessions, and even the occasional visit to an immersive world.

It is also possible that our administrative processes are extremely opaque and ill-explained. This certainly accords with my own experience of trying to work out something as simple as how much a course would cost or the process needed to submit project work. But, if that is the case, and assuming our distance, human-free teaching works as well as we believe it does, then why can we not a) simplify the processes and b) provide equally high-quality learning materials for our admin processes, so that students don’t need to bother our admin staff so much? If our course materials are so great then that would seem, on the face of it, very much more cost-effective than spending millions on a system that is at least as likely to have a negative as a positive impact and that actually increases our ongoing costs considerably. It is also quite within the capabilities of our existing skillset.

Even so, it seems very odd to me that students can come to terms with inordinately complex subjects from philosophy to biochemistry, but that they are foiled by a simple bit of bureaucracy and need to seek human assistance. It may be hard, but it is not beyond the means of a motivated learner to discover, especially given that we are specialists in producing high quality learning materials that should make such things very clear. And in motivation, I think, lies the key.
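Using the rough figures above (an 80/20 interaction split, a couple of hours of admin, 100+ hours of coursework per course), a minimal sketch of that calculation:

```python
# Back-of-envelope only, using the post's own rough figures, to show how
# much more interaction-dense the admin experience is than the academic one.
admin_share, academic_share = 0.8, 0.2
admin_hours, academic_hours = 2, 100

admin_rate = admin_share / admin_hours           # interaction share per admin hour
academic_rate = academic_share / academic_hours  # interaction share per academic hour

print(f"Per hour spent, admin generates roughly "
      f"{admin_rate / academic_rate:.0f}x more interactions than coursework")
```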

Other people matter

Other people are wonderful things when you need to learn something, pretty much across the board. Above all they matter when there is no obvious reason that you should be interested in or care about something for its own merits, and bureaucratic procedures are seldom very interesting. I have known only one person in my whole life who actually likes filling in forms (I think it is a meditative pursuit - my father felt much the same way about dishwashing and log sawing) but this is not a thing that excites most people. I hypothesize that our students tend to need less academic than bureaucratic help at least partly because, by and large, for the coursework they are very self-motivated people learning things that interest them, whereas our bureaucracy is at best a means to an end, at worst a demotivating barrier. It would not help much to provide great teaching materials for bureaucratic procedures because 99% of students would have no intrinsic interest in learning about them, and it would have zero value to them in any future activity. Why would they bother? It is far easier to ask someone.

Our students actually like the challenge of facing and solving problems in their chosen subjects - in fact, that's one of the great joys of learning. They don't turn to tutors to discuss things because there are plenty of other ways of getting the help they need, both in course materials and elsewhere, and it is fun to overcome obstacles. The more successful ones tend to have supportive friends, families or colleagues, or are otherwise very single-minded. They tend to know why they are doing what they are doing. We don't get many students that are not like this, at least on our self-paced courses, because either they don't bother coming in the first place or they are among the scarily large percentage that drop out before starting (we don't count them in our stats though, in fairness, neither do face-to-face universities).

But, of course, that only applies to students that really do like the process of learning and most of what they are learning, that know how to do it and/or that have existing support networks. It does not apply to those that hit very difficult or boring spots, that give up before they start, that hit busy times that mean they cannot devote the energy to the work, that need a helping hand with the process but cannot find it elsewhere, or that don't bother even looking at a distance option at all because they do not like the isolation it (apparently) entails. For those students, other people can help a lot. Even for our own students, over half (when asked) claim that they would appreciate more human interaction. And those are the ones that have knowingly self-selected a largely isolated process and that have not already dropped out.

Perhaps more worryingly, it raises concerns about the quality of the learning experience. Doing things alone means that you miss out on all the benefits of a supportive learning community. You don't get to argue, to explain, to question, save in your own head or in formal, largely one-way, assignments. You don't get multiple perspectives, different ways of seeing, opportunities to challenge and be challenged. You don't get the motivation of writing for an audience of people that you care about. You don't get people that care about you and the learning community providing support when times are hard, nor the pleasure of helping when others are in difficulty. You don't get to compare yourself with others, the chance to reflect on how you differ and whether that is a good or bad thing. You don't get to model behaviours or see those behaviours being modelled. These are just some of the notable benefits of traditional university systems that are relatively hard to come by in Athabasca's traditional self-paced model (not in all courses, but in many). It's not at all about asking questions and getting solutions. It's about engaging in a knowledge creation process with other people. There are distinct benefits to being alone, notably the high degree of control it brings, but a bit of interaction goes a long, long way. It takes a very special kind of person to get by without it, and the vast majority of our successful students (at least in undergraduate self-paced courses) are exactly that special kind of person.

If it is true that only 20% of interactions are currently concerned with academic issues, that is a big reason for concern, because it means our students are missing out on an incredibly rich set of opportunities in which they can help one another as well as interact with tutors. Creating an SSC system that supports what is therefore, for those that are not happy alone (i.e. the ones we lose or never get in the first place), an impoverished experience seems simply to ossify a process that should at least be questioned. It is not a solution to the problem - it is an exacerbation of it, further entrenching a set of approaches and methods that are inadequate for most students (the ones we don't get or keep) in the first place.

A sustainable future?

As a university seeking sustainability we could simply continue to concentrate on addressing the needs of self-motivated, solitary students that will succeed almost no matter what we do to them, and just make the processing more cost-efficient with the SSC. If we have enough of those students, then we will thrive for some time to come, though I can’t say it fits well with our open mission and I worry greatly about those we fail to help. If we want to get more of those self-guided students then there are lots of other things we should probably do too, like dropping the whole notion of fixed-length courses (smaller chunks mean the chances of hitting the motivation sweet-spot are higher) and disaggregating assessment from learning (because extrinsic motivation kills intrinsic motivation). But, if we are sticking with the idea of traditional courses, the trouble is that we are no longer almost alone in offering such things and there is a finite market of self-motivated, truly independent learners who (if they have any sense) will find cheaper alternatives that offer the same or greater value. If all we are offering is the opportunity to learn independently and a bit of credible certification at the end of it, we will wind up competing on price with institutions and businesses that have deeper coffers, cheaper staff, and fewer constraints. In a cut-throat price war with better funded peers, we are doomed.

If we are to be successful in the future then we need to make more of the human side of our teaching, not less, and that means creating richer, more direct channels to other people in this learning community, not automating methods that were designed for the era of correspondence learning. This is something that, not coincidentally, the Landing is supposed to help with, though it is just an exemplar and at most a piece of the puzzle - we ideally want connection to be far more deeply embedded everywhere rather than in a separate site. It is also something that current pilot implementations of the SSC are antagonistic towards, thanks mainly to equating time and effort, focusing on solving specific problems rather than human connection, failing to support technological diversity, and standing as an obstacle between people that just need to talk. It doesn't have to be built that way. It could almost as easily vanish into the background, be seamlessly hooked into our social environments like email, Moodle and the Landing, and be an admin tool that gives support when needed but disappears when not. And there is no reason whatsoever that it needs to be used to pay tutors by the recorded minute, a bad idea that has been slung on the back of it and that has no place in our culture. Though not what the pilot systems do at all, a well-designed system like this could step in or be called upon when needed, could support analytics that would be genuinely helpful, and could improve management information, all without getting in the way of interaction. In fact, it could easily be used to enhance it, because it could make patterns of dialogue more visible and comprehensible.

In conclusion

At Athabasca we have some of the greatest distance educators and researchers on the planet, and that greatness rubs off on those around them. As a learning community, knowledge spreads among us and we are all elevated by it. We talk about such things in person, in meetings, via Skype, in webinars, on mailing lists, on the Landing, in pubs, in cafes, etc. And, as a result, ideas, methods and values get created, transformed and flow through our network. This makes us quite unique - as all learning communities are unique - and creates the distinctive culture and values of our university that no other university can replicate. Even when people leave, they leave traces of their ideas and values in those that remain, that get passed along for long after they have gone, become part of the rich cultural identity that defines us. It's not mainly about our structures, processes and procedures: except when they support greater interaction, those actually get in the way much of the time. It's about a culture and community of learning. It's about the knowledge that flows in and through this shifting but identifiable crowd. This is a large part of what gives us our identity. It's exactly the same kind of thing that means we can talk about (say) the Vancouver Canucks or Apple Inc. as a meaningful persistent entity, even though not one of the people in the organization is the same as when it began and virtually all of its processes, locations, strategies and goals beyond the most basic have changed, likely many times.

The thing is, if we hide those people behind machines and processes, separate them through opaque hierarchies, reduce the tools and opportunities for them to connect, we lose almost all of the value. The face of the organization becomes essentially the face of the designer of the machine or the process and the people are simply cogs implementing it. That's not a good way forward, especially as there are likely quite a few better machine and process designers out there. Our people - staff and students - are the gold we need to mine, and they are also the reason we are worth saving. We need to be a university that takes the distance out of distance learning, that connects, inspires, supports and nurtures both its staff and its students. Only then will we justly be able to claim to have a success centre.

 

Digital Learning Research Network Conference

elearnspace (George Siemens) - May 21, 2015 - 06:37

I’ve been working with several colleagues on arranging the upcoming Digital Learning Research Network (dLRN) conference at Stanford, October 16-17, 2015. The call for papers is now open. We are looking for short abstracts – 250 words – on topics of digital learning. The deadline is May 31. Our interest is to raise the nuance and calibre of the discussion about education in a digital era; one where hype and over-promising the power of technology have replaced structured interrogation of the meaning of the changes that we are experiencing. We have a great lineup of speakers confirmed and are expanding the list rapidly. The conference will include social scientists, activists, philosophers, researchers, and rabble rousers. It will be an intentionally eclectic mix of people, institutions, and ideas as we explore the nodes that are weaving the network of education’s future. Representation from the following research organizations has already been confirmed: Stanford, Smithsonian, University of Michigan, University of Edinburgh, Columbia University, CMU, state systems (Georgia, California, Texas, and Arkansas), and SRI.

Join us for what will be a small (max 150 people) and exciting exploration of a) what education is becoming, b) who we (as learners, activists, and academics) are, and c) where these two intersect in forming the type of learning system that will enable us to create the type of society that we want for future generations.

For a more thoughtful analysis of the conference and our call for submissions, see Bonnie Stewart, Kate Bowles, and Kristen Eshleman

From the call:

Learning introduces students to practices of sensemaking, wayfinding, and managing uncertainty. Higher education institutions confront the same experiences as they navigate changing contexts for the delivery of services. Digital technologies and networks have created a new sense of scale and opportunity within global higher education, while fostering new partnerships focused on digital innovation as a source of sustainability in volatile circumstances. At the same time, these opportunities have introduced risks in relation to the ethics of experimentation and exploitation, emphasizing disruption and novelty and failing to recognise universities’ long-standing investment in educational research and development.

Scientists: Earth Endangered by New Strain of Fact-Resistant Humans

Jon Dron's bookmarks - May 13, 2015 - 09:28

"The research, conducted by the University of Minnesota, identifies a virulent strain of humans who are virtually immune to any form of verifiable knowledge, leaving scientists at a loss as to how to combat them."

Marvellous.

Address of the bookmark: http://www.newyorker.com/humor/borowitz-report/scientists-earth-endangered-by-new-strain-of-fact-resistant-humans

Retirement

Terry Anderson's blog - May 3, 2015 - 23:44
This month I turn 65 and of course had to try out the Howoldbot to confirm it. Much to my amazement, it got my age correct (minus 10 days). Well, the picture was taken a couple of years ago, so I guess I am an early maturer! Reaching this milestone has triggered my long-standing […]

BusinessTown

Jon Dron's bookmarks - May 3, 2015 - 11:01

Richard Scarry meets silicon valley. Wonderful and true.

Address of the bookmark: http://welcometobusinesstown.tumblr.com/

The Linearity of Stephen Downes. Or a tale of two Stephens

elearnspace (George Siemens) - May 3, 2015 - 10:22

Stephen Downes responds to my previous post: “I said, “the absence of a background in the field is glaring and obvious.” In this I refer not only to specific arguments advanced in the study, which to me seem empty and obvious, but also the focus and methodology, which seem to me to be hopelessly naïve.”

Stephen makes the following points:
1. George has recanted his previous work and is now playing the academic game
2. Research as it is done in the academy today is poor
3. Our paper is bad.

Firstly, before I respond to these three points, I want to foreground an interesting aspect of Stephen’s dialogue in this post. I’m going to call it the “academic pick-up artist” strategy (i.e. tactics to distract from the real point of engagement or to bring your target into some type of state of emotional response). I first encountered this approach from the talented Catherine Fitzpatrick (Prokofy Neva) during CCK08. Here’s how it works: employ strategies that are intended to elicit an emotional response but don’t quite cross over into ad hominem attacks. The language is at times dismissive, humorous, and aggressive. In Stephen’s case, he uses terms such as: hopelessly naïve, recant his previous work, a load of crap, a shell game, a con game, trivial, muddled mess, nonsense. These flamboyant terms have an emotional impact that is not about the research, and they don’t advance the conversation toward resolution or even shared understanding. I’ll try to avoid responding in a similar spirit, but I’ll admit that it is not an easy temptation to resist.

Secondly, Stephen makes some statements about me personally. He is complimentary in his assessment of me as a person. I have known Stephen since he did a keynote in Regina in 2001. I’ve followed his work since and have greatly valued his contributions to our field and his directness. I count him as a friend and close collaborator. I enjoy differences of opinion and genuinely appreciate and learn from his criticism. (do a “George Siemens” search on OLDaily – he has provided many learning opportunities for me).

Stephen says a few things about my motivations that require some clarification, specifically that I am trying to make an academic name for myself and that I am recanting previous work. I honestly don’t care about making an academic name for myself. I am motivated by doing interesting things that have an impact on access to learning and quality of learning for all members of society. I am a first-in-family degree completer – as an immigrant and from a low socio-economic background. There are barriers that exist for individuals in this position: psychological, emotional, and economic. Higher education provides a critical opportunity for people to move between the economic-social strata of society. When access is denied, society becomes less equitable and hope dims. My interest in preparing for digital universities is to ensure that opportunities exist, equity is fostered, and a democratic and engaged citizenry is developed. The corporatization of higher education is to be resisted, as values of “profit making” are often in conflict with values of “equity and fairness”. I want my children to inherit a world that is more fair and more just than what my generation experienced.

I will return later to Stephen’s assertion that I am recanting previous work.

1. George has recanted his previous work and is now playing the academic game

With academic pickup artistry and my motivations foregrounded, I’ll turn to Stephen’s assertions.

It has in recent years been engaged in a sustained attack on the very idea of the MOOC and alternative forms of learning not dependent on the traditional model of the professor, the classroom, and the academic degree. It is resisting, for good reason, incursions from the commercial sector into its space, but as a consequence, clinging to antiquated models and approaches to research.

This gets at the heart of views that Stephen and I have discussed on numerous occasions. I believe in the value of the professoriate. In this instance, he is Illich to my Freire. As I interpret Stephen’s work, he would like to see all learning opportunities and control shift to the individual, and sees limited value in a higher education system that is as much about preserving faculty positions as it is about preserving the academy. Stephen and I both resist commercialization of education but vary in how we want to see the university of the future. Stephen wants a university model without universities. This comes, I believe, from his unfortunate experiences in doing his PhD, where his supervisory panel played a heavy hand in determining what was and wasn’t research that they valued. I’m sure his experience isn’t unique.

Faculty can be stunning idiots when it comes to preserving and perpetuating their egos. The pursuit of knowledge and advocacy for equity often takes a back seat to ego and the goal of building a faculty “mini me” who is expected to pick up a research stream done by a panel or department and toe the line. In contrast to Stephen’s views, I love universities. I want a future of more, not fewer, universities. Universities are not perfect, but they are the best model that we currently have to enable individuals to improve their position in life, and a power structure that exists to counter and comment on the corporate and government power structures. Can these goals be realized by networks of individuals (i.e. the second superpower)? If the world were populated primarily with Stephens, then it might be possible. For many people, however, education is not a goal in itself, but rather a means to employment. Systems are needed to preserve and perpetuate the highest ideals of society. If left to chance, then the views of the most aggressive will become the norm. While society slept, many of the wealthiest were busy creating a tax system that preserved their resources and created inequity. In the past, unions existed to serve as an organizing structure to advocate for the rights of individual workers. Stephen would argue that we could today do this organizing and democracy-preserving work through networks. I agree that networks are important, but argue that institutions are a type of network that has been configured to better meet these needs. Some structure is needed. Perhaps not as much as we see today in universities, but a minimum level of organization is required in order to provide learning opportunities to society’s disenfranchised. Simply giving people access is not enough. Social, scaffolded, and structured support is needed.

Perhaps as a result, part of what Siemens has had to do in order to adapt to that world has been to recant his previous work… This recantation saddens me for a variety of reasons. For one thing, we – Siemens and myself and others who were involved in the development of the MOOC – made no such statements. In the years between 2008, when the MOOC was created, and 2011, when the first MOOC emerged from a major U.S. university, the focus was on innovation and experimentation in a cautious though typically exuberant attitude.

I haven’t recanted my previous work. Stephen displays a linearity of thought, of cause/effect, that confuses me. I see the world in networked structures. Learning is about network making at neuronal, conceptual, and external levels. Knowledge is networked. The history of ideas is networked. I don’t see a “one or the other” approach to research, to corporate involvement in education, or to learning in general. Instead, I see 3D lattice-like network structures that have multiple dimensions and connections between those dimensions.

Siemens has moved over to that camp, now working with EdX rather than the connectivist model we started with… Again, these rash and foolish statements [from Agarwal] are coming from a respected university professor, a scion of the academy, part of this system Siemens is now attempting to join.

I disagree with this statement, largely because I have privileged access to my own thinking. In this instance, and on at least one prior occasion (a talk I did at Online Educa many years ago, after which he stated that I had become fully corporate), Stephen is putting me in a box. Nobody puts George in a box! I am part of the academy in terms of employment. I am part of the academy by nature of grant writing and research. I am part of the academy in terms of publishing with my peers. But I am not only a one-dimensional entity. I did not take a traditional academic route. My publication history is not typical. Many of my citations come from open public works rather than traditional publications. To say that I have recanted prior work is simply not true. I am bringing my previous work into a different context – one that allows for networks and university structures to exist. Stephen is doing something similar with his work with LPSS. Has he sold out to the corporate oil and gas sector?

The inclusion of the Chronicle article as part of Stephen’s comments makes this a more complex discussion. We are now not only looking at what Stephen feels is a bad report, but that my professional ambitions are now being interpreted through a Chronicle piece. My criticism here, and something that was not clear in the Chronicle article, is about the academy’s embrace of MOOCs. Stephen takes the “we” personally, whereas he was never the intended target of the “we”. I would love to see all media interviews and recordings posted fully with articles such as this. My use of “we” in the above quote is problematic. By “we”, I was speaking about education/hypesters/corporate entities like Udacity/Coursera. This is something that Rolin Moe also asks about.

And what is key here is that he [George, over here, still in a box] does not believe our work was based in research and evidence… He says nice things about us. But he does not believe we emphasize research and evidence.

I was making an argument that didn’t come off clearly. This is perhaps a similar failing to Stephen’s previous assertions that his work is about “making” not only reporting. I don’t believe he meant it in the way that others interpreted it. What Stephen was saying there, and I’m saying here, is that there is an approach to work (in my case research and in his case writing software) that produces hope for desirable outcomes rather than despair at seeing a seemingly inevitable techno-solutionist outcome. I’m not denying that Stephen does research. But he has placed himself in a difficult position: he doesn’t want the institution of higher education but he wants to be seen by people in the academy as someone who does the same type of work as they do. Stephen defines himself as a philosopher. His papers reflect this spirit. He doesn’t frequently subject his ideas to the traditional peer review that defines academic research (for obvious reasons – he doesn’t trust or feel that process has much value). His writing is open and transparent, however, so anyone could engage and critique if they were so inclined.

2. Research as it is done in the academy today is poor

The comments above aren’t a direct engagement yet with our paper. In the second half of this post, Stephen expands on his primary concerns, which are about educational research in general.

He says:

Why is this evidence bad? The sample sizes are too small for quantificational results (and the studies themselves are inconsistent so you can’t simply sum the results). The sample is biased in favour of people who have already had success in traditional lecture-based courses, and consists of only that one teaching method. A very narrow definition of ‘outcomes’ is employed. And other unknown factors may have contaminated the results. And all these criticisms apply if you think this is the appropriate sort of study to measure educational effectiveness, which I do not.

Educational research is often poorly done. Research in social systems is difficult to reduce to a set of variables and relationships between those variables. Where we have large amounts of data, learning analytics can provide insight, but they often require additional contextual and qualitative data. Where studies, such as Bonnie Stewart’s recent PhD, are qualitative, criticism about sample size can be levelled. Both criticisms are unfair in that no single node represents the whole knowledge network. Research is a networked process of weaving together results, validating results, refuting results, and so on. It is essentially a conversation that happens through results and citations. The appeal to evidence is essentially to state that opinions alone are not sufficient. The US Department of Education has a clear articulation of what it will count as evidence for grants. It’s a bit depressing, actually: a utopia for RCTs. While Stephen says our evidence is poor, he doesn’t provide what he feels is better evidence. Where, outside of peer-reviewed articles and meta-studies, can academics, administrators, and policy makers find support and confidence to make decisions (the stated intent in the introduction of our report)? What is our foundation for making decisions? If the foundation is opinions and ideas without evidence, then any edtech startup’s claim is equally valid to researchers, bloggers, and reformers. Where is the “real research being performed outside academia”, and what are the criteria for calling that activity research while dismissing what’s going on in the academy, funded by NSF, JISC, OLT, and SSHRC, as largely trivial?

Stephen then makes an important point, one that needs to be considered: that the meta-studies we used are “hopelessly biased in favour of the traditional model of education as practiced in the classrooms where the original studies took place.” This is a significant challenge. How do we prepare for digital universities when we are largely duplicating classrooms? Where is the actual innovation? (I’d argue much of it can be found in things like cMOOCs and other technologies that we address in chapter 5 of the report.) Jon Dron largely agrees with Stephen and suggests that a core problem exists in the report in that it is a “view from the inside, not from above.”

I need to reflect more on Jon’s and Stephen’s insight about research rooted in traditional classrooms and the suitability of assessing that against a networked model of education and society.

3. Our paper is bad

At this stage, Stephen turns to the paper itself. Short answer: he doesn’t like it and considers it trivial. The list of what he doesn’t like is actually rather small.

At this stage of reviewing his post, I’m left with the impression that much of Stephen’s complaint about our paper is actually a discussion with himself: the Stephen who disagreed with his PhD supervisory committee and the Stephen who today has exceeded the impact of the members of that committee through blogging, his newsletter, presentations, and software writing. Our paper appears to be more of a “tool to think with”, one that enables Stephen to hold that discussion between his two selves: the Stephen of today affirming that the Stephen in front of the PhD committee made the right decision – that there are multiple paths to research, that institutions can be circumvented, and that individuals, in a networked age, have control and autonomy.

Stephen’s next statement is wrong: “With a couple of exceptions, these are exactly the people and the projects that are the “edtech vendors” Siemens says he is trying to distance himself from. He has not done this; instead he has taken their money and put them on the committee selecting the papers that will be ‘representative’ of academic research taking place in MOOCs.”

The names listed were advisors on the MOOC Research Initiative – i.e. they provided comments and feedback on the timelines and methods. They didn’t select the papers. The actual peer review process included a much broader list, some from within the academy and some from the outside.

They do not have a background in learning technology and learning theory (except to observe that it’s a good thing).

In my previous post, I stated that we didn’t add to the citations. We analyzed those that were listed in the papers that others submitted to MRI. Our analysis indicated that popular media influenced the MOOC conversation and the citations used by those who submitted to the grant. Many had a background in education. George Veletsianos shares his recent research:

Our tests showed that the MOOC literature published in 2013-2015 differed significantly from the MRI submissions: our corpus had a greater representation of authors from Computer Science and the Gašević et al. corpus had a greater representation of authors from Education and Industry. In other words, our corpus was less dominated by authors from the field of education than were the MRI submissions. One of Downes’ criticisms is the following: “the studies are conducted by people without a background in education.” This finding lends some support to his claim, though a lot of the research on MOOCs is from people affiliated with education; to support the claim further, one could examine the content of these papers and identify whether an educational theory is guiding their investigations.

He goes on to say that the MOOC conversation has changed and that greater interdisciplinarity now exists in research.

Final thoughts

Stephen and I have had variations of the conversation above many times. Sometimes it has centred on views of what is acceptable knowledge. At other times, on the role of academics and knowledge institutions in networks. Some discussions have been more political. At the core, however, is common ground: an equitable society with opportunities for all individuals to make the lives that they want without institutions (and faculty, in this case) blocking the realization of those dreams. We differ in how to go about achieving this. I value the legacy of universities and desire a future where they continue to play a valuable role. Stephen imagines a future of greater individual control, fewer boundaries, and no universities. Fundamentally, it’s a difference of how to achieve a vision that we both share.

The cost of time

Jon Dron's blog - May 1, 2015 - 10:49
A few days back, an email was sent to our ‘allstaff’ mailing list inviting us to join in a bocce tournament. This took me a bit of time to digest, not least because I felt impelled to look up what ‘bocce’ means (it’s an Italian variant of pétanque, if you are interested). I guess this took a couple of minutes of my time in total. And then I realized I was probably not alone in this - that over a thousand people had also been reading it and, perhaps, wondering the same thing. So I started thinking about how we measure costs. 

The cost of reading an email

A single allstaff email at Athabasca will likely be read by about 1200 people, give or take. If such an email takes one minute to read, that's 1200 minutes - 20 hours - of the institution’s time being taken up with a single message. This does not, however, count the disruption cost of interrupting someone's train of thought, which may be quite substantial. For example, this study from 2002 reckons that, not counting the time taken to read an email, it takes an average of 64 seconds to return to previous levels of productivity after reading one. Other estimates based on different studies are much higher - some suggest the real recovery time from interruptions to tasks could be as high as 15-20 minutes. Conservatively, though, it is probably safe to assume that, taking interruption costs into account, an allstaff email that is read but not acted upon consumes an average of two minutes of a person's time: in total, that's about 40 hours of the institution's time for every message sent. Put another way, we could hire another member of staff for a week for the time taken to deal with a single allstaff message, not counting the work entailed by those who do act on the message, nor the effort of writing it. It would therefore take roughly 48 such messages to account for a whole year of staff time. We get hundreds of such messages each year.

But it’s not just about such tangible interruptions. Accessing emails can take a lot of time before we even get as far as reading them. Page rendering just to view a list of messages on the web front end of our email system is an admirably efficient 2 seconds (i.e. 40 minutes of the organization’s time for everyone to be able to see a page of emails, without even reading their titles). Let’s say we all did that an average of 12 times a day - that's 8 hours, or more than a day of the institution's time, taken up with waiting for that page to render each day. Put another way, as we measure such things, if it took four seconds, we would have to fire someone to pay for it.

As it happens, for another university for which I have an account, using MS Exchange, simply getting to the login screen of its web front end takes 4 seconds. Once logged in (a further few seconds, thanks to Exchange's insistence on forcing you to tell it that your computer is not shared even though you have told it that a thousand times before), loading the page containing the list of emails takes a further 17 seconds. If AU were using the same system, using the same metric of 12 visits each day, that could equate to around 68 hours of the institution's time every single day, simply to view a list of emails, not including a myriad of other delays and inefficiencies when it comes to reading, responding to and organizing such messages. Of course, we could just teach people to use a proper email client and reduce the delay to one that is imperceptible because it occurs in the background - webmail is a truly terrible idea for daily use - or simply remind them not to close their web browsers so often, or to read their emails less frequently. There are many solutions to this problem. Like all technologies, especially softer ones that can be used in millions of ways, it ain't what you do, it's the way that you do it.
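If you want to check or play with the arithmetic, here is a rough back-of-envelope sketch in Python. Every figure in it is one of the guesses above - 1200 readers, two minutes per message including recovery, a 40-hour week, a 48-week working year - an illustrative assumption, not a measurement:

```python
# Back-of-envelope cost of a single 'allstaff' email.
# Every figure below is an assumed guess from the post, not a measurement.

STAFF = 1200           # people who read an allstaff message (assumption)
MINUTES_PER_EMAIL = 2  # reading time plus interruption-recovery time (assumption)
HOURS_PER_WEEK = 40    # one notional staff-week
WEEKS_PER_YEAR = 48    # working weeks in a year

def cost_in_hours(readers: int, minutes_each: float) -> float:
    """Total institutional time, in person-hours, consumed by one message."""
    return readers * minutes_each / 60

hours = cost_in_hours(STAFF, MINUTES_PER_EMAIL)
print(f"One allstaff email: {hours:.0f} person-hours "
      f"({hours / HOURS_PER_WEEK:.1f} staff-weeks)")

# How many such messages add up to a full year of one person's time?
messages_per_year = (WEEKS_PER_YEAR * HOURS_PER_WEEK) / hours
print(f"{messages_per_year:.0f} such messages = one year of staff time")
```

Run it and you get 40 person-hours per message and 48 messages per staff-year, as above; swap in your own guesses to see how sensitive the conclusion is.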

But wait - there's more

Email is just a small part of the problem, though: we use a lot of other websites each day. Let’s conservatively assume that, on average, everyone at AU visits, say, 24 pages in a working day (for me that figure is always vastly higher) and that each page averages about 5 seconds to load. That’s two minutes per person. Multiplied by 1200, it's another week of the institution’s time ‘gone' every day, simply waiting for pages to load. And then there are the madly inefficient, bureaucratized processes that are dictated and mediated by poorly tailored software. When I need to log into our CRM system, I reckon that simply reading my tasks takes a good five minutes. Our leave reporting system typically eats 15 minutes of my time each time I request leave (it replaces one that took 2-3 minutes). Our finance system used to take me about half an hour to enter expenses for a conference but, since downgrading to a baseline version, now takes me several hours, and it takes even more time from others who have to give approvals along the way. Ironically, the main intent behind implementing this was to save money spent on staffing. I could go on, but I think you see where this is heading. Bear in mind, though, that I am just scratching the surface.
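The page-load sums work the same way. Again, a rough sketch, using the illustrative guesses from this paragraph (24 pages a day at 5 seconds each, across 1200 people) rather than measured values:

```python
# Aggregate cost of waiting for web pages to load, per working day.
# All figures are illustrative assumptions from the post, not measurements.

STAFF = 1200          # people at the institution (assumption)
PAGES_PER_DAY = 24    # pages each person visits in a working day (assumption)
SECONDS_PER_LOAD = 5  # average load time per page (assumption)
HOURS_PER_WEEK = 40   # one notional staff-week

seconds_each = PAGES_PER_DAY * SECONDS_PER_LOAD  # 120 s = 2 minutes per person
total_hours = STAFF * seconds_each / 3600        # person-hours per day

print(f"{total_hours:.0f} person-hours per day spent waiting "
      f"({total_hours / HOURS_PER_WEEK:.1f} staff-weeks)")
```

That comes to 40 person-hours - one notional staff-week - every day, which is the figure claimed above.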

Time and work

My point in writing this is not to ask for more efficient computer and admin systems, though that would indeed likely be beneficial. Much more to the point, I hope that you are feeling uncomfortable or even highly sceptical about how I am measuring this. Not with the figures: it doesn’t much matter whether I am wrong with the detailed timings or even the math. It is indisputable that we spend a lot of time dealing with computer systems and the processes that surround them every day, and small inefficiencies add up. There's nothing particularly peculiar to ICTs about this either - for instance, think of the time taken to walk from one office to another, to visit the mailroom, to read a noticeboard, to chat with a colleague, and so on. But is that actually time lost or does it even equate precisely to time spent?  I hope you are wondering about the complex issues with equating time and dollars, how we learn, why and how we account for project costs in time, the nature of technologies, the cost vs value of ICTs, the true value of bocce tournament messages to people that have no conceivable chance of participating in them (much greater than you might at first imagine), and a whole lot more. I know I am. If there is even a shred of truth in my analysis, it does not automatically lead to the conclusion that the solution is simply more efficient computer systems and organizational procedures. It certainly does bring into question how we account for such things, though, and, more interestingly, it highlights even bigger intangibles: the nature and value of work itself, the nature and value of communities of practice, the role of computers in distributed intelligence, and the meaning, identity and purpose of organizations. I will get to that in another post, because it demands more time than I have to spend right now (perhaps because I receive around 100 emails a day, on average). 

On Research and Academic Diversity

elearnspace (George Siemens) - April 30, 2015 - 12:15

In my previous post, I mentioned the release of our report Preparing for the Digital University. Stephen Downes responds by saying “this is a really bad study”. He may be right, but I don’t think it is for the reasons that he suggests: “What it succeeds in doing, mostly, is to offer a very narrow look at a small spectrum of academic literature far removed from actual practice”. This resulted in a Twitter exchange about missing citations and forgotten elearning history. Rolin Moe responded by saying that the history we included in our citation analysis of MOOCs was actually the one that most non-elearning folks follow: “depending on lens, Friedman Pappano & Young are more representative of who’s driving EdTech conversation”.

We took two approaches in the report. The first was a broad citation analysis of meta-studies in distance, online, and blended learning; this forms the first three chapters. While we no doubt missed some sources, we addressed many of the most prominent (and yes, prominence is not a statement of quality or even impact). The second, in the fifth chapter, was an evaluation of the citations in submissions to the MOOC Research Initiative, which received close to 300 of them. We only analyzed the citations – we didn’t add to them or comment on their suitability. Instead, our analysis reflects the nature of the dialogue in academic communities. In this regard, Stephen’s criticism is accurate: the narrative missed many important figures and many important developments.

The heart of the discussion for me is about the nature of the educational technology narrative. At least three strands of discourse exist: the edtech hypesters, the research literature in peer-reviewed publications, and the practitioner space. These are not exclusive spaces, as there is often overlap. Stephen is the most significant figure in elearning. His OLDaily is read by tens of thousands of readers daily – academics, students, companies. His work is influential not only in practice but also in research, as his Google Scholar profile indicates. Compare his citations with many academics in the field and it’s clear that he has an impact on both practice and research.

Today’s exchange comes against the backdrop of many conversations that I’ve had over the past few weeks with individuals in the alt-ac community. This community, certainly its blogs and folks like Bonnie Stewart, Jim Groom, D’Arcy Norman, Alan Levine, Stephen Downes, Kate Bowles, and many others, is the most vibrant knowledge space in educational technology. In many ways, it is five years ahead of mainstream edtech offerings. Before blogs were called web 2.0, there were Stephen, David Wiley, Brian Lamb, and Alan Levine. Before networks in education were cool enough to attract the MacArthur Foundation, there were open online courses and people writing about connectivism and networked knowledge. Want to know what’s going to happen in edtech in the next five years? This is the space where you’ll find it, today.

What I’ve been grappling with lately is “how do we take back education from edtech vendors?”. The jubilant rhetoric and general nonsense cause me mild rashes. I recognize that higher education is moving from an integrated end-to-end system to more of an ecosystem with numerous providers and corporate partners. We have gotten to this state on auto-pilot, not through intentional vision.

When technology drives education, a number of unwelcome passengers come along: a focus on efficacy over impact, metrics of management, reductionist thinking, etc. To sit at the table with academics and corporate players is essentially to acquiesce to capital as a driving and motivating factor. Educators have largely been outmaneuvered, as indicated by the media’s tendency to frame any resistance by faculty and teachers as almost luddite. We can’t compete through capital at this table. So instead we have to find an additional lever for influence.

One approach is to emphasize loosely coupled networks organized by ideals through social media. This is certainly a growing area of societal impact on a number of fronts including racism, sexism, and inequality in general. In education, alt-ac and bloggers occupy this space.

Another approach, and one that I see as complementary rather than competitive, is to emphasize research and evidence. At the decision-making table in universities and schools, research is the only lever that I see as having capacity comparable to capital in shaping how decisions are made and how values are preserved. This isn’t to discount socially networked organization or alt-ac. It is to say, however, that in my part of the world, and where I am currently in my career/life, this is the most fruitful and potentially influential approach that I can adopt.

Preparing for the Digital University

elearnspace (George Siemens) - April 30, 2015 - 06:19

We’ve released a new report: Preparing for the Digital University: a review of the history and current state of distance, blended, and online learning (.pdf).

The report is an attempt to reposition the narrative of digital learning away from “look, my cool new technology does this” to something more like “here’s what we know from research and here’s what we can extrapolate”. Innovation is a bunnies-and-kittens type of concept – who could possibly oppose it? But sometimes new is not better, especially when it impacts the lives of people. Remember the failure of the Udacity and San Jose State University project? Even passing familiarity with research in the learning sciences could have anticipated the need for scaffolded social support. Instead, a large number of at-risk students had yet another blow delivered to their confidence as learners, further entrenching negative views of their capability to succeed in university. This is bad innovation. It hurts people while it gains media accolades and draws VC funding. With our report, we are hoping to address exactly this type of failure by providing a research lens on how technology and learning are related in various contexts.

Five articles are included in the report and provide an overview of the research literature, while a final article looks at future technology infrastructure:
- Distance education
- Blended learning
- Online learning
- Credentialing
- MOOC research
- Future learning technology infrastructures

From the introduction:

It is our intent that these reports will serve to introduce academics, administrators, and students to the rich history of technology in education, with a particular emphasis on the importance of the human factors: social interaction, well-designed learning experiences, participatory pedagogy, supportive teaching presence, and effective techniques for using technology to support learning.

The world is digitizing and higher education is not immune to this transition. The trend is well underway and seems to be accelerating as top universities create departments and senior leadership positions to explore processes of innovation within the academy. It is our somewhat axiomatic assessment that in order to understand how we should design and develop learning for the future, we need to first take a look at what we already know. Any scientific enterprise that runs forward on only new technology, ignoring the landscape of existing knowledge, will be sub-optimal and likely fail. To build a strong future of digital learning in the academy, we must first take stock of what we know and what has been well researched.

Can Behavioral Tools Improve Online Student Outcomes? Experimental Evidence from a Massive Open Online Course

Jon Dron's bookmarks - April 28, 2015 - 11:35

A well-written and intelligently argued paper from Richard W. Patterson, using an experimental (well, nearly) approach to discover the effects of a commitment device, a reminder tool, and a focusing tool on course completion and performance in a MOOC. It seems that providing tools that support students in pre-committing to limits on 'distracting Internet time' (and that both measure and enforce those limits) has a striking positive effect, though largely on those who appear to be extrinsically motivated: they want to successfully complete the course rather than to enjoy the process of learning. Reminders are pretty useless for anyone (I concur - personally I find them irritating and, after a while, guilt-inducing and thus more liable to cause procrastination) and blocking distracting websites has very little if any effect - unsurprising, really, because they don't block distractions at all: if you want to be distracted, you will simply find another way. This is good information.

It seems to me that those who have learned to be extrinsically motivated might benefit from this, though it will reinforce their dangerous predilection, encourage bad habits, and benefit most those who have already figured out how to work within a traditional university system and who are focused on the end point rather than the journey. While I can see some superficially attractive merit in providing tools that help you to achieve your goals by managing the process, it reminds me a little of diet plans and techniques that, though sometimes successful in the short term, are positively harmful in the long term. This is the pattern that underlies all behaviourist models - it sort-of works up to a point (the course-setter's goals are complied with), but the long-term impact on the learner is generally counter-productive. This approach will lead to more people completing the course, not more people learning to love the subject, hungry to apply that knowledge and learn more. In fact, it opposes such a goal. It is not about inculcating habits of mind but about making people do things that they do not actually want to do and, once the stimulus is taken away, will likely never want to do again, even though they want to reach some further end as a result. It is far better to concentrate on supporting intrinsic motivation and to build learning activities that people will actually want to do - challenges that they feel impelled to solve, that support social needs, and over which they feel some control. For that, the instructivist course format is ill-suited to the needs of most.

Abstract

Online education is an increasingly popular alternative to traditional classroom-based courses. However, completion rates in online courses are often very low. One explanation for poor performance in online courses is that aspects of the online environment lead students to procrastinate, forget about, or be distracted from coursework. To address student time-management issues, I leverage insights from behavioral economics to design three software tools including (1) a commitment device that allows students to pre-commit to time limits on distracting Internet activities, (2) a reminder tool that is triggered by time spent on distracting websites, and (3) a focusing tool that allows students to block distracting sites when they go to the course website. I test the impact of these tools in a large-scale randomized experiment (n=657) conducted in a massive open online course (MOOC) hosted by Stanford University. Relative to students in the control group, students in the commitment device treatment spend 24% more time working on the course, receive course grades that are 0.29 standard deviations higher, and are 40% more likely to complete the course. In contrast, outcomes for students in the reminder and focusing treatments are not statistically distinguishable from the control. These results suggest that tools designed to address procrastination can have a significant impact on online student performance.

Address of the bookmark: http://www.human.cornell.edu/pam/academics/phd/upload/PattersonJMP11_18.pdf

Assessing teachers’ digital competencies (Virtual Canuck)

Jon Dron's bookmarks - April 27, 2015 - 13:40

Terry Anderson on an Estonian approach to assessing teacher competences (and other projects) using Elgg - the same framework that underpins the Landing. I've downloaded the tool they have developed, Digimina, and will be trying it out, not just for exactly the purposes for which it was developed, but as the foundation for a more generalized toolset for sharing the process of assessment. May spark some ideas, I think.

A nice approach to methodology: Terry prefers the development of design principles as the 'ultimate' aim of design-based research (DBR), but I like the notion, used here, of software as a hypothesis. It's essentially a 'sciency' way of describing the process of trying out an idea to see whether it works: one that makes no particular claims to generality, but that both derives from and feeds a model of what can be done, what needs to be done, and why it should be done. The generalizable part is not the final stage, but the penultimate stage of design in this DBR model. In this sense, it formalizes the very informal notion of bricolage, capturing some of its iterative nature. It's not quite enough, I think, any more than other models of DBR quite capture the process in all its richness. This is because the activity of formulating that hypothesis itself follows a very similar pattern, at a much finer-grained scale, to that of the bigger model. When building code, you try out ideas and see where they take you, and that inspires new ideas through the process of writing as much as of designing and specifying. Shovelling that into a large-scale process model hides where a significant amount of the innovation actually happens, perhaps over-emphasizing the importance of explicit evaluation phases and underplaying the role of construction itself.

Address of the bookmark: http://terrya.edublogs.org/2015/04/24/assessing-teachers-digital-competencies/