Technology Enhanced Knowledge Research Institute (TEKRI)

TEKRI blogs

Our trip to Italy – April 2017

Terry Anderson's blog - May 1, 2017 - 19:48
Note: What follows is a 6 page account of the 24 days that Susan and I spent as tourists in Italy in April 2017. Hopefully it can be used by us to recall those names and dates we too easily forget, and by others to help plan similar vacations. Introduction: Despite the numerous personal and […]

What the FOLC is new in this article?

Terry Anderson's blog - April 28, 2017 - 01:17
Sorry, but I couldn’t resist spoofing, in the post title, the unfortunate sound of the acronym for the “new” model proposed in this article. Now that I’ve got it out of the way, I can only suggest that if this “divergent fork of the Community of Inquiry model” is to survive, it needs a new English […]

Two Canadian Movies – Two Canadian Narratives

Terry Anderson's blog - March 5, 2017 - 15:07
I’ve just finished watching two films made about, paid for, and watched – with interest – by many Canadians. The first film, TransCanada Summer, takes the viewer across the country in 1958. Throughout the trip the film celebrates the industrialization and progress resulting from the construction of the Trans Canada Highway – the longest highway in the world at that […]

When cats play sax

Jon Dron's blog - October 29, 2016 - 12:16

This is what happened when Beelzebub the Cat decided to try to play my saxophone after I had foolishly left it on its stand without its mouthpiece cap.

He seriously needs to work on his embouchure.

I seriously need to disinfect my mouthpiece.

Being Human in a Digital Age

elearnspace (George Siemens) - September 2, 2016 - 07:55

I’m exploring what it means to be human in a digital age and what role universities play in developing learners for this experience. Against a backdrop in which everything is changing, we aren’t paying enough attention to what we are becoming. The Becoming is the central role of education in a machine learning, artificial intelligence era. It’s great to see people like Michael Wesch exploring the formative aspect of education. Randy Bass’s work on Formation by Design is also notable and important.

I spent a few weeks in Brisbane recently working with the Faculty of Health on digital learning and how to prepare the higher education system for this new reality. In my final presentation, I focused on the needs of learners in this environment and on what we must do to help develop their capabilities to be adaptive and to respond to continual change. Slides are below.

Being Human in a Digital Age? from gsiemens

True costs of information technologies

Jon Dron's blog - August 16, 2016 - 17:13

Microsoft unilaterally and quietly changed the spam filtering rules for Athabasca University's O365 email system on Thursday afternoon last week. On Friday morning, among the usual 450 or so spams in my spam folder (up from around 70 per day in the old Zimbra system) were over 50 legitimate emails, including one to warn me that this was happening, claiming that our IT Services department could do nothing about it because it's a vendor problem. Amongst junked emails were all those sent to the allstaff alias (including announcements about our new president), student work submissions, and many personal messages from students, colleagues, and research collaborators.

The misclassified emails continue to arrive, 5 days on.  I have now switched off Microsoft's spam filter and switched to my own, and I have risked opening emails I would never normally glance at, but I have probably missed a few legitimate emails. This is perhaps the worst so far in a long line of 'quirks' in our new O365 system, including persistently recurring issues of messages being bounced for a large number of accounts, and it is not the first caused by filtering systems: many were affected by what seems to be a similar failure in the Clutter filter in May.

I assume that, on average, most other staff at AU have, like me, lost about half an hour per day so far to this one problem. We have around 1350 employees, so that's around 675 hours - 130 working days - being lost every day it continues. This is not counting the inevitable security breaches, support calls, proactive attempts at problem solving, and so on, nor the time for recovery should it ever be fixed, nor the lost trust, lost motivation, the anger, the conversations about it, the people that will give up on it and redirect emails to other places (in breach of regulations and at great risk to privacy and security, but when it's a question of being able to work vs not being able to work, no one could be blamed for that). The hours I have spent writing this might be added to that list, but this happens to relate very closely indeed to my research interests (a great case study and catalyst for refining my thoughts on this), so might be seen as a positive side-effect and, anyway, the vast majority of that time was 'my own': faculty very rarely work normal 7-hour days.

Every single lost minute per person every day equates to the time of around 3 FTEs when you have 1350 employees. When O365 is running normally it costs me around five extra minutes per day, when compared with its predecessor, an ancient Zimbra system.  I am a geek that has gone out of his way to eliminate many of the ill effects: others may suffer more.  It's mostly little stuff: an extra 10-20 seconds to load the email list, an extra 2-3 seconds to send each email, a second or two longer to load them, an extra minute or two to check the unreliable and over-spammed spam folder, etc. But we do such things many times a day. That's not including the time to recover from interruptions to our work, the time to learn to use it, the support requests, the support infrastructure, etc, etc.
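
The FTE arithmetic above is easy to sanity-check. A minimal sketch, assuming a 7.5-hour working day (the post does not state the exact figure used):

```python
# Back-of-envelope check: with 1350 employees, each minute lost per
# person per day adds up to roughly 3 full-time equivalents (FTEs).
EMPLOYEES = 1350
WORKDAY_HOURS = 7.5  # assumed length of a working day

def ftes_lost(minutes_per_person_per_day: float) -> float:
    """Full-time equivalents consumed by a per-person daily time loss."""
    hours_lost_per_day = EMPLOYEES * minutes_per_person_per_day / 60
    return hours_lost_per_day / WORKDAY_HOURS

print(round(ftes_lost(1), 1))  # one lost minute per person per day -> 3.0 FTEs
print(round(ftes_lost(5), 1))  # five minutes (the O365 estimate) -> 15.0 FTEs
```

So even the "little stuff" of five extra minutes per person per day is, on these assumptions, the equivalent of fifteen full-time staff.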

To be fair, whether such time is truly 'lost' depends on the task. Those 'lost' seconds may be time to reflect or think of other things. The time is truly lost if we have to put effort into it (e.g. checking spam mail) or if it is filled with annoyance at the slow speed of the machine, but may sometimes simply be used in ways we would not otherwise use it.  I suspect that flittering attention while we wait for software to do its thing creates habits of mind that are both good and bad. We are likely more distracted, find it harder to concentrate for long periods, but we probably also develop different ways of connecting things and different ways of pacing our thinking. It certainly changes us, and more research is needed on how it affects us. Either way, time spent sorting legitimate emails from spam is, at least by most measures of productivity, truly time lost, and we have lost a lot of it.

Feeding the vampires

It goes without saying that, had we been in control of our own email system, none of this would have happened. I have repeatedly warned that putting one of the most central systems of our university into the hands of an external supplier, especially one with a decades-long history of poor software, broken or proprietary standards, weak security, inadequate privacy policies, vicious antagonism to competitors, and a predatory attitude to its users, is a really stupid idea. Microsoft's goal is profit, not user satisfaction: sometimes the two needs coincide; often they do not. Breakages like this are just a small part of the problem. The worst effects will be on our capacity to innovate and adapt, though our productivity, engagement and workload will all suffer before the real systemic failures emerge. Microsoft had to try hard to sell it to us, but does not have to try hard to keep us using it, because we are now well and truly locked in on all sides by proprietary, standards-free tools that we cannot control, cannot replace, cannot properly understand, that change under our feet without warning, and that will inevitably insinuate themselves into our working lives.

And it's not just email and calendars (which can at least use slightly broken standards) but completely opaque, standards-free proprietary tools like OneDrive, OneNote and Yammer. Now that we have lost standards compliance and locked ourselves in, we have made it unbelievably difficult ever to change our minds, no matter how awful things get. And things will get more awful, and the costs will escalate. This makes me angry. I love my university and am furious when I see it being destroyed by avoidable idiocy.

O365 is only one system among many similar tools that have been foisted upon us in the last couple of years, most of which are even more awful, if marginally less critical to our survival. They have replaced old, well-tailored, mostly open tools that used to just work: not brilliantly, seldom prettily, but they did the job fast and efficiently so that we didn't have to. Our new systems make us do the work for them. This is the polar opposite of why we use IT systems in the first place, and it all equates to truly lost time, lost motivation, lost creativity, lost opportunity.

From leave reporting to reclaiming expenses to handling research contracts to managing emails, let's be very conservative indeed and say that these new baseline systems just cost us an average of an extra 30 minutes per working day per person on top of what we had before (for me, it is more like an hour, for others, more).  If the average salary of an AU employee is $70,000/year that's $5,400,000 per year in lost productivity. It's much worse than that, though, because the work that we are forced to do as a result is soul-destroying, prescriptive labour, fitting into a dominative system as a cog into a machine. I feel deeply demotivated by this, and that infects all the rest of my work. I sense similar growing disempowerment and frustration amongst most of my colleagues.
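
The $5,400,000 figure can be reproduced under one plausible set of assumptions; the post states only the headcount, the average salary, and the 30 minutes per day, so the working days per year and annual paid hours below are my reconstruction, not the author's stated inputs:

```python
# Hedged reconstruction of the ~$5.4M/year lost-productivity estimate.
# Given in the post: 1350 employees, $70,000 average salary, 30 min/day lost.
# Assumed here: 1750 paid hours/year (250 days x 7-hour day) and roughly
# 200 affected working days per person per year.
EMPLOYEES = 1350
AVG_SALARY = 70_000          # dollars/year, from the post
PAID_HOURS_PER_YEAR = 1_750  # assumption: 250 days x 7 hours
WORKING_DAYS = 200           # assumption: affected days/year

hourly_rate = AVG_SALARY / PAID_HOURS_PER_YEAR   # -> $40/hour
hours_lost = EMPLOYEES * 0.5 * WORKING_DAYS      # 30 min/day per person
annual_cost = hours_lost * hourly_rate
print(f"${annual_cost:,.0f}")  # -> $5,400,000
```

Different assumptions shift the total by a few million either way, which is rather the point: even the most conservative inputs produce a very large number.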

And it's not just about the lost time of individuals. Almost always, other people in the system have to play a role that they did not play before (this is about management information systems, not just the digital tools), and there are often many iterations of double-checking and returned forms, because people tend to be very poor cogs indeed. For instance, the average time it takes for me to be reimbursed for expenses is now over 6 months, up from 2-4 weeks before. The time it takes simply to enter a claim is up from a few minutes to a few hours, often spread over months, and several other people's time is also taken up by the process. Likewise, leave reporting is up from 2 minutes to at least 20, usually more, involving a combination of manual emails, tortuous per-hour entry, the ability to ask for and report leave on public holidays and weekends, and a host of other evils. For a supervisor, it is another world of pain: I have lost many hours to this, compounding the 'mistakes' of others with my own (when teaching computing, one of the things I often emphasize is that there is no such thing as user error: users can make mistakes and do weird stuff we never envisaged, but it is our failure to design things right that is the problem).

This is not to mention the hours spent learning the new systems, or the effects on productivity, not just in time and motivation, but in preventing us from doing what we are supposed to do at all. I am doing less research, not just because my time is taken with soul-destroying cog-work, but because it is seldom worth the hassle of claiming, or of trying to manage projects using badly designed tools that would fit better, though not well, in a factory. Worse, it becomes part of the culture, infecting other processes like ethics reviews, student-tutor interactions, and research & development.
In an age when most of the world has shaken off the appalling, inhuman, and empirically wrong ideas of Taylorism, we are becoming more and more Taylorist. As McLuhan said, we shape our tools and our tools shape us.

To add injury to insult, these awful things actually cost money to buy and to run, often a lot more money than they were planned to cost, delivering far smaller savings than promised, or even losses, even in the IT Services department, where they are justified on the grounds that they are supposed to cut costs. For instance, O365 cost nearly three times the initial estimates on which decisions were based, and it appears that it has not reduced the workload for those having to support it, nor the network traffic going in and out of the university (in fact both may be much worse), all the while costing us far more per year to access than the reliable and fully-featured elderly open source product it replaced. It also breaks a lot more. It is hard to see what we have gained here, though it is easy to see many losses.

Technological debt

The one justification for this suicidal stupidity is that our technological debt - the time taken to maintain, extend, and manage old systems - is unsustainable. So, if we just buy baseline tools without customization, especially if we outsource the entire management role to someone else, we save money because we don't have to do that any more.

This is - with more than due respect - utter bullshit.

Yes, there is a huge investment involved over years whenever we build tools to do our jobs and, yes, if we do not put enough resources into maintaining them then we will crawl to a halt because we are doing nothing but maintenance. Yes, combinatorial complexity and path dependencies mean that the maintenance burden will always continue to rise over time, at a greater-than-linear rate. The more you create, the more you have to maintain, and connections between the things we create add to the complexity. That's the price of having tools that work. That's how systems work. Get over it. That's how all technology evolves, including bureaucratic systems. Increasing complexity is inevitable and relentless in all technological systems, notwithstanding the occasional paradigm shift that kind-of starts the ball rolling again. Anyone who has stuck around in an organization long enough to see the long-term effects of their interventions would know this.

These new baseline systems are in no way different, save for one thing: rather than putting the work into making the machines work for us, we instead have to evolve, maintain and manage processes in which we do the work of machines. The complexity therefore impacts on every single human being who has to enact the machine, not just the developers. This is crazy. Exactly the same work has to be done, with exactly the same degree of precision as that of the machines (actually more, because we have to add procedures to deal with the errors that software is less likely to make). It's just that now it is done by slow, unreliable, fallible, amotivated human beings. For creative or problem-solving work, taking away from machines those tasks that humans should be doing would be a good thing. For mechanistic, process-driven work, where human error means the process breaks, it is either great madness, great stupidity, or great evil. There are no other options. At a time when our very survival is under threat, I cannot adequately express my deep horror that this is happening.

I suspect that the problem is in large part due to short-sighted local thinking, which is a commonplace failure in hierarchical systems, and one that gets worse the deeper and more divided the hierarchies go. We only see our own problems, without understanding or caring about where we sit in the broader system. Our IT directors believe that their job is to save money in ITS (the department dealing with IT), rather than to save money for the university. But not only are they outsourcing our complex IT functions to cloud-based companies (a terrible idea for the aforementioned reasons), they are outsourcing the work of information technologies to the rest of the university. The hierarchies mean a) that directors seldom get to see or hear of the trouble it causes, b) that they mix mainly with others at or near their hierarchical level, who do not see it either, and c) that they tend to see problems in caricature, not as detailed pictures of actual practices. As the hierarchies deepen and separate, those within a branch communicate less with others in parallel branches or with those more than a layer above or below. Messages between layers are, by design, distorted and filtered. The more layers, the greater the distortion. People take further actions based on local knowledge, and their actions affect the whole tree. Hierarchies are particularly awful when coupled with creative work of the sort we do at Athabasca, or with fields where change is frequent and necessary. They used to work OK for factories that did not vary their output much and where everything was measurable, though in modern factories that is rarely true any more. For a university, especially one that is online and thus lacks many of the short circuits found in physical institutions, deepening hierarchies are a recipe for disaster. I suppose it goes without saying that Athabasca University has, over the past few years, seen a huge deepening of those hierarchies.

True costs

Our university is in serious financial trouble that it would not be in were it not for these systems. Even if we had kept what we had, without upgrading, we would already be many millions of dollars better off, countless thousands of hours would not have been wasted, we would be far more motivated, we would be far more creative, and we would still have some brilliant people that we have lost as a direct result of this process. All of this would be of great benefit to our students and we would be moving forwards, not backwards. We have lost vital capacity to innovate, lost vital time to care about what we are supposed to be doing rather than working out how the machine works. The concept of a university as a machine is not a great one, though there are many technological elements and processes that are needed to make it run. I prefer to think of it like an ecosystem or an organism. As an online university, our ecosystem/body is composed of people and machines (tools, processes, methods, structures, rules, etc). The machinery is just there to support and sustain the people, so they can operate as a learning community and perform their roles in educating, researching and community engagement. The more that we have to be the machines, the less efficiently the machinery will run, and the less human we can all be. It's brutal, ugly, and self-destructive.

When will we learn that the biggest costs of IT fall on its end users, not on IT Services? We customized and created the tools we have now replaced for extremely good reasons: to make our university and its systems run better, faster, more efficiently, more effectively. Our ever-growing number of new off-the-shelf and outsourced systems, which take more of our time and intellectual and emotional effort, has wasted and continues to waste countless millions of dollars, not to mention huge costs in lost motivation and ill will, and in lost creativity and caring. In the process we have lost control of our tools, lost the expertise to run them, and lost the capability to innovate in the one field in which we, as an online institution, must and should have the most expertise. This is killing us. Technological debt is not voided by replacing custom parts with generic pieces. It is transferred, at a usurious rate of interest, to those who must replace the lost functionality with human labour.

It won't be easy to reverse this suicidal course, and I would not enjoy being the one tasked with doing so. Those who were involved in implementing these changes might find it hard to believe, because it has taken years and a great deal of pain to do so (and it is far from over yet - the madness continues), but breaking the system was hundreds of times easier than it will be to fix it. The first problem is that the proprietary junk that has been foisted upon us, especially when hosted in the cloud, is a one-way valve for our data, so it will be fiendishly hard to get it back again. Some of it will be in formats that cannot be recovered without some data loss. New ways of working that rely on new tools will have insinuated themselves, and will have to be reversed. There will be plentiful down-time, with all the associated costs. But it's not just about data. From a systems perspective this is a Humpty Dumpty problem. When you break a complex system, from a body to an ecosystem, it is almost impossible to ever restore it to the way it was. There are countless system dependencies and path dependencies, which mean that you cannot simply start replacing pieces and assume that it will all work. The order matters. Lost knowledge cannot be regained - we will need new knowledge. If we do manage to survive this vandalism to our environment, we will have to build afresh, to create a new system, not restore the old. This is going to cost a lot. Which is, of course, exactly as Microsoft and all the other proprietary vendors of our broken tools count upon. They carefully balance the cost of leaving them against what they charge. That's how it works. But we must break free of them because this is deeply, profoundly, and inevitably unsustainable.

Adaptive Learners, Not Adaptive Learning

elearnspace (George Siemens) - July 20, 2016 - 13:00

Some variation of adaptive or personalized learning is rumoured to “disrupt” education in the near future. Adaptive courseware providers have received extensive funding and this emerging marketplace has been referred to as the “holy grail” of education (Jose Ferreira at an EdTech Innovation conference that I hosted in Calgary in 2013). The prospects are tantalizing: each student receiving personal guidance (from software) about what she should learn next and support provided (by the teacher) when warranted. Students, in theory, will learn more effectively and at a pace that matches their knowledge needs, ensuring that everyone masters the main concepts.

The software “learns” from the students and adapts the content to each student. End result? Better learning gains, less time spent on irrelevant content, less time spent on reviewing content that the student already knows, reduced costs, tutor support when needed, and so on. These are important benefits in being able to teach to the back row. While early results are somewhat muted (pdf), universities, foundations, and startups are diving in eagerly to grow the potential of new adaptive/personalized learning approaches.

Today’s technological version of adaptive learning is at least partly an instantiation of Keller’s Personalized System of Instruction. Like the Keller Plan, a weakness of today’s adaptive learning software is the heavy emphasis on content and curriculum. Through ongoing evaluation of learner knowledge levels, the software presents next step or adjacent knowledge that the learner should learn.

Content is the least stable and least valuable part of education. Reports continue to emphasize the automated future of work (pdf). The skills needed by 2020 are process attributes, not product skills. Process attributes involve being able to work with others, think creatively, self-regulate, set goals, and solve complex challenges. Product skills, in contrast, involve the ability to perform a technical skill or routine task (anything routine is at risk of automation).

This is where adaptive learning fails today: the future of work is about process attributes, whereas the focus of adaptive learning is on product skills and low-level memorizable knowledge. I’ll take it a step further: today’s adaptive software robs learners of the development of the key attributes needed for continual learning – metacognition, goal setting, and self-regulation – because it makes those decisions on behalf of the learner.

Here I’ll turn to a concept that my colleague Dragan Gasevic often emphasizes (we are currently writing a paper on this, right Dragan?!): What we need to do today is create adaptive learners rather than adaptive learning. Our software should develop those attributes of learners that are required to function with ambiguity and complexity. The future of work and life requires creativity and innovation, coupled with integrative thinking and an ability to function in a state of continual flux.

Basically, we have to shift education from focusing mainly on the acquisition of knowledge (the central underpinning of most adaptive learning software today) to the development of learner states of being (affect, emotion, self-regulation, goal setting, and so on). Adaptive learners are central to the future of work and society, whereas adaptive learning is more an attempt to make more efficient a system of learning that is no longer needed.

Doctor of Education: Athabasca University

elearnspace (George Siemens) - July 15, 2016 - 06:36

Athabasca University has the benefit of offering one of the first fully online doctor of education programs in North America. The program is cohort-based and accepts 12 students annually. I’ve been teaching in the doctoral program for several years (Advanced Research Methods as well as, occasionally, Teaching & Learning in DE) and currently supervise 8 (?!) doctoral students.

Applications for the fall 2017 start are now being accepted, with a January 15, 2017 deadline. Just in case you’re looking to get your doctorate. It really is a top program. Terrific faculty and tremendous students.

Digital Learning Research Network Conference 2016

elearnspace (George Siemens) - June 21, 2016 - 09:35

As part of the Digital Learning Research Network, we held our first conference at Stanford last year.

The conference focused on making sense of higher education. The discussions and presentations addressed many of the critical challenges faced by learners, educators, administrators, and others. The schedule and archive are available here.

This year, we are hosting the 2nd dLRN conference in downtown Fort Worth, October 21-22. The call for papers is now open. I’m interested in knowledge that exists in the gaps between domains. For dLRN15, we wanted to socialize/narrativize the scope of change that we face as a field.

The framework of changes can’t be understood through traditional research methods. The narrative builds the house; the research methods and approaches furnish it. Last year we started building the house. This year we are outfitting it through more traditional research methods. Please consider a submission (short, relatively pain-free). Hope to see you in Fort Worth in October!

We have updated our dLRN research website with the current projects and related partners…in case you’d like an overview of the type of research being conducted and that will be presented at #dLRN16. The eight projects we are working on:

1. Collaborative Reflection Activities Using Conversational Agents
2. Onboarding and Outcomes
3. Mindset and Affect in Statistical Courses
4. Online Readiness Modules and Student Success
5. Personal Learning Graphs
6. Supporting Team-Based Learning in MOOCs
7. Utilizing Datasets to Collaboratively Create Interventions
8. Using Learning Analytics to Design Tools for Supporting Academic Success in Higher Education

Announcing: aWEAR Conference: Wearables and Learning

elearnspace (George Siemens) - May 28, 2016 - 09:40

Over the past year, I’ve been whining about how wearable technologies will have a bigger impact on how we learn, communicate, and function as a society than mobile devices have had to date. Fitness trackers, smart clothing, VR, heart rate monitors, and other devices hold promising potential for helping us understand our learning and our health. They also hold potential for misuse (I don’t know the details behind this, but the connection between affective states and nudges for product purchases is troubling).

Over the past six months, we’ve been working on pulling together a conference to evaluate, highlight, explore, and engage with prominent trends in wearable technologies in the educational process. The aWEAR conference will be held Nov 14-15 at Stanford. The call for participation is now open. Short abstracts, 500 words, are due by July 31, 2016. We are soliciting conceptual, technological, research, and implementation papers. If you have questions or are interested in sponsoring or supporting the conference, please send me an email.

From the site:

The rapid development of mobile phones has contributed to increasingly personal engagement with our technology. Building on the success of mobile, wearables (watches, smart clothing, clinical-grade bands, fitness trackers, VR) are the next generation of technologies offering not only new communication opportunities, but more importantly, new ways to understand ourselves, our health, our learning, and personal and organizational knowledge development.

Wearables hold promise to greatly improve personal learning and the performance of teams and collaborative knowledge building through advanced data collection. For example, predictive models and learner profiles currently use log and clickstream data. Wearables capture a range of physiological and contextual data that can increase the sophistication of those models and improve learner self-awareness, regulation, and performance.

When combined with existing data such as social media and learning management systems, sophisticated awareness of individual and collaborative activity can be obtained. Wearables are developing quickly, including hardware such as fitness trackers, clothing, earbuds, and contact lenses, and software, notably for the integration and analysis of data sets.

The 2016 aWEAR conference is the first international wearables in learning and education conference. It will be held at Stanford University and provide researchers and attendees with an overview of how these tools are being developed, deployed, and researched. Attendees will have opportunities to engage with different wearable technologies, explore various data collection practices, and evaluate case studies where wearables have been deployed.

What does it mean to be human in a digital age?

elearnspace (George Siemens) - May 22, 2016 - 16:53

It has been about 30 months now since I took on the role of leading the LINK Research Lab at UTA. (I have retained a cross appointment with Athabasca University and continue to teach and supervise doctoral students there.)

It has taken a few years to get fully up and running – hardly surprising. I’ve heard explanations that a lab takes at least three years to move from creation to research identification to data collection to analysis to publication. This post summarizes some of our current research and other activities in the lab.

We, as a lab, have had a busy few years in terms of events. We’ve hosted numerous conferences and workshops and engaged in (too) many research talks and conference presentations. We’ve also grown significantly – from an early staff base of four people to an expected twenty-three within a few months. Most of these are doctoral or postdoctoral students, and we have a terrific core of administrative and support staff.

Finding our Identity

In trying to find our identity and focus our efforts, we’ve engaged in numerous activities including book clubs, writing retreats, innovation planning meetings, long slack/email exchanges, and a few testy conversations. We’ve brought in well over 20 established academics and passionate advocates as speakers to help us shape our mission/vision/goals. Members of our team have attended conferences globally, on topics as far ranging as economics, psychology, neuroscience, data science, mindfulness, and education. We’ve engaged with state, national, and international agencies, corporations, as well as the leadership of grant funding agencies and major foundations. Overall, an incredible period of learning as well as deepening existing relationships and building new ones. I love the intersections of knowledge domains. It’s where all the fun stuff happens.

As with many things in life, the most important things aren’t taught. In the past, I’ve owned businesses that have had an employee base of 100+ personnel. There are some lessons that I learned as a business owner that translate well into running a research lab, but with numerous caveats. Running a lab is an entrepreneurial activity. It’s the equivalent of creating a startup. The intent is to identify a key opportunity and then, driven by personal values and passion, meaningfully enact that opportunity through publications, grants, research projects, and collaborative networks. Success, rather than being measured in profits and VC funds, is measured by impact with the proxies being research funds and artifacts (papers, presentations, conferences, workshops). I find it odd when I hear about the need for universities to be more entrepreneurial as the lab culture is essentially a startup environment.

Early stages of establishing a lab are chaotic. Who are we? What do we care about? How do we intersect with the university? With external partners? What are our values? What is the future that we are trying to create through research? Who can we partner with? It took us a long time to identify our key research areas and our over-arching research mandate. We settled on four areas: new knowledge processes, success for all learners, the future of employment, and new knowledge institutions. While technologies are often touted as equalizers that change the existing power structure by giving everyone a voice, the reality is different. In our society today, a degree is needed to get a job. In the USA, degrees are prohibitively expensive for many learners, and the result is a type of poverty lock-in that essentially guarantees growing inequality. While it's painful to think about, I expect a future of greater racial violence, public protests, and radicalized politicians and religious leaders and institutions. Essentially, the economic makeup of our society is one where higher education now prevents, rather than enables, improving one's lot in life.

What does it mean to be human in a digital age?

Last year, we settled on a defining question: What does it mean to be human in a digital age? So much of the discussion in society today is founded in a fetish to talk about change. The narrative in media is one of "look what's changing". Rarely does that surface-level assessment give way to asking "what are we becoming?". It's clear that there is much that is changing today: technology, religious upheaval, radicalization, social/ethnic/gender tensions, climate, and emerging super powers. It is an exciting and a terrifying time. The greatest generation created the most selfish generation. Public debt, failing social and health systems, and an eroding social fabric suggest humanity is entering a conflicted era of both turmoil and promise.

We can better heal than any other generation. We can also better kill, now from the comfort of a console. Globally, fewer people live in poverty than ever before. But income inequality is also approaching historic levels. This inequality will explode as automated technologies provide the wealthiest with a means to use capital without needing to pay for human labour. Technology is becoming a destroyer, not an enabler, of jobs. The consequences to society will be enormous, reflective of the "spine of the implicit social contract" being snapped by economic upheaval. The effects of uncertainty, anxiety, and fear are now being felt politically as reasonably sane electorates turn to solutionism founded in desire rather than reality (the Middle East, Austria, and Trump in the US, to highlight only a few).

In this milieu of social, technological, and economic transitions, I'm interested in understanding our humanity and what we are becoming. It is more than technology alone. While I often rant about this through the perspective of educational technology, the challenge has a scope that requires thinking integratively and across boundaries. It's impossible to explore intractable problems meaningfully through many of the traditional research approaches, where the emphasis is on reducing phenomena to variables and trying to identify interactions. Instead, a complex and connected view of both the problem space and the research space is required. Trying to explore phenomena through single-variable relationships is not going to be effective.

Complex and connected explorations are often seen to be too grandiose. As a result, it takes time for individuals to see the value of integrative, connected, and complex answers to problems that also possess those attributes. Too many researchers are accustomed to working only within their lab or institutions. Coupled with the sound-bite narrative in media, sustained and nuanced exploration of complex social challenges seems almost unattainable. At LINK we’ve been actively trying to distribute research much like content and teaching has become distributed. For example, we have doctoral and post-doctoral students at Stanford, Columbia, and U of Edinburgh. Like teaching, learning, and living, knowledge is also networked and the walls of research need the same thinning that is happening to many classrooms. Learning to think in networks is critical and it takes time, especially for established academics and administrators. What I am most proud of with LINK is the progress we have made in modelling and enacting complex approaches to apprehending complex problems.

In the process of this work, we've had many successes, detailed below, but we've also encountered failures. I'm comfortable with that. Any attempt to innovate will produce failure. At LINK, we tried creating a grant writing network with faculty identified by deans. That bombed. We've put in hundreds of hours writing grants, many of which were not funded. We were involved in a Texas state liberal arts consortium. That didn't work so well. We've cancelled workshops because they didn't find the resonance we were expecting. And hosted conferences that didn't work out so well financially. Each failure, though, produced valuable insight that sharpened our focus as a lab. While the first few years were primarily marked by exploration and expansion, we are now narrowing and focusing on those things that are most important to our central emphasis on understanding being human in a digital age.

Grants and Projects

It’s been hectic. And productive. And fun. It has required a growing team of exceptionally talented people – we’ll update bios and images on our site in the near future, but for now I want to emphasize the contributions of many members of LINK. It’s certainly not a solo task. Here’s what we’ve been doing:

1. Digital Learning Research Network. This $1.6m grant (Gates Foundation) best reflects my thinking on knowing at intersections and addressing complex problems through complex and nuanced solutions. Our goal here is to create research teams with R1 and state systems and to identify the most urgent research needs in helping under-represented students succeed.

2. Inspark Education. This $5.2m grant (Gates Foundation) involves multiple partners. LINK is researching the support system and adaptive feedback models required to help students become successful in studying science. The platform and model are the inspiration of the good people at Smart Sparrow (also the PIs), the BEST Network (medical education) in Australia, and the Habworlds project at ASU.

3. Intel Education. This grant ($120k annually) funds several post-doctoral students and evaluates the effectiveness of adaptive learning, as well as the research evidence supporting the algorithms that drive it.

4. Language in conflict. This project is being conducted with several universities in Israel and looks at how legacy conflict is reflected in current discourse. The goal is to create a model for discourse that enables boundary crossing. Currently, the pilot involves dialogue in highly contentious settings (Israeli and Palestinian students) and builds dialogue models in order to reduce the impact of legacy dialogue on current understanding. Sadly, I believe this work will have growing relevance in the US as race discourse continues to polarize rather than build shared spaces of understanding and respect.

5. Educational Discourse Research. This NSF grant ($254k) is conducted together with the University of Michigan. The project evaluates the current state of discourse research, where this research is trending, and what is needed to support the community.

6. Big Data: Collaborative Research. This NSF grant ($1.6m), together with CMU, evaluates how different architectures of knowledge spaces impact how individuals interact with one another and build knowledge. We are looking at spaces like Wikipedia, MOOCs, and Stack Overflow. Space drives knowledge production, even (or especially) when that space is digital.

7. aWEAR Project. This project will evaluate the use of wearables and technologies that collect physiological data as learners learn and live life. We'll provide more information on this soon, in particular about a conference we are organizing at Stanford in November.

8. Predictive models for anticipating K-12 challenges. We are working with several school systems in Texas to share data and model challenges related to school violence, drop out, failure, and related emotional and social challenges. This project is still in its early stages, but it holds promise in moving the mindset from one of addressing problems after they have occurred to one of creating positive, developmental, and supportive skillsets with learners and teachers.

9. A large initiative at the University of Texas Arlington is the formation of a new department called University Analytics (UA). This department is led by Prof Pete Smith and is a sister organization to LINK. UA will be the central data and learning analytics department at UTA. SIS, LMS, graduate attributes, employment, and related data will be analyzed by UA. The integration between UA and LINK is aimed at improving the practice-research-back-to-practice pipeline. Collaborations with SAS, Civitas, and other vendors are ongoing and will provide important research opportunities for LINK.

10. Personal Learning/Knowledge Graphs and Learner profiles. PLeG is about understanding learners and giving them control over their profiles and their learning history. We've made progress on this over the past year, but we are not yet at a point where we can release a "prototype" of PLeG for others to test and engage with.

11. Additional projects:
- InterLab – a distributed research lab, we’ll announce more about this in a few weeks.
- CIRTL – teaching in STEM disciplines
- Coh-Metrix – improving usability of the language analysis tool
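As a purely illustrative aside on project 8 above: none of the code below comes from LINK or its school-system partners. It is a minimal hypothetical sketch of how a dropout-risk model of the kind described there might work, using a toy logistic regression trained by gradient descent over invented student features (absences per term and GPA).

```python
import math

def sigmoid(z: float) -> float:
    """Logistic function; z is clamped to avoid math.exp overflow."""
    z = max(-60.0, min(60.0, z))
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(rows, labels, lr=0.05, epochs=2000):
    """Fit logistic-regression weights with plain per-sample gradient descent.

    rows   -- feature vectors, e.g. [absences_per_term, gpa] (invented features)
    labels -- 1 if the student later dropped out, else 0
    """
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the raw score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def dropout_risk(w, b, x):
    """Predicted probability that a student with features x drops out."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Entirely fabricated training data: [absences per term, GPA] -> dropped out?
students = [[12, 1.8], [2, 3.6], [9, 2.1], [1, 3.9], [15, 1.5], [3, 3.2]]
dropped = [1, 0, 1, 0, 1, 0]

w, b = train_logistic(students, dropped)
print(dropout_risk(w, b, [14, 1.6]))  # high absences, low GPA: risk above 0.5
print(dropout_risk(w, b, [1, 3.8]))   # low absences, high GPA: risk below 0.5
```

The point of the project, as described above, is precisely that such scores would be used early and supportively rather than after problems occur; a real model would also need far richer features, proper validation, and careful attention to fairness.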

Going forward

I know I’ve missed several projects, but at least the above list provides an overview of what we’ve been doing. Our focus going forward is very much on the social and affective attributes of being human in our technological age.

Human history is marked by periods of explosive growth in knowledge. Alexandria, the Academy, the printing press, the scientific method, industrial revolution, knowledge classification systems, and so on. The rumoured robotics era seems to be at our doorstep. We are the last generation that will be smarter than our technology. Work will be very different in the future. The prospect of mass unemployment due to automation is real. Technology is changing faster than we can evolve individually and faster than we can re-organize socially. Our future lies not in our intelligence but in our being.


Sometimes when I let myself get a bit optimistic, I’m encouraged by the prospect of what can become of humanity when our lives aren’t defined by work. Perhaps this generation of technology will have the interesting effect of making us more human. Perhaps the next explosion of innovation will be a return to art, culture, music. Perhaps a more compassionate, kinder, and peaceful human being will emerge. At minimum, what it means to be human in a digital age has not been set in stone. The stunning scope of change before us provides a rare window to remake what it means to be human. The only approach that I can envision that will help us to understand our humanness in a technological age is one that recognizes nuance, complexity, and connectedness and that attempts to match solution to problem based on the intractability of the phenomena before us.

The Godfather: Gardner Campbell

elearnspace (George Siemens) - May 18, 2016 - 13:52

Gardner Campbell looms large in educational technology. People who have met him in person know what I mean. He is brilliant. Compassionate. Passionate. And a rare visionary. He gives more than he takes in interactions with people. And he is years ahead of where technology deployment currently exists in classrooms and universities.

He is also a quiet innovator. Typically, his ideas are adopted by brasher, attention-seeking, or self-serving individuals. Go behind the bravado and you'll clearly see the Godfather: Gardner Campbell.

Gardner was an originator of what eventually became the DIY/edupunk movement. Unfortunately, his influence is rarely acknowledged.

He is also the vision behind personal domains for learners. I recall a presentation that Gardner did about 6 or 7 years ago where he talked about the idea of a cpanel for each student. Again, his vision has been appropriated by others with greater self-promotion instincts. Behind the scenes, however, you’ll see him as the intellectual originator.

Several years ago, when Gardner took on a new role at VCU, he was rightly applauded in a press release:

Gardner’s exceptional background in innovative teaching and learning strategies will ensure that the critical work of University College in preparing VCU students to succeed in their academic endeavors will continue and advance…Gardner has also been an acknowledged leader in the theory and practice of online teaching and education innovation in the digital age

And small wonder that VCU holds him in such high regard. Have a look at this talk:

Recently I heard some unsettling news about position changes at VCU relating to Gardner’s work. In true higher education fashion, very little information is forthcoming. If anyone has updates to share, anonymous comments are accepted on this post.

There are not many true innovators in our field. There are many who adopt ideas of others and popularize them. But there are only a few genuinely original people doing important and critically consequential work: Ben Werdmuller, Audrey Watters, Stephen Downes, and Mike Caulfield. Gardner is part of this small group of true innovators. It is upsetting that the people who do the most important work – rather than those with the loudest and greatest self-promotional voice – are often not acknowledged. Does a system like VCU lack awareness of the depth and scope of change in the higher education sector? Is their appetite for change and innovation mainly a surface level media narrative?

Leadership in universities has a responsibility to research and explore innovation. If we don't do it, we lose the narrative to consulting and VC firms. If we don't treat the university as an object of research, an increasingly unknown phenomenon that requires structured exploration, we essentially give up our ability to contribute to and control our fate. Instead of the best and brightest shaping our identity, the best marketers and most colourful personalities will shape it. We need to ensure that the true originators are recognized and promoted so that when narrow and short-sighted leaders make decisions, we can at least point them to those who are capable of lighting a path.

Thanks for your work and for being who you are Gardner.

Hello world!

Connectivism blog (George Siemens) - March 8, 2016 - 07:11

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!
