
Technology Enhanced Knowledge Research Institute (TEKRI)

TEKRI blogs

Just about North Country Fair time.

Terry Anderson's blog - June 18, 2018 - 07:14
I am getting pretty excited about the summer solstice and the 40th annual North Country Fair. The North Country Fair is celebrated along the shores of the Lesser Slave Lake – the largest lake in Alberta. I moved there in 1971 with 6 dear friends and a $600 investment to be a “one seventh equal and […]

New Book from AUPress – An Online Doctorate for Researching Professionals

Terry Anderson's blog - June 10, 2018 - 12:33
I was pleased to receive in the post a hard copy of a new book in the Issues in Distance Education book series, for which I continue to serve as the series editor. Now of course you can read all of the books in this series, as they are available for download under Creative Commons […]

Qualitative Research Rebooted 2018

Terry Anderson's blog - May 3, 2018 - 10:14
For the past two months, I’ve been occupied with a qualitative study of teachers’ use of digital technology in Alberta Schools. The study is sponsored by the Alberta Teachers’ Association.  It has been very useful for me to get down to actually doing a full scale qualitative study after years of teaching grad students research […]

Tim Berners-Lee: we must regulate tech firms to prevent ‘weaponised’ web

Jon Dron's blog - March 13, 2018 - 14:57

TBL is rightfully indignant and concerned about the fact that “what was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms.” The Web, according to Berners-Lee, is at great risk of degenerating into a few big versions of Compuserve or AOL sucking up most of the bandwidth of the Internet, and most of the attention of its inhabitants. In an open letter, he outlines the dangers of putting so much power into hands that either see it as a burden, or who actively exploit it for evil.

I really really hate Facebook more than most, because it aggressively seeks to destroy all that is good about the Web, and it is ruthlessly efficient at doing so, regardless of the human costs. Yes, let’s kill that in any way that we can, because it is actually and actively evil, and shows no sign of getting any nicer. I am somewhat less concerned that Google gets 87% of all online searches (notwithstanding the very real dangers of a single set of algorithms shaping what we find), because most of Google’s goals are well aligned with those of the Web. The more openly people share and link, the better it gets, and the more money Google makes. It is very much in Google’s interest to support an open, highly distributed, highly connected Web, and the company is as keen as everyone else to avoid the dangers of falsehoods, bias, and the spread of hatred (which are among the very things that Facebook feeds upon), and, thanks to its strong market position and careful hiring practices, it is more capable of doing so than pretty much anyone else. Google rightly hates Facebook (and others of its ilk) not just because it is a competitor, but because it removes things from the open Web, probably spreads lies more easily than truths, and so reduces Google’s value.

I am somewhat bothered that the top 100 sites (according to Wikipedia, based on Alexa and SimilarWeb results) probably get far more traffic than the next few thousand put together, and that the long tail pretty much flattens to approximately zero after that. However, that’s an inevitable consequence of the design of the Web (it’s a scale-free network subject to power laws), and ‘approximately zero’ may actually translate to hundreds of thousands or even millions of people, so it’s not quite the skewed mess that it seems. It is, as TBL observes, very disturbing that big companies with big pockets purchase potential competitors and stifle innovation, and I agree that (like all monopolies) they should be regulated, but there’s no way they are ever going to get everything or everyone, at least without the help of politicians and evil legislation, because it’s a really long tail.
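That intuition is easy to check with a toy model. The sketch below (Python, with an assumed Zipf exponent of 1 and a made-up population of a million sites – not real traffic data) shows that although each individual tail site gets next to nothing, the tail in aggregate rivals the top 100:

```python
# Sketch: how traffic concentrates under a Zipf-like power law.
# Illustrative only -- the exponent and site count are assumptions,
# not measurements of real web traffic.

def zipf_traffic(n_sites, exponent=1.0):
    """Relative traffic share for sites ranked 1..n_sites under a power law."""
    return [1.0 / (rank ** exponent) for rank in range(1, n_sites + 1)]

traffic = zipf_traffic(1_000_000)
top_100 = sum(traffic[:100])       # the handful of giants
next_10k = sum(traffic[100:10_100])
tail = sum(traffic[10_100:])       # everyone else, each near 'approximately zero'

print(f"top 100:   {top_100:.1f}")
print(f"next 10k:  {next_10k:.1f}")
print(f"long tail: {tail:.1f}")
```

With these assumed numbers the top 100 do beat any comparable slice further down, yet the aggregated tail is of the same order of magnitude as the top 100 – which is the point: the tail is really long.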

It is also very interesting that even the top 10 – according to just about all the systems that measure such things – includes the unequivocally admirable and open Wikipedia itself, and also Reddit which, though now straying from its fully open model, remains excellently social and open. In different ways, both give more than they take.

It is also worth noting that there are many different ways to calculate rank. One ranking (based on the Mozscape web index of 31 billion domains and 165 billion pages) has a very different view of things, for instance, in which Facebook doesn’t even make it into the domains listing, and is way below WordPress and several others in the popular pages list – a direct result of it being a closed and greedy system. Quantcast’s perspective is somewhat different again, albeit focused only on US sites, which are a small but significant portion of the whole.

Most significantly, and to reiterate the point because it is worth making, the long tail is very long indeed. Regardless of the dangers of a handful of gigantic platforms casting their ugly shadows over the landscape, I am extremely heartened by the fact that, now, over 30% of all websites run on WordPress, which is both open source and very close to the distributed ideal that TBL espouses, allowing individuals and small communities to stake their claims, make a space, and link (profusely) with one another, without lock-in, central control, or inhibition of any kind. That 30% puts any one of the big monoliths, including Facebook, very far into the shade. And, though WordPress’s nearest competitor (Joomla, also open source) accounts for a ‘mere’ 3% of all websites, there are hundreds if not thousands of similar systems, not to mention a huge number of pages (50% of the total, according to W3Techs) that people still roll for themselves.

Yes, the greedy monoliths are extremely dangerous and should, where possible, be avoided, and it is certainly worth looking into ways of regulating their activities, nationally and internationally, as many governments are already doing and should continue to do. We must ever be vigilant. But the Web continues to grow and to diversify regardless of their pernicious influence, because it is far bigger than all of them put together.


More on Distance Education Journal Rankings

Terry Anderson's blog - March 8, 2018 - 10:54
Both academics and administrators love to argue about the value (impact) of their academic work.  The old adage of “Publish or Perish” still has currency. Despite the many distribution opportunities besides and beyond publishing in scholarly journals, the bean counters (myself included) love citation indexes. The basic idea is that the more your work is […]

Facebook has a Big Tobacco Problem

Jon Dron's blog - February 18, 2018 - 12:08

A perceptive article listing some of Facebook’s evils and suggesting an analogy between the tactics used by Big Tobacco and those used by the company. I think there are a few significant differences. Big Tobacco is not one company bent on profit no matter what the cost. Big Tobacco largely stopped claiming it was doing good quite a long time ago. And Big Tobacco only kills and maims people’s bodies. Facebook is aiming for the soul. The rest is just collateral damage.


Facebook’s days may be numbered as UK youth abandon the platform

Jon Dron's blog - February 14, 2018 - 21:10

The end of Facebook couldn’t come soon enough, but we’ve been reading headlines not unlike this for around a decade, yet still its malignant tumour in the lungs of the Web grows, sucking the air out of all good things.

Despite losses in the youth market (not only in the UK), as the article notes, Facebook has deep pockets and is metastasizing at a frightening rate. Instagram and WhatsApp are only the most prominent recent growths, and no doubt far from the last. Also, the main tumour itself is still evolving, backed by development funding that staggers belief. It would take a lot to cure us of this awful thing. On the optimistic side, however, Metcalfe’s Law works just as well in reverse as going forward. Networks can grow exponentially, but they can shrink just as fast. Perhaps these small losses will be the start of a cascade. Let’s hope so.
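Metcalfe’s Law values a network by its number of possible pairwise connections, which grows – and shrinks – with the square of the user count. A toy calculation (the user numbers here are invented purely for illustration) shows how sharply value falls when users leave:

```python
# Sketch of Metcalfe's Law: network value is proportional to the number
# of possible pairwise connections, so it shrinks quadratically as users go.
# User counts are made up for illustration.

def metcalfe_value(users):
    """Possible pairwise connections, the usual proxy for network value."""
    return users * (users - 1) // 2

before = metcalfe_value(1000)
after = metcalfe_value(800)   # a 20% drop in users...

loss = 1 - after / before     # ...wipes out roughly 36% of the value
print(f"{loss:.0%} of the network's value gone")
```

The asymmetry is the hopeful part: the same quadratic curve that made the network explode in value on the way up accelerates its collapse on the way down.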




Turns out the STEM ‘gender gap’ isn’t a gap at all

Jon Dron's blog - January 3, 2018 - 23:46

At least in Ontario, it seems that there are about as many women as men taking STEM programs at undergraduate level. This represents a smaller percentage of women taking STEM subjects overall because there are way more women entering university in the first place. A more interesting reading of this, therefore, is not that we have a problem attracting women to science, technology, engineering, and mathematics, but that we have a problem attracting men to the humanities, social sciences, and the liberal arts. As the article puts it:

“it’s not that women aren’t interested in STEM; it’s that men aren’t interested in poetry—or languages or philosophy or art or all the other non-STEM subjects.”

That’s a serious problem.

As someone with qualifications in both (incredibly broad) areas, and interests in many sub-areas of each, I find the arbitrary separation between them to be ludicrous, leading to no end of idiocy at both extremes, and little opportunity for cross-fertilization in the middle. It bothers me greatly that technology subjects like computing or architecture should be bundled with sciences like biology or physics, but not with social sciences or arts, which are way more relevant and appropriate to the activities of most computer professionals. In fact, it bothers me that we feel the need to separate out large fields like this at all. Everyone pays lip service to cross-disciplinary work but, when we try to take that seriously and cross the big boundaries, there is so much polarization between the science and arts communities that they usually don’t even understand one another, let alone work in harmony. We don’t just need more men in the liberal arts – we need more scientists, engineers, and technologists to cross those boundaries, whatever their gender. And, vice versa, we need more liberal artists (that sounds odd, but I have no better term) and social scientists in the sciences and, especially, in technology.

But it’s also a problem of category errors in the other direction. This clumping together of the whole of STEM conceals the fact that in some subjects – computing, say – there actually is a massive gender imbalance (including in Ontario), no matter how you mess with the statistics. This is what happens when you try to use averages to talk about specifics: it conceals far more than it reveals.

I wish I knew how to change that imbalance in my own designated field of computing, an area that I deliberately chose precisely because it cuts across almost every other field and did not limit me to doing one kind of thing. I do arts, science, social science, humanities, and more, thanks to working with machines that cross virtually every boundary.

I suspect that fixing the problem has little to do with marketing our programs better, nor with any such surface efforts that focus on the symptoms rather than the cause. A better solution is to accept and to celebrate the fact that the field of computing is much broader and vastly more interesting than the tiny subset of it that can be described as computer science, and to build up from there. It’s especially annoying that the problem exists at Athabasca where a wise decision was made long ago not to offer a computer science program. We have computing and information systems programs, but not any programs in computer science. Unfortunately, thanks to a combination of lazy media and computing profs (suffering from science envy) that promulgate the nonsense, even good friends of mine that should know better sometimes describe me as a computer scientist (I am emphatically not), and even some of our own staff think of what we do as computer science. To change that perception means not just a change in nomenclature, but a change in how and what we, at least in Athabasca, teach. For example, we might mindfully adopt an approach that contextualizes computing around projects and applications, rather than its theory and mechanics. We might design a program that doesn’t just lump together a bunch of disconnected courses and call it a minor but that, in each course (if courses are even needed), actively crosses boundaries – to see how code relates to poetry, how art can inform and be informed by software, how understanding how people behave can be used in designing better systems, how learning is changed by the tools we create, and so on.

We don’t need disciplines any more, especially not in a technology field. We need connections. We don’t need to change our image. We need to change our reality. I’m finding that to be quite a difficult challenge right now.



Learning as Artifact Creation

elearnspace (George Siemens) - September 14, 2017 - 13:04

Digitization is deceptive in that the deep impact isn’t readily observable. Remember when MOOCs were going to transform higher education? Or when personalized learning was going to do away with instructors? Going back about a century, audio, then video, was also going to disrupt education. All of these trends have been window dressing – a facade more reflective of the interests of those who advocate for them than of any substantive departure from established norms.

Yet, change is happening, often under the radar of enthusiasts because it’s harder to sell a technology product or draw clicks to a website when being nuanced and contextual. Education is an idea/information-based process. How information is accessed, created, and shared is revealing about the future of learning. Essentially, follow information in order to understand the future of higher education. Today, information is networked and digital. University transformation and proposed innovation should align to this reality to have a broad impact – notably on student learning and the development of knowledge in society.

In 2004, I tried to respond to the network/digitization alpha trend by describing the new actions and experiences that were available to learners: Connectivism: A Learning Theory for the Digital Age. This article, and work with Stephen Downes, informed the subsequent development of MOOCs and learning analytics.

Connectivism was presented as a theory that described how learning happened in networks, complex ambiguous information spaces, digital environments, and the increased opportunities of the participative web. Unfortunately, much of that theory remains undeveloped. Details regarding cognitive processes, teacher actions, learner mindsets, design models, and social interaction remain rudimentary. I’m confident that these will be developed over time, but progress has been slow. As a result, connectivism is something people cite rather than engage and develop into a more complex theory or framework of learning. But, whether connectivism or by some other name, a social networked model of learning is our future.

Enter the artifact…

One aspect of connectivism that has great potential for development is the role of the artifact in learning. With CCK08, we found fascinating activities arising from student-created artifacts. One student creates an image to detail the architecture of the course. Another updates it and adds to it. Another comes by and critiques it. The artifact serves as a social learning object. This process reflects my earlier point: big trends unfold behind the scenes over time, and in education they map to and mirror what people do with information that is digital and networked.

Here’s an example: Education over the past several centuries has been defined by the centrality of the instructor and the actions of a learner in relationship to what the instructor knows. There have always been voices that challenged this model – Dewey, Illich, Freire, Montessori – but the system of learning that defines our society is modeled on the assumption of learners needing to duplicate what instructors already know. Learning artifacts – a paper or a test – were largely held between the instructor and student. Group work and class presentations brought others into the relationship, but the message was still clear: the instructor and the content were central, all else was held in their orbit.

Then the internet happened. And later the web. Small groups of people could share without a mediator. You didn’t need a publisher to blast your thoughts to a bulletin board. Yahoo Groups, Friendster, and other early social software didn’t fully live up to the vision of the mesh, but they enabled communication. Content creation was still largely the domain of experts or people in positions of control. Britannica and newspapers were still gatekeepers. Then the late 1990s rolled around and Blogger made self-publishing reasonably accessible to anyone.

We could now create artifacts, not only talk about them.

A stunning period of web innovation occurred between 2000 and 2005: delicious, MySpace, many blog platforms, Flickr, wikis, etc. The gates were opened: everyone was a content creator and everything was subject to user creation. Everything was a possible social artifact. Take and share a picture. Post your thoughts on a blog. Tag and share valuable resources. The web had its velveteen rabbit moment and became real to people who had previously been unable to easily share their creative artifacts. Eventually we were blessed with the ugly stepchildren of this movement (Twitter, Facebook), which enabled the flow of creative artifacts but were not in themselves primarily generative technologies.

Educationally, this provided new opportunities for students. That class lecture that didn’t make sense? There is a better resource online. That stats textbook that is confusing? There is a MOOC for that. Don’t like a class? Tell the web. Don’t like your instructor? Tell the web (rate my prof). Have an important thought to share? Upload a video to youtube. An awesome song? Upload. Share. A terrific painting you’ve been working on? Upload. Share.

Consider the impact of these opportunities on education and how poorly the higher education system has responded. Consider our curriculum as a self-contained coherent resource. The goal of education? Teach this container to the students. What happens when you add artifact creation? The entire curriculum can shift. If I lecture on the development of open learning and open source technologies, I’m presenting my voice, my priorities, my values. If someone comes along and says “what about the power structure and the bias that underpins this content”? Bam. It’s a new course. Someone creates a video reacting to a lecture I delivered? Bam. It’s a new course. This doesn’t always happen on grand scales. Often the artifact has a limited impact – a brief detour in a new conversation and learning direction for students. The aggregate of these artifacts is significant because it places students in a new mindset, one defined by personal autonomy and agency.

All of this is obvious. It’s mainly about the permissions that technology enables: namely, to write ourselves and our values into any curriculum and learning interaction. The impact that this has on the learning experience is not well understood. We have theories of community (community of inquiry, community of practice). We have many theories of content and content interaction (including transactional distance). There is something about the artifact that is unique in its ability to make every learner a teacher, every contribution a redirection of learning, every interaction a reaction and augmentation.

In one of our LINK grants we are currently exploring the power of artifacts as redirective entities (an NSF grant titled COCOA). How does creating a blog post, video, or meme contribute to enlarging the curriculum? How do artifacts contribute to, and take away from, the course content? What is a well-designed artifact? What causes resonance between learner resources being shared and students who respond to those resources?

Learning Analytics Courses

elearnspace (George Siemens) - September 11, 2017 - 11:48

After about a year of planning, we can finally announce the following courses on edX focusing on learning analytics. The intent of these courses is to eventually lead into a MicroMasters and then advanced placement in an in-development Master of Science in Learning Analytics at UTA. Each course runs about three weeks, and we’ve tried to settle on prominent analytics tools for educational data so that skills can immediately be applied.

We’ve structured these courses to provide an intro to analytics in education (a good complement to the courses is our recently released Handbook of Learning Analytics – free download):

  • Learning Analytics Fundamentals
  • Social Network Analysis
  • Cluster Analysis
  • Predictive Modeling in Learning Analytics
  • Feature Engineering for Improving Learning Environments
  • Connecting Learning Data to Improve Instructional Design
  • Natural Language Processing and Natural Language Understanding in Educational Research
  • Knowledge Inference and Structure Discovery for Education
  • Multi-modal Learning Analytics

We have exceptional instructors – world leaders in the field. We are, however, well aware of the gender imbalance. We had five faculty (women) who ended up dropping out due to existing commitments. If you’d like to help right this imbalance, email me and let me know courses or topics that you’d like to instruct in the LA domain.

Open Education: Please give

elearnspace (George Siemens) - September 8, 2017 - 08:44

In about a month, David Wiley and I are teaching this course on edX: Introduction to Open Education. As we are both firm adherents to social and participatory pedagogical models (i.e. we like it when others do our work), we need some help. Specifically, we’d love to have faculty/researchers/practitioners provide short 3-5 minute reflections on one or more of the following topics:

Week 1: Why Open Matters
Week 2: Copyright, The Public Domain, and The Commons
Week 3: The 5R Activities and the Creative Commons Licenses
Week 4: Creating, Finding and Using OER
Week 5: Research on the Impact of OER Adoption
Week 6: The Next Battles for Openness: Data, Algorithms, and Competency Mapping

The process:
1. Create a short video/tutorial or any other artifact (if we have yours by Sept 14, we’ll include it in the course) responding to any of the above weekly topics
2. Upload your creation to some site where we can access/download it
3. Email me a link (gsiemens, gmail) or share on Twitter using #openedmooc or leave a link in the comments
4. Sit back and enjoy the feeling of accomplishment that will wash over you knowing you’ve made the world a better place.

New Project: Digitizing Higher Education

elearnspace (George Siemens) - June 8, 2017 - 11:51

In fall, I’ll be running a course on edX with a few colleagues on Digitizing Higher Education. This course is part of a larger initiative that I’ll be rolling out later this month focused on helping universities transition into digital systems: University Networks.

Here’s the pitch:

Higher education faces tremendous change pressure, and the structures now being formed will alter the role of universities in society for the next several generations. The best time to change a system is when it is already experiencing change. A growing number of consulting agencies and service providers are starting to enter the higher education space, bringing visions that are not tightly focused on learner development and service to knowledge advancement in research domains – i.e. a shift to utilitarian views of education. I’m concerned that in the process, universities will lose control over their enterprise and will become some version of corporate lite.

I recognize that universities need to change. They need to start with a basic question: if we were to create a model of higher education today that serves the needs of learners and society, what would it look like, given our networked and technologically infused society? The answer is not pre-existing. It’s something that we need to explore together. Societies and regions that make this change will benefit from increased employment opportunities for citizens, higher quality of life, and greater control over their future.

The project, University Networks, involves working with a small number of universities, or specific faculties and departments, that are committed to rethinking and redesigning how they operate. My goal is to bring on 30 universities and, over a period of 4 years, rethink and redesign university operations to align with the modern information and knowledge ecosystem. The intent is to impact 1 million learners over the next four years by offering innovative teaching and learning opportunities, utilizing effective learning analytics models, integrating learning across all spaces of life, and bringing a digital and networked mindset to organizational operations.
A few details:

• This is a cohort model where universities learn from each other and share those resources and practices that can be shared – for example, shared curriculum and shared quality rubrics. The cohort model enables more rapid change since the investments of all universities in the network will increase the value of the resources for everyone.
• We provide centralized consultancy (this is a non-profit) where we enter a university for two weeks of in-depth analysis of existing practices and work with leadership to plan future investments and goals. Once this analysis is done, each university will enter one of ten modules based on its current progress. For example, a university without an LMS will enter module one, whereas a university with advanced infrastructure but looking to develop online programs will enter at module four.
• The shared consultancy and cohort model lets universities work with a fraction of the investment needed when working with a traditional corporation or consultancy firm. Clearly, enabling partners will be needed, and we’ll support and advise in that area as well. Our focus, however, is on rapid innovation owned and controlled by the university.
• My motivation for this is twofold: 1. to serve the advancement of science through modern universities that reflect the information age and the changing economy; 2. to actively research systemic transformation in higher education.
• As partners in university innovation, we (through Interlab) have deep expertise in machine learning, systemic innovation, networked learning, online learning, and digitization of organizations. More on our group here: What does this mean? Basically, that we are committed to repositioning higher education for the modern era and that we have the skillsets to deliver on that commitment.
• If you are interested in learning more, please email me: contact me. We are hosting an information event on June 30. We’ll provide more information at that time about the project, getting involved, and our expectation of university partners.

We have an excellent advisory board directing this project:

• John Galvin (Intel)
• Dror Ben-Naim (Smart Sparrow)
• Katy Borner (Indiana University)
• Al Essa (McGraw-Hill)
• Casey Green (Campus Computing Project)
• Sally Johnstone (NCHEMS)
• Mark Milliron (Civitas)
• Catherine Ngugi (Open Education Africa)
• Deborah Quazzo (GSV Advisors)
• Matt Sigelman (Burning Glass)

Handbook of Learning Analytics (open)

elearnspace (George Siemens) - June 7, 2017 - 08:23

When we started the learning analytics conference in 2011, we aligned with ACM. We received a fair bit of criticism for not pursuing fully open proceedings. Some came from our sister organization, IEDMS, which has open proceedings. We made a difficult choice to go the traditional route of quality, indexed proceedings, largely to ensure that colleagues from Europe and Latin America could receive funds for their travel. Advocates for openness often don’t appreciate that a key dilemma for researchers is whether to publish for impact or publish for prestige. Prestige, as defined by so-called “reputable” journals, is often a requirement for getting government funding for travel.

To ensure broader dissemination of our research (and to cope with our guilt), we set up an open journal: the Journal of Learning Analytics.

I’m very excited about a new project that started as an idea during LAK13 in Leuven and is another commitment to openness by the Society for Learning Analytics Research: the Handbook of Learning Analytics. The book, CC-licensed, weighs in at 356 pages and provides a good snapshot of the status of learning analytics as a field. It’s a free download (both the whole book and individual chapters). Given the number of master’s programs that now incorporate learning analytics courses, and the growing number of LA master’s programs, we felt it was important to get a research document into the public space.

Hello world!

Connectivism blog (George Siemens) - March 8, 2016 - 08:11

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!