
Technology Enhanced Knowledge Research Institute (TEKRI)

TEKRI blogs

White elephants and other e-readers

Jon Dron's blog - July 23, 2016 - 09:27

When I get new devices I tend to make notes about them: it's part of my tinkering approach to research, a way to explore the edges of the adjacent possible. Most of the notes don't get read by anyone else. This often seems like a bit of a waste so, having had a couple of days of vacation (and thus mostly doing the work I felt like doing rather than the work I had to do), I have assembled in this post my notes about a few of the devices I have acquired over the past year or so, at least partially to support my thinking on e-readers (though my notes cover more features of the devices than that).

I am very interested in e-reading because I do a great deal of it, and it is the primary means by which most online learners learn. There's a fair bit of existing research into e-reading, but the vast majority of it fails to distinguish between desktop PCs, laptop PCs, dedicated e-readers, tablets and cellphones, let alone between different software tools and configurations. This is silly. It's equivalent to generically comparing e-learning and p-learning which, as we all should now know, is a completely spurious thing to do. 'Tain't what you do, it's the way that you do it. It is particularly interesting that, though there are a few variations in paper books - size, font, hard/paperback, etc - the variation is not even close to that found in e-reading hardware and software, and we have barely begun to innovate in this area yet. To do so, it is useful to understand the benefits and weaknesses of existing tools. These notes are part of that process.

The devices I will discuss here are:

  • Kindle Voyage (high-end e-reader)
  • Sony DPT-S1 (A4-size e-paper e-reader)
  • Lenovo Yoga Tab 3 (Android tablet with built-in projector)
  • Google Cardboard (generic VR viewer)
  • Pebble Time Steel (smartwatch)
  • iPad Pro and Apple Pencil (needs no introduction)

Amazon Kindle Voyage

I got this device because I wanted to know what makes something a top-of-the-line e-reader. The Kindle Voyage, though heavily criticized for its price, had (at the time I got it) pretty much swept the board in comparative reviews, coming top in almost all of them. This is therefore my reference point.

The Kindle Voyage is very small: the (6 inch) page is smaller than the average paperback book, especially the slightly larger formats used mainly for academic books. Whether this is a good or bad thing depends a lot on the book. For text, I find that it is good enough but, for diagrams, tables and images, it can be too small.

The monochrome e-ink screen is bright and very clear, with better resolution than many laser printers. It has a non-reflective etching that I have tried in bright sunlight and found to be extremely easy on the eye, with virtually no reflections unless you deliberately angle it at the sun. It is not quite paper, but extremely close to it and, in many ways, is superior to read from: flatness and consistency are mostly a positive thing, albeit that the curve of a paper page provides cues about location in a book and helps one to remember a page's unique shape. It has very even backlighting that gently glows, and dims according to the level of background lighting - this is great, though I'd like it more if it had the option to tint it with red light - the blue-ish glow is not great last thing at night, when I tend to read the most. Battery life, even when backlit, is very good: the claimed 6 weeks of life assumes only half an hour of reading a day, which is way less than I’d normally do, but that still equates to a good 20 hours between charges in real life which, for something so tiny, is good. It appears to take a couple of hours to fully charge on a typical USB connection.
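
As a rough sanity check on those figures (the two-hours-a-day reading habit below is my assumption, not Amazon's claim), the arithmetic works out like this:

    # Rough check of the Kindle Voyage battery claim. The "6 weeks at half an
    # hour a day" figures are Amazon's; the heavier reading habit is an assumption.
    claimed_weeks = 6
    advertised_hours_per_day = 0.5
    total_reading_hours = claimed_weeks * 7 * advertised_hours_per_day
    print(total_reading_hours)                      # ~21 hours of actual reading time

    my_hours_per_day = 2.0                          # closer to a heavy reader's usage
    print(total_reading_hours / my_hours_per_day)   # ~10 days between charges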

The device is very thin and very light - it feels much lighter than the average smartphone and far lighter than a small paperback - with a nice rubbery grippable back, and intelligently positioned ‘buttons’ on both sides of the screen, so it works well in either hand. The ‘buttons’ are actually pressure sensitive areas: pressing them gives a reassuring and very gentle haptic buzz. After only 10-15 minutes of pressing them this can lead to finger cramp, however, so it is good that it is also possible to swipe across or up and down a page, in a manner that is quite familiar to phone and tablet users. There are two smaller page-back 'buttons' above the main page flippers, which are quite hard to reach with one hand. There’s an on/off button on the rear of the device, just out of reach of even my long-ish fingers. This is good - it is hard to turn it off accidentally. The bezel is not huge, but is about the right size to make it easy to hold without touching the screen, about the size of a normal book margin.

Performance is notably better than that of any other e-ink devices I have used, with screen refreshing that is fast and that seldom, and barely perceptibly, flashes (a generic issue with e-ink, which starts to burn in if not zapped occasionally with a reverse image). For reading, I find page turns fast enough not to interrupt my flow of reading at all. They are much faster than flipping pages in a p-book although, as my weak eyes mean that I like to have a larger font, I tend to turn pages more often.

It has a web browser, but it's awful. Soft buttons for the keyboard and tools are often quite unresponsive. Especially annoying are the lag and the difficulty of finding the right place to press for punctuation such as the @ symbol and the period. Once you move on to pages that need scrolling it is very jerky, with multiple refreshes, and extremely slow responses to things like pinching to zoom, which is distracting to the point of making it virtually unusable for many pages: few are optimized for e-readers. Lack of colour also becomes a serious issue on such pages. That is also a noticeable problem when scrolling through my catalogue of books or the Kindle store (also available directly from the device), because many book covers blur into a grey mass: this is a surprising failure on the part of Amazon which, you might think, ought to be doing its best to sell books to you. If you cannot differentiate between them or even see their titles, there is not a lot of point. I still mostly need to get my books via a tablet, phone or PC. It is at least nice to be able to browse books on archive.org and download them (in the correct format) to the device.

On the subject of the book catalogue, the interface to it is tedious. I have hundreds of books that I like to browse, not simply search for, and it can take several minutes to scroll painstakingly through them. There are options for tagging and cataloguing books but, with a large existing catalogue, this is not a simple task. This is many times worse than even a disorganized pile of books, let alone proper bookshelves. The fact that you can search (and search for text within books) is a notable benefit, but the loss of random browsing is a serious disadvantage.

Whispersync works very well: it's very easy to pick up on one device what was left off on another. I very much like the ‘free’ 3G connection that works in most countries and that allows books to be downloaded (and purchased) from almost anywhere in the world, without the need for wifi, but I deeply hate the fact that a fair number of my books are limited by DRM to a few devices. As a researcher into such technologies, I have a great many versions of the Kindle app on many devices, so I often hit these limits, then have to work out on which machines to disable reading (is it mac 1 or mac 4 that I am actually using? Very hard to tell). In fact, I deeply hate DRM, period. It is not fiendishly hard to convert and transfer non-DRM’d books from other devices but I find the fact that Amazon insists everything should be in its proprietary format or PDF (not a good thing on a 6” screen) intensely annoying. Given that DRM is perfectly possible in the otherwise ubiquitous epub format, this is a needless constraint.

I was encouraged after getting this device to try a subscription to Kindle Unlimited, which gives (as the name implies) Netflix-like access to over a million titles - an all-you-can-eat rental smorgasbord covering a vast array of subjects and genres, all for $10/month, with up to 10 books at any one time. This has been a disappointing investment so far. The overwhelming majority of the books are ones that no one in their right mind would pay the typical asking price of between 2 and 6 dollars for, and would certainly not bother borrowing from a library. The majority are self-published, and some are scams that are not even meant to be read - they are just a means to leech a bit of money from Amazon, filled with nonsense. Within the area of science I found a great many books that are anything but scientific, with a preponderance of rubbish folk psychology, ’10 things’ books, and right-of-Hitler religious nutcases trying to disprove evolution and climate change. In fiction, there’s a lot of genre novellas and novels of the fan-fiction variety, most of which seem to be of extremely low quality and limited imagination. Very disappointing, though I have found a copious catalogue of Kurt Vonnegut books, many of which I have not read, so am happy enough for now. There are certainly some gems to be found but the effort of finding them is great, and none of those that I actually sought out have been there so far. The device does allow you to set up a library account to borrow books from your local library. I have not tried this yet, but find the idea appealing. You can, of course, do this on any device, but the convenience is worth having, especially given the complete lack of network charges.

Is it worth the money? I’d say not. Amazon’s own much cheaper alternative, the Paperwhite, has a very similar screen but is a little thicker, lacks the buzzing buttons and adaptive backlight, and is slightly slower; these are not big enough differences to be worth $100. My only other notable e-ink device till this point was a tiny and now slightly elderly Kobo with a 4” screen. Apart from size and backlight, there is not too much to choose between them. Yes, the Voyage has a notably better screen, but not so much that it is worth nearly $200 more (the Kobo cost me less than $40), and bigger is certainly better, but not $200 better. The software on the Kobo is, I would say, mostly a bit nicer, but essentially very similar. Its native epub format is way friendlier, with far more books available without the need for conversion, albeit with less wonderful sync between devices. The main differentiator is the bookstores behind them - Amazon’s catalogue is vastly bigger and better. Vastly. Though both can be used with books from elsewhere, as both are tightly integrated with their respective bookstores, this matters.

For all its weaknesses, the Voyage is a device that I have found myself using for at least an hour every day. It's a great way to read books, especially fiction. It very rarely needs charging, sits unobtrusively by my bed, and just works. The interface virtually disappears, and there are no interruptions to your reading from a dumb device that thinks it needs a place in every part of your life. It is so light that you barely notice it in your hand - so much easier than a paper book. And I love the adaptive backlighting. Though it would be easy to dim and brighten the screen manually (as in the Paperwhite), the unobtrusive automatic dimming is surprisingly pleasant.

Amazon now has an even higher end device, the Oasis, that is a little lighter, has an extra boost for the battery in the cover, an ergonomic grip, and more LEDs for even more even backlight. Apart from that, it is hard to see why it would be worth getting: everything else is much the same. The Voyage is already too expensive, especially given how much Amazon will leech from you after purchase, so I cannot imagine why one would spend another $100 for a leather cover with a battery in it.

https://www.amazon.ca/High-Resolution-Display-Adaptive-PagePress-Sensors/dp/B00IOY524S

Sony DPT-S1 e-reader

The DPT-S1 is an e-paper device that does pretty much only one thing - it lets you read and annotate PDF documents. True, it does have a note taking app that is quite serviceable and a web browser that is not at all serviceable but, basically, this is a very expensive one-trick pony that cannot even read standard ebook formats. How expensive? Over $1000 expensive. You could get a good iPad Pro for that money, or 4 or 5 Kindles. Or a pretty good PC laptop or tablet, or even a top of the line Chromebook. Or a nice bicycle. All in all, this is one incredibly expensive device that does very little.

So why did I want one? Well, obviously enough, it's for that one thing. I get to read a great many documents, many of which are already in PDF format and most of which can easily be made so. The reading area of the DPT-S1 is effectively the same as a standard sheet of office paper and, in theory at least, provides a very similar experience, with similar resolution and contrast to a slightly greyish printed sheet, and similar ability to mark up the text. As far as I know, this is the only commercially available e-paper device with a screen this size.

One of the notable ways that p-reading is normally better than e-reading is that it provides a consistent, fixed visual layout. It is better because the shape of text on a page is important in helping us to remember where we read it and in what context, and human typesetters generally pay closer attention than machines to making pages readable and appealing. Most e-book formats re-flow the text according to device, font, etc, so there are few cues of this nature. It is true that, especially  when making text larger for those with aged eyes, this is an advantage in many ways, but the loss of visual memory of the shape of the page is a cognitive trade off.  PDFs are much more like print in this regard as the format is fixed, albeit that it remains difficult to get a sense of the context of the page in the broader text. On a small device, though, PDFs are usually unreadable, or require absurd amounts of scrolling, so a device that lets you see the whole page at its native size is a very interesting idea. Could this be a step towards what we need to replace paper for reading? Well...

The size of the DPT-S1 is great. The contrast is not quite as good as black ink on a white sheet of paper: blacks are not very black, and 'whites' are definitely grey. It's not even close to the Kindle Voyage, but letters appear quite sharp and clear, and A4/Letter sized documents are very easy to read. Without the glow of most modern screens, it is relatively restful on the eyes. It is extremely light: it feels like a thick sheet of cardboard in the hand, lighter than even 20-30 pages of good printed paper, let alone the thousands of books it can carry. It is really easy to hold in one hand for prolonged periods. The screen is very readable in bright sunlight, and it is acceptable in a reasonably well-lit room. It has no backlight, though, so it is not much use in darker rooms. The battery life is fine - around 15 to 20 hours. You can certainly use it for a whole day without the need to recharge it. Recharging, through a standard micro USB plug, takes a while but you can use it at the same time or recharge it overnight.  So far, so good. After that, though, it goes rapidly downhill.

The software is truly awful. My most important intended use for the device was to make marking of student work and reviewing of papers and books, etc, easier. Alas, it does not do so. The first big problem is actually getting documents onto it. The built-in (and atrociously unusable) web browser does not recognize PDFs from Moodle or Office 365 email as known file types. It does support WebDAV, but only allows a single WebDAV server to be configured, and the wifi is as primitive as it gets and far from reliable. Worse, the device unaccountably wipes out any files you have saved should you choose to go through the incredibly slow and tortuous process of changing the WebDAV server, using the highly unresponsive and annoying keyboard (there are always characters that are at least 3 keyboards away from the default). In any case, I have found WebDAV so unreliable and slow (it often fails to connect because it takes so long to set up a simple, single wifi connection, and it is very fussy about which WebDAV variants it will support) that there is very little point in using it, even with a single server. I have found that it can use CloudApp via the web browser, which is fine for the odd one-off file, albeit that it can easily take 5 minutes to enter the URL and get the file. I could set up my PC as a WebDAV server but part of the point of this is to untether me from it and, if I'm going to be around it anyway, I might as well plug it in. The only sensible way to add files is thus to download the work onto my PC and transfer it from there via a USB cable. This is extremely clunky: it can easily take 5 minutes simply to get a file, once you factor in saving it from wherever it is in the first place (e.g. Moodle, email, review sites). Though it does have a micro-SD card slot, the hassle of unmounting then remounting the card is not worth the bother, especially as it demands removal of a small back-panel to get at the thing. Without even a means to upload files via the web browser, it is even worse trying to get work back again after annotation: USB or SD card are the only plausible ways, notwithstanding the awful WebDAV implementation. This is far too clunky: the whole point is to streamline the process, not to make it more difficult. I can see that it might be OK if I had bulk documents to download and upload but that's not how I normally work, nor how I wish to work.
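
For contrast, here is roughly what a scripted transfer could look like if the WebDAV support were dependable enough to automate from a PC. This is only a sketch of the general idea: the server URL, credentials and file names are invented for illustration, and it uses nothing more than the plain HTTP GET and PUT that basic WebDAV file transfer relies on.

    # Hypothetical sketch: pushing a PDF to, and pulling the annotated copy from,
    # a WebDAV share. All names and URLs below are made up for illustration.
    import requests

    BASE = "https://webdav.example.org/dav/marking"
    AUTH = ("username", "password")

    # Upload a document for the e-reader to pick up
    with open("student-essay.pdf", "rb") as f:
        requests.put(f"{BASE}/student-essay.pdf", data=f, auth=AUTH).raise_for_status()

    # Later, pull the annotated version back down
    r = requests.get(f"{BASE}/student-essay.pdf", auth=AUTH)
    r.raise_for_status()
    with open("student-essay-annotated.pdf", "wb") as f:
        f.write(r.content)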

The next problem is that navigation through texts is tortuous. I thought it would be great for reading and annotating a book that I am reviewing, but that turns out not to be the case. Unlike most e-readers, you cannot simply jump to references and back again. In fact, even skipping to the back of the book to look them up is incredibly tedious. As far as I can tell, you cannot even flip to the index, or the back, or the front. Switching to thumbnail view sounds promising, but actually means you lose your place as the current page is somewhere in the middle and not highlighted. Compared with the Kindle's quite neat X-Ray and other browsing tools, this feels like something from the Middle Ages. Even reading is less than perfect: it blanks the screen way too often (reversing black and white to clear the memory effect of e-paper screens) and takes too long to come back. Book-length texts take an age to load.

After some time using it, a few other issues have arisen that make it even less useful. I had been using it as a simple way to record notes such as my daily to-do list. However, every now and then - like every couple of days or so - it needs a reboot, as it loses track of what’s in any file: all of them appear blank. Sony are showing no signs of wanting to maintain this buggy firmware, and (though some have found complex ways to replace the customized Android operating system with their own) it is well locked down to prevent customization. Not that there is much to customize: it lacks even sound input or output, so it cannot do text-to-speech, let alone anything more useful. In fact, even some PDFs get mangled by it.

It feels very cheaply made: the buttons (3 standard Android buttons) are clicky, imprecise and toy-like. The body is made of flimsy plastic that bends a bit. Nothing quite fits. Its lightness means that it slips easily - indeed, it slid off my desk, making a soft landing on the floor about a metre below. Not a great thing to do, for sure, but, given that it is such a light device made of resilient plastic, I would not have expected much damage. However, the ugly fabric pen holder snapped off - apparently it is only lightly glued in place. This is shoddy. The screen itself gets messy very quickly which, given that the point is to write on it, seems a design flaw. It also picks up scratches easily. It comes with a cheap and ugly cover, but that virtually doubles the apparent weight and makes it far less comfortable to use. The dedicated cheap plastic pen is easy to lose, and there is a very small but perceptible lag between writing and the appearance of your writing on the screen. It doesn't feel quite like using a pen - the screen is too slippery and, oddly, also scratchy at the same time. I like the instant erase button on the pen, but it is too easy to press it by mistake. The fine plastic point looks flimsy: I doubt it will last long. Replacements designed for graphics tablets should work, but it doesn't inspire confidence.

Overall, this should be a very promising device, but it fails to do well even the one thing it is supposed to do. I would love a better-thought-through device with this screen format, or the same thing at a tenth of the price but, right now, this is one to avoid like the plague.

https://pro.sony.com/bbsc/ssr/product-DPTS1/

Lenovo Yoga Tab 3 Pro 10

The Lenovo Yoga Tab 3 Pro 10 is mostly a fairly conventional 10" Android tablet with one significant twist: it has an integrated pico projector. It’s remarkably hard to find the specs for the projector, but I would guess it must be about 50 lumens and perhaps 800 x 600 resolution, or maybe a little higher.

The device runs Android 5.1 with only a few slightly annoying differences from the stock version. I fail to understand why almost all manufacturers insist on doing this: while the projector does mean it needs a few small tweaks, there's no good reason to mess with the rest. It has the usual range of sensors, expandable memory (by microSD card), and a stonkingly big 10,200 mAh fast charging battery from which it is claimed one can get 18 hours of use. I think that’s an exaggeration: you’d need some gentle apps, low screen brightness, and no wifi to get anything like that but, in normal use, with web browsing, email, Kindle, a bit of streaming video and light use of the projector, I have easily got well over 12 hours, which is not bad at all. Unfortunately, being Android, it keeps eating power when you are doing nothing with it so, unlike an iPad (which could be left for weeks and still have power), it will die when left alone for a few days, unless you turn it off completely (in which case it will last well over a month). That being said, you could certainly watch three or four movies before needing to recharge. The price paid for this is, unsurprisingly, in extra weight and bulk, but that is mostly taken up by a side handle that is fairly comfortable to hold and that also contains the projector, speakers, rear camera and a really well designed built-in prop that also doubles as a means to hang it on a hook on the wall. All in all, though you are aware of the weight, it is well balanced and sits comfortably in the hand. It feels solid and well engineered. The other stand-out features are a full Quad HD screen that is at least as nice as that on the iPad Pro (though much smaller), and Dolby sound from four JBL speakers that are remarkably good at spatial stereo - it seems that the sound spreads from a far wider area than the device itself. It would be better with 3G/4G, but wifi is available most places so I can live without that, though I do miss fingerprint authentication: passcodes and gestures are not at all as convenient. One quite nice feature is that you can use anything conductive as a stylus, including a steel pen or even a pencil. Also unusual in a tablet is the inclusion of a buzzer. It is also splashproof to IP21 (i.e. it can cope with condensation and dripping water), which brings it a little closer to a p-book in resilience.

Unfortunately, even more than most Android devices, it is flaky. Apps crash very regularly, have more bugs than their iOS counterparts, pause for no obvious reason, and the whole thing feels very unresponsive most of the time. Given a quad core Atom CPU running at 1.4GHz and 2GB of RAM, this is quite surprising. It is partly a generic Android thing, but I think Lenovo have made it worse: I get nothing like these problems on other Android devices, even on those with lower specs. I have tried very hard to love Android over many years, because I approve of its (general) relative openness and its flexibility, but I always feel a sense of profound relief coming back to my far better and smoother iOS devices. It's the same trade-off as that between Windows or Linux and Macs: you can pick flexibility at the cost of some flakiness, or something that works really well but limits your choices. Even ignoring Apple's superior hardware and operating system, such inequality is inevitable: developers can test on pretty much all Apple devices, but even the biggest developers cannot hope to do so on the tens of thousands of Android machines. iOS simply works better, but good luck to anyone wanting a built-in projector or waterproofing on their iPad (though it can be done).
 
One of the main reasons I got this was to try e-reading at gigantic size. Using a projector is an interesting alternative to book-like e-readers that has not been researched much, if at all. It does work for this, up to a point. With normal room lighting the projected image is pretty bright up to about 30 inches, but decays rapidly after that. In darkness, I reckon it is pretty good at 90 inches or more. Colours are clear, the image is sharp. It does a very good job of showing anything on the screen, has intuitive controls, and is pretty smart at automatically adjusting the keystone and focus as you move the device around. The focal length is not as wide/adjustable as I’d like - you have to be some way from the wall or ceiling to get a decent picture, far further than for my dedicated pico projector, but I guess that is so that you can sit on a sofa in a normal sized room to control it. Unfortunately the focus is not wonderfully even, so the corners and edges are a bit blurred. However, with a reasonably large font, it is perfectly possible to read it. I have yet to figure out whether it is possible to display it vertically, though, for easier page reading: it seems no one in the design team considered the possibility that someone might want to do that, and it does not flip like the standard screen: pages are therefore always horizontal, whatever their original orientation. The dimness and relatively poor resolution mean that it is not great for looking at details, which is one thing I hoped might be a strength, especially for books with pictures and diagrams. If the projector had the same resolution as the screen itself, you could easily display multiple pages and read them, but that's not going to happen. Another thing that I had not really thought too deeply about until I tried it is that many of the same problems with reading on a laptop or desktop machine remain: the fixed distance between reader and book is really bad for reading, and not at all comfortable after a little while, though I have enjoyed lying in bed reading a book from the ceiling (albeit that the ceiling texture can make a big difference to legibility). However, another thing that should have been obvious to me is that, to control it, you need to be holding the device. This means two very bad things happen. First, and most annoyingly, it wobbles. This is incredibly bad for reading. Secondly, it divorces the page turning and highlighting action from the reading surface. It’s actually surprisingly difficult to coordinate hand movements on a tablet when you are not getting direct visual feedback, far worse than, say, using a graphics tablet on a conventional PC. So, though it is kind of nice to be able to read a book with a partner in bed (not something you'd want to do all the time), the projector is by no means a great e-reading device. The standard 10" screen itself is perfectly usable for reading, if a bit too shiny, and it has the widescreen aspect ratio favoured by most Android tablets, which is good for movies but not so much for reading books. The resolution of the built-in screen is very good indeed, but that is not unusual nowadays.
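
The rapid fall-off in brightness is simply geometry: the same (guessed) 50 lumens are spread over an area that grows with the square of the image diagonal. A rough sketch, using my guessed output and an assumed 4:3 image:

    # Approximate illuminance of the projected image at different sizes.
    # The 50-lumen output is a guess at the spec; aspect ratio assumed 4:3.
    LUMENS = 50.0
    ASPECT_W, ASPECT_H = 4, 3

    def lux_at(diagonal_inches):
        d = diagonal_inches * 0.0254                      # diagonal in metres
        scale = (ASPECT_W ** 2 + ASPECT_H ** 2) ** 0.5
        width, height = d * ASPECT_W / scale, d * ASPECT_H / scale
        return LUMENS / (width * height)                  # lumens per m^2 = lux

    print(round(lux_at(30)))   # ~180 lux: roughly ordinary room lighting
    print(round(lux_at(90)))   # ~20 lux: only watchable in a darkened room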

My second use case was to explore its potential as a more social device than most personal machines. Tablets tend to be more social objects than cellphones or PCs anyway - it's one of the key things that makes them a different product category. Tablets are things that get passed around, peered over together, talked about and used in a (physical) social context. People do use phones that way but only because of the convenience and availability of the devices: they are too small and too personal (texts etc keep popping up) to be really useful in that context. A device with a projector ought to be more interesting. While the projector’s limitations mean that it can’t be used outside or in a brightly lit room, and it’s not much use unless you can find a blank bit of wall to display onto (surprisingly absent in most public social venues like pubs and cafes), in the right physical context it becomes a shared object, a catalyst for conversation and conviviality, and a means to engage with one another. It seems especially useful for things like short YouTube videos, photos, and so on. Having the device on the table when there is a gathering of friends and family means that when (as usually happens) someone refers to something they have seen online, everyone can share the experience as a group, not as separate individuals one or two at a time. This changes the meaning of the activity quite considerably. It notably blurs the online/physical space. For a full TV show or movie, I would almost always prefer to make an event of it and gather round a TV or proper projector rather than use this inevitably makeshift device. However, it has already proved useful, even in that context. We had a family movie night the other day, with a large projector screen, and the PC driving it dropped its Netflix connection, refusing to reconnect. As we only had a few minutes left of the movie, it took all of 2 minutes to switch on the device, aim it at the screen, and pick up where we left off. Another thing that is very appealing about it is that it needs no wires at all - just set it down and play. I've not yet had a chance to use it in another similar setting, with two or more collocated groups working at a distance. I suspect that it might be quite effective when using webmeeting or Skype-like software, though the inability to pan the selfie camera might negate the benefits.

Overall, I am quite pleased with this: if it were my only tablet, it would do most, if not all, of what I need a tablet for, and it is not bad for the price even without the projector, being closely comparable to a similar iPad. If the software and hardware combination were more reliable, more consistent and responsive in performance, and less rough at the edges, it might be a very good competitor to my iPad Air 2, but it just isn't. They are mostly small irritations but there are many of them, from buttons that take a second to respond, to random crashes, to simple flakiness and inconsistency in software design. Taken together, they make the whole experience profoundly unsatisfying. The device does not disappear as it should. I like the prop, I like the battery life, I like the projector, even though its e-reading uses are limited. It strikes me that there's room in the market for an iPad accessory that includes an integrated projector, battery, maybe speakers and a prop. It would be easy enough to implement and much handier to add such things when needed than to have them all the time when, mostly, they are not needed.

http://shop.lenovo.com/ca/en/tablets/lenovo/yoga-tablet-series/yoga-tab-3-pro-10/

Google Cardboard

The Google Cardboard viewer I purchased is one of the hundreds of cheap generic plastic VR headsets into which one slots a smartphone. It comes with a small, generic bluetooth controller that is supposed to allow you to control the phone; it works very poorly and intermittently on an Android device, and is virtually useless with an iPhone, though technically supported.

It is quite scary, at first, to place one's big, expensive phone into this flimsy plastic container and dangle it a few feet above the ground. However, the phone is gripped well and seems in no danger of falling. The device is not very comfortable to wear over a prolonged period, especially if you have a prominent nose. I find the rubbery eye mask to be hot and awkward after a little while, and the elasticated bands begin to be noticeable after a short time. With a big phone, it pulls forwards on your face. Virtual reality is still an uncomfortable place. You look really stupid wearing it.

The software needs a lot of work. The best I have managed so far with it is to look around in a few virtual worlds. With its dreadful controller, it is really hard to even click a hovering button, and the disconnect between the heads-up display and the crappy controls is huge. It might be fun to try this with a circa 1992 data glove, or the super-smart HTC Vive controllers, but that would rather negate the point of a wire-free VR device. This is no Vive or Oculus - not by a long chalk. It's about the same kind of experience as early 2000s VR, without the wires. The field of view is quite small, the resolution is not great, the movement is jerky and obvious, even on a fast iPhone. It was not notably worse on an old Nexus 4 or a Moto G, so I think this is more down to software than hardware.

Once you have exhausted the possibilities of the demo apps that Google provides, it is actually quite tricky to find decent apps for it. It's not that no apps are available. They are just not very good. It is hard to set them up, many just don't do anything, and virtually none are properly supported by the bluetooth controller. I would have expected the potential for augmented reality to be a selling point, as the camera is deliberately uncovered. Not so much. Most apps don't use it at all.

As an e-reader, it is hopeless. Though an iPhone 6+ has plenty of definition and is about as big a phone as the case will hold, all of that is lost when viewed through cheap plastic lenses, and the slight differences in viewing angle from each eye make it quite dizzying to read even large text (without the stereo). An Oculus Rift or HTC Vive does this sort of thing quite well, but at such a high price (in every way) it is an absurd idea to even try it. For all such things, the fact that you have to exclude the entire outside world in order to use them makes these deeply anti-social devices. Perhaps the Magic Leap will provide a better answer, as it has both high resolution and integration with the real world. It would be cool to mimic a bookshelf in AR, and it would not be a terrible way to read text. Some of the videos - https://www.youtube.com/channel/UC2E1x3l45YUO2eOhRv-A7lw - are amazing. However, it appears not to be too portable. Something that gave both the portability of the Google Cardboard box and the power of a Magic Leap might be well worth having. There are plenty of suitable desktop variants for such devices.

Overall, this particular device is a badly conceived toy: it is difficult to use, limited, uncomfortable and flaky. Fun to play with for a few minutes, but not good for anything.

https://www.amazon.ca/Virtual-Reality-Headset-Controller-Smartphones/dp/B019NBVJII/

Pebble Time Steel

Most mainstream smart watches (the Apples and Androids) have glowing screens with battery lives of a day or two at best, and almost all the rest seem to be focused on golf players, runners, or people that want really basic email and phone alerts. The Pebble, however, hits a sweet spot: a claimed battery life of a week or more, a big app ecosystem, an always-on e-paper (not e-ink) LCD screen, and a few sensors to make it useful. I started with an original monochrome Pebble (a fabulous bargain at $79 - less than many fitness trackers alone, and a really good watch) but, after a couple of weeks, passed it on, after realizing that I am too rough on my watches for a plastic screen. So now I have what was, at the time of purchase, the top-of-the-line, colour, voice-recognizing, Gorilla-Glassed Pebble Time Steel ($200). This is on the verge of being superseded by the Pebble Time 2, with a larger viewing screen while keeping the same size for the watch, which is a good thing: the usable screen (it has a big bezel) is too small. I can at least read the time without glasses, though some of the apps use text and images that are too small to read unassisted. I have never come close to the claimed battery life of 10 days: mostly I manage 6-7 days, though I have not run it into the ground so may be misjudging its staying power. However, that's fine: I just have to take it off for a couple of hours once a week to charge it with its magnetic charger. It does warn you about a day ahead of when it is going to die, so there's usually time to charge it before it goes completely.

Instead of the touch screen favoured by most smartwatches, the Pebble has just four buttons, which you can use to control almost anything. They are hopeless for data entry (the calculator apps are all but useless) but they are fine and intuitive for getting around the various menus. The Time Steel has voice recognition, which is used in a few apps, but I don't find it at all accurate and it is weird to talk to a watch, repeating things that it fails to understand over and over. I'm guessing it uses a cloud service to perform the recognition itself: a bit bothersome for privacy. I really don't get the Star Trek notion of talking to your computer. Sure, it's fine for quick look-ups but I don't think the evangelists for such things ever looked at how real people actually behave. It's bad enough having to add things to my shopping list by voice in a supermarket, make appointments on a bus, or take notes in a cafe. Can you imagine dictating a report or a paper in a crowded office or Starbucks? Especially when more than a few people are doing it? Even in the family home it is plain weird to hear someone talking into their computer in the next room and, for many things, confidentiality and privacy are serious issues. So, for a vast number of use cases, the only way to use voice recognition is in a soundproof room. Surprisingly, perhaps, computers that you talk to are considerably more anti-social than those that you write to.

Like most such devices, the Pebble relies on a smartphone for much of its functionality, pulling apps and data (such as GPS coordinates, weather, news, etc) from the phone as needed, keeping only the more recently used ones in its cache. Some apps require separate companion apps on the phone (as opposed to just the Pebble app itself) but, as most eat more battery, I have tried to avoid those where I can. I started by installing a lot of apps but soon realized that most were entirely pointless. On the whole, it is easier to reach into your pocket and use the phone app than to navigate through menus on the watch to find the reduced-functionality one that you need. There are a few that I use all the time: the watch itself, notifications, weather, a shopping list, the alarms, a sailing tracker.

I installed O365 and Evernote note-reading apps, but never used them after testing that they worked: a watch is basically a terrible way to read notes. I quite liked being able to control presentations from my watch until the first time I tried to use it for a keynote, when, despite having worked fine in testing beforehand, it didn't work at all. A high-stress public talk is a bad place to find that out.

As an e-reader, not unexpectedly, the Pebble leaves a lot to be desired. There are various apps that will read text, RSS feeds, etc, by scrolling at a fixed rate, as well as those that let you painfully scroll through text files, but one in particular intrigued me: AFR (a faster reader). This displays words one at a time at a configurable speed, laid out in a way that keeps your focus on a central coloured letter (an implementation of RSVP - rapid serial visual presentation). It's a strange and disconcerting experience. A standard e-book is bad enough for reducing the contextual information needed to remember what you have read, but AFR decontextualizes every single word, flowing like a video through the text, one word at a time. It cannot read PDFs or DRM'd books (though you can copy your iPhone's clipboard into it), and it can be a bit complicated to get text into it, despite useful sharing options in iOS. It also requires a companion app in iOS. It chokes on even mildly complex formatting, and the backlight turns off as usual while it is running so, unless you are in brightly lit conditions, it is not easy to read. There is no control over the text size which, on my watch, is difficult to read even with normal reading glasses. It is also pretty buggy, prone to freezing, and, worst of all, the speed of text on the Pebble does not match that on the iPhone (in fact, it often reads as nonsense, skipping words rather than just slowing down, and showing them at a rate of about one per second, which is hopeless). However, as a concept, I think it is quite neat and well worth exploring further.
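
For anyone curious about the mechanics, the core RSVP idea is simple enough to sketch: show one word at a time at a fixed rate, aligned on a 'pivot' letter slightly left of the word's centre so the eye never has to move. This is only my own illustration of the general technique, not AFR's actual code:

    # Minimal RSVP sketch: one word at a time, aligned on a pivot letter near
    # (slightly left of) the word's centre. An illustration only, not AFR's code.
    import sys, time

    def pivot_index(word):
        # Simple heuristic: pivot about a third of the way into the word.
        return (len(word) - 1) // 3 if len(word) > 1 else 0

    def rsvp(text, wpm=300, column=20):
        delay = 60.0 / wpm
        for word in text.split():
            line = " " * (column - pivot_index(word)) + word
            sys.stdout.write("\r" + line.ljust(column + 30))  # overwrite in place
            sys.stdout.flush()
            time.sleep(delay)
        print()

    rsvp("A standard e-book is bad enough, but RSVP decontextualizes every word.")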

As a watch, the Pebble Time Steel is great. I love the backlight. I love that it is waterproof. I can live with charging it once a week. I love the alarms. However, like all computers, it crashes occasionally. Not every week, but maybe every month. Relying on the alarm can, therefore, be a bit risky if it really matters. I've experienced one major crash in the past 3 months, which required a device reset. This was annoying, not just for the hassle of having to figure out the weird button combinations needed for the reset, but also because it lost all my settings (including the alarms, as I only realized quite late the next morning). It is also annoying that the app has to be running in the background on the iPhone, and the iPhone doesn't let you force an app to remain running. The first couple of times it stopped were quite confusing, because the watch told me it couldn't communicate with the phone, even though I could see it was connected via bluetooth. I just needed to start the app again. I suspect the Android app might be better in that regard though, unfortunately, you cannot pair the watch with more than one device at a time so I've not tried that. Though it uses a very lightweight bluetooth connection for most of its activities, it does eat more of my phone battery than I'd like: perhaps 10-20% of its capacity per day. This is a nuisance, but my phone still lasts more than a day on the whole (well, at least it did before Pokémon GO) so it is not too bad.

Overall, I like the Pebble Time Steel. It's not going to be my e-reader of choice, but it's a darn good watch. It's neither ugly nor attractive - the strap is actually very tasteful - it's comfortable, it tells the time, it withstands bangs and dunkings, it wakes me up, it provides useful information, it doesn't get in the way, it doesn't need constant care, it's always there. But I doubt that I will keep it for very long. In the past, watches used to last for 10 years or more (my Swiss watch is about 20 years old) but I would be quite surprised if this lasts me even a couple of years. Maybe less - its reliance on a proprietary app and phone is quite worrisome and could fail at any time. Such is the way of modern tech. We do not own the things we buy any more.

https://www.pebble.com/buy-pebble-time-steel-smartwatch

iPad Pro

At first sight the iPad Pro seems like an odd idea. It’s too uncomfortable to hold in one hand, too big to fit in an iPad pocket, needs a (non-included) pen to operate its smartest features, and is really expensive, even compared with a quite high-spec laptop. But the ‘Pro’ nomenclature reveals some of what Apple is aiming for: this is not meant as a device for the masses but is instead for those seeking serious productivity from their device, a different way of engaging with a tablet, beyond media consumption, game playing and simple interaction. Indeed, Apple has gone so far as to claim it can be a laptop replacement, if you add a keyboard.

The device feels huge at first, albeit that it is slim and beautiful to hold. After a few hours, though, using a standard iPad feels cramped and small, and the Pro feels quite normal. You’d not want to hold this in one hand for any length of time, of course. It doesn’t actually weigh noticeably more than the first-generation iPad, but the leverage it exerts on your hand can be considerably greater, unless you get the balance exactly right. That’s actually not too hard, though it does call for a change in approach. I tend to use the device on my lap, or propped up in bed, or in its keyboard case resting on a chair or table. I can walk around with it when I need to, and that’s a world away from walking around with a laptop, and it is vastly easier to share with other people around you. I have found that its extraordinarily good video display makes it a far more interesting social device than smaller tablets when sitting around with other people. I am very used to passing a tablet around for my wife, family and friends to look at but, with the iPad Pro, we can all look at the same thing together, sitting on a sofa or at a table. It’s surprisingly superior to the same experience using a laptop too. Perhaps it is the lack of other intrusions, or the cleanness of just a screen and nothing else to interfere with the experience. The iPad largely disappears, leaving only the content it displays. Those that hold it tend to be reluctant to let go again. The battery life is good: 9 or 10 hours seems about normal.

The Pro is great for reading news sites, letting you see large parts of individual articles and links to other articles on a page. It is much more convenient than a newspaper, but with a similar capability to show not just what you are reading but other stories around it. Oddly, though, it is not as satisfying as I had thought it might be for normal e-books. In some ways it exacerbates the problem of there being a lot of undifferentiated, non-typeset text, emphasizing the fact that there has been no human involvement in laying it out on screen. However, for books with many diagrams and images, it is a lot better than smaller devices and, for those with particularly bad eyesight, it might be wonderful. For simply reading a long-form linear text, however, the Kindle Voyage wins hands down.

One big hope that I had for this was, like the DPT-S1, to be able to comfortably read - and annotate - PDF files originally designed for print. This has actually worked out pretty well - far, far better than the rotten DPT-S1. Reading is easy on the eye, immune to most light levels apart from really bright sun or spotlights, and the experience is mostly smooth and slick. The size of the screen means that there is even enough space for apps like Goodreader to show previews of surrounding pages, which helps a lot in getting a sense of where you are in a text (a perennial problem with most existing e-readers), and largely eliminates one of the major cognitive hurdles in reading e-texts, that there is no consistent visual pattern to help you remember what you have read. When I have a lot of work to mark, this is great. It is much lighter and easier to carry than (say) a paper thesis or dissertation, and almost as easy to mark up, though nothing like as light as the DPT-S1. One notable difference between paper and all the software I have used, however, is that it is much harder to flick between multiple pages, to hold two or three open at once from different parts of the manuscript to compare and connect them: it would be so useful to find a good way to replicate this, especially for writing my own books and papers. Though bookmarks help, it is nothing like as fluid or easy as holding a manuscript with fingers on each passage that interests me. I suspect that a desk-sized tablet, with the same retina resolution and an Apple Pencil, might solve this problem, with the right software. Though quite a few 'tablets' with such dimensions do exist, none of those I have seen comes close to this resolution, including the ludicrously priced Microsoft Surface Hub. We might have a generation or two to go before that becomes a reality and, by then, heads-up displays will offer a much more cost-effective and flexible alternative. I think perhaps that the main problem is the metaphor of a screen as being a window into virtual space. Windows frame things that should not be framed.

There are numerous annotation-friendly apps for PDFs available, with different strengths and weaknesses. It bugs me, though, that every app maintains its own storage so you cannot seamlessly flit between different apps to take advantage of their different features. This is one of the things that makes iOS secure, but it makes it very annoying to manage documents, even though cloud storage services can reduce that pain a little. Essentially, though, you have to copy documents between one app and the next, rather than simply working on them with whatever you want to use. There is no sense of connection and continuity.  I guess, if I were wise, I would simply use a single moderately good app like Goodreader, with its own storage and copious links to cloud storage for shifting documents around, and leave it at that, but that's not the kind of guy I am, and it doesn't handle all document workflows well, especially with regard to conversion between formats. I want to keep chasing the adjacent possible.

I am far from being a visual artist, but there are many occasions when it would be useful to draw things, create diagrams, design 3D objects for printing or VR, sketch ideas, mock up interfaces, sketch over images, and so on. In dedicated apps, with the Apple Pencil, the iPad mostly feels much like working on paper, with all the additional tools, views, perspectives, layers and wonderful extras that the computer-based environment provides. Very different too from working with a graphics tablet that, because it is separate from the created object, has always felt alienating and awkward to me. There are also plenty of tools that let you annotate PDFs and images. But I would like to be able to do this kind of thing and seamlessly incorporate it into anything that I am doing - to sketch in a slideshow, or word processor, a book, a paper, or whatever, wherever I am. The notion that documents are of a particular type - not just text, image, diagram, etc but specific formats of such things (Word, Kindle, PNG, etc) - is deep in the genes of even the smartest tablets. We seem trapped in a 1980s timewarp on this. It is even true of tools that ought to support such flexibility, like word processors, perhaps because of Microsoft's stranglehold on word processing paradigms, which has kept us in a typewriter mindset for decades. Even Apple's own otherwise great Pages falls victim to this. At best, you can embed an image or use a separate app within the main application.

After Apple's hype, I wondered whether it might also work as a laptop replacement, and so I got a backlit Logitech keyboard to test this theory out. I have tried this experiment with various different tablets (Apple, Android, and Windows) over the past 6 years or so, but all my efforts so far have been less than wonderful. Fine for a day or two on the road, but not at all close to the laptop experience. The iPad Pro is better, but still not ideal. I'm typing this on the Logitech keyboard and finding it to be at least as comfortable as typing on my MacBook Pro. The screen is incredibly bright and clear. With the enormous Logitech case, though, it is very heavy indeed, perhaps heavier than my MacBook Pro and far less well balanced. The case has only one position - comfortable-ish, but not great. It has a nice set of iPad optimized function keys, though, and works with most Mac keyboard shortcuts, e.g. app switching. I also like that it just works, sipping power from the iPad itself. It is a vast improvement on smaller machines - especially with the dual-window view - and might be OK for a few days at a pinch if I were not doing anything apart from using the Internet, doing a bit of writing, and maybe a bit of presenting. There probably are some people who could use it as a laptop all the time but, as a technologist and computing professional, it is nothing like close enough: the software is simply not sufficiently capable. It's passable for remote-controlling the MacBook Pro, but not a lot of fun. I'm enjoying the new iOS Scrivener app, which is very close to the desktop version in power and usability, but it lacks the tight integration with reference libraries of the desktop version, and I use such things a great deal.

The retina resolution makes a really great second screen that I can use while travelling or on my boat, using the terrific Duet app for a virtually lagless and seamless experience. I have done this with the smaller iPad Air 2 for some time, but it has always been just a little bit of extra real estate, not a seriously useful extra monitor: helpful for, say, viewing incoming email or writing brief notes, but not in the same league as a real second monitor. The iPad Pro can be used for real work - programming, marking, research, etc are a breeze with a big screen attached. Not quite as amazing as my 29” Apple monitor, but good enough for real gains in productivity. Though it has to be tethered to the MacBook Pro for this, it allows you to read papers etc from the much more flexible computer with at least some of the benefits of a tablet.

Another good surprise is the onscreen keyboard, which is not far off complete, with a row of numbers and a good range of punctuation available at all times, and a size that fits my hands well. This only applies to apps that have been optimized for the iPad Pro - there are still quite a few that make use of the more basic and less functional keyboard of the older iPad. In fact, there are even a few iPhone apps that run on the Pro, albeit not completely full-screen (even with double scaling), which looks entirely weird: they feel like clunky toys. But, when you get the full iPad experience, typing on the screen is far easier and friendlier than in previous models.

One irritation is that quite a few websites decide that I am using an iPad and therefore give me a mobile-optimized - i.e. less functional - view. With all that screen real estate it is silly to have a set of buttons etc that are made to work on a cellphone. Conversely, sites that are designed for desktop use can be fiddly, especially when they disable pinch-to-zoom. Google Books (which could be amazingly useful in this format) is a real pain - its tiny zoom buttons are hard to press and it overrides all the usual controls - especially zoom - that one would normally use to deal with that.

Using it in the sunshine is fine, from a screen perspective, but risky: it gets very very hot very very quickly. The first time I realized it was happening I had to rush inside and train a fan on it because the battery was notably in peril. A case is essential for outdoor summer use. It is not great in the rain either.

The Apple Pencil is a real surprise. I was in two minds whether to get it at all. Seriously, $115 for a pencil that doesn’t even write, and that only works with one device (now two)? I could get two cheap tablets for the price of this thin white stick. To make things worse, though it is quite pretty in its simplicity, this is not a well designed tool. The magnetic lid for the lightning connector is guaranteed to be lost, as is the connector that allows it to be charged using a standard lightning cable. I have no idea where I put the spare rubber tip for when the current one wears down. Apple used to be better than this - I am quite sure Steve Jobs would not have allowed this one out of the door. There’s no way to keep it with your iPad, unless you want to 3D print something to hold it or use sticky tape, and neither the Apple nor the Logitech keyboard cases have any means to attach it, which seems bizarre. It is incredibly easy to lose. When just putting it down, the fact that it is magnetic means it will stick to the side of the iPad, but not strongly enough to hold on when it is tilted. It's not terrible on a flat surface because it is counterweighted a little so that it doesn't roll too much - a nice design touch. On a white tablecloth, though, it is easily missed. It is just difficult enough to find that it becomes an active decision to use it. It would certainly cure the problems of, say, non-zooming screens in web browsers but, unless it is directly to hand, it is too much hassle to go find it when such needs occur. The fast charge option, though, which gives about 30 minutes use on a 15 second charge, is quite smart, and I love the way it disables the touch sensitive screen when it gets close to it, so you can very comfortably rest your hand and trust that you will not suddenly start drawing with knuckles or other parts of your anatomy.

In most apps (not all), lines appear instantly, with no perceptible lag, a great deal of accuracy (Apple make the point that the level of control is tens of times more accurate than even the best passive pens) and a reassuring amount of tactile feedback. I have used other styluses that are best of breed - Pencil (the non-Apple one), for instance - but they don’t come close to replicating the experience of drawing on paper. The Apple Pencil does. Drawing with hard rubber on glass is certainly not at all the same thing as writing on paper with pen, pencil, charcoal or brush, but it’s a lot better than a hard-tipped or blobby rubber-tipped stylus, and it is easy to get used to it, especially thanks to the near-instant feedback. I can create extremely small details, with a similar amount of precision to what I would get with a ballpoint pen, though maybe a little less than with a proper drawing pen like a Rapidograph or even a fine rollerball. However, that tends to be no big problem because, in most apps, you can zoom in to any level of detail you like. I was quite surprised to find it really easy and natural to, for instance, draw lines using a real ruler, which (on older tablets) is possible but totally weird, and prone to accidental artefacts. You can even trace or draw around things, which is especially neat for 3D design. There is, though, still a very slight perceptible distance between the drawing surface and the stylus. The glass is extremely thin indeed - a hair’s breadth - but it is still there, and it separates you from the page. It's like the difference between playing a guitar and playing a piano: the feedback is between hand and brain rather than hand and medium. It's not quite direct.

I am enjoying the Apple Pencil more and more. Its precision turns out to be extremely useful at times, allowing manipulations and selection of small objects with ease, and I enjoy writing and sketching with it. It just disappears (sadly, literally sometimes) and it changes the nature of the interaction with the tablet in some very good ways.

Overall, I was not expecting wonders from the iPad Pro - at least compared with the Air 2 - and was totally in sympathy with Steve Jobs's edict to avoid styluses on such things, but I have been very pleasantly surprised. The size of the iPad Pro makes a huge difference for reading (though seldom of e-books) and working in general, the Pencil is really effective for all its design flaws, and this, after my Macbook Pro and iPhone, is one of my favourite and most-used devices.

http://www.apple.com/ca/shop/buy-ipad/ipad-pro

Adaptive Learners, Not Adaptive Learning

elearnspace (George Siemens) - July 20, 2016 - 13:00

Some variation of adaptive or personalized learning is rumoured to “disrupt” education in the near future. Adaptive courseware providers have received extensive funding and this emerging marketplace has been referred to as the “holy grail” of education (Jose Ferreira at an EdTech Innovation conference that I hosted in Calgary in 2013). The prospects are tantalizing: each student receiving personal guidance (from software) about what she should learn next and support provided (by the teacher) when warranted. Students, in theory, will learn more effectively and at a pace that matches their knowledge needs, ensuring that everyone masters the main concepts.

The software “learns” from the students and adapts the content to each student. End result? Better learning gains, less time spent on irrelevant content, less time spent on reviewing content that the student already knows, reduced costs, tutor support when needed, and so on. These are important benefits in being able to teach to the back row. While early results are somewhat muted (pdf), universities, foundations, and startups are diving in eagerly to grow the potential of new adaptive/personalized learning approaches.

Today’s technological version of adaptive learning is at least partly an instantiation of Keller’s Personalized System of Instruction. Like the Keller Plan, a weakness of today’s adaptive learning software is the heavy emphasis on content and curriculum. Through ongoing evaluation of learner knowledge levels, the software presents next step or adjacent knowledge that the learner should learn.

Content is the least stable and least valuable part of education. Reports continue to emphasize the automated future of work (pdf). The skills needed by 2020 are process attributes and not product skills. Process attributes involve being able to work with others, think creatively, self-regulate, set goals, and solve complex challenges. Product skills, in contrast, involve the ability to perform a technical skill or carry out routine tasks (anything routine is at risk of automation).

This is where adaptive learning fails today: the future of work is about process attributes whereas the focus of adaptive learning is on product skills and low-level memorizable knowledge. I’ll take it a step further: today’s adaptive software robs learners of the development of the key attributes needed for continual learning – metacognition, goal setting, and self-regulation – because it makes those decisions on behalf of the learner.

Here I’ll turn to a concept that my colleague Dragan Gasevic often emphasizes (we are currently writing a paper on this, right Dragan?!): What we need to do today is create adaptive learners rather than adaptive learning. Our software should develop those attributes of learners that are required to function with ambiguity and complexity. The future of work and life requires creativity and innovation, coupled with integrative thinking and an ability to function in a state of continual flux.

Basically, we have to shift education from focusing mainly on the acquisition of knowledge (the central underpinning of most adaptive learning software today) to the development of learner states of being (affect, emotion, self-regulation, goal setting, and so on). Adaptive learners are central to the future of work and society, whereas adaptive learning is more an attempt to make more efficient a system of learning that is no longer needed.

Little monsters and big waves

Jon Dron's blog - July 16, 2016 - 09:51

Some amazing stories have been emerging lately about Pokémon GO, from people wandering through live broadcasts in search of monsters, to lurings of mugging victims, to discoveries of dead bodies, to monsters in art galleries and museums, to people throwing phones to try to capture Pokémons, to it overtaking Facebook in engagement (by a mile), to cafes going from empty to full in a day thanks to one little monster, to people entering closed zoo enclosures and multiple other  dangerous behaviours (including falling off a cliff),  to uses of Pokémon to raise money for charity, to applause for its mental and physical health benefits, to the saving of 27 (real) animals, to religious edicts to avoid it from more than one religion, to cheating boyfriends being found out by following Pokémon GO tracks.

And so on.

Of all of them, my current favourite is the story of the curators of Auschwitz having to ask people not to play the game within its bounds. It's kind of poetic: people are finding fictional monsters and playing games with them in a memorial that is there, more than anything, to remind us of real monsters. We shall soon see a lot more and a lot wilder clashes between reality and augmented reality, and a lot more unexpected consequences, some great, some not. Lives will be lost, lives will be changed. There will be life affirming acts, there will be absurdities, there will be great joy, there will be great sadness. As business models emerge, from buttons to sponsorship to advertising to trading to training, there will be a lot of money being made in a vast, almost instant ecosystem. Above all, there will be many surprises. So many adjacent possibles are suddenly emerging.

AR (augmented reality) has been on the brink of this breakthrough moment for a decade or so. I did not guess that it would explode in less than a week when it finally happened, but here it is. Some might quibble about whether Pokémon GO is actually AR as such (it overlays rather than augments reality), but, if there were once a more precise definition of AR, there isn't any more. There are now countless millions that are inhabiting a digitally augmented physical space, very visibly sharing the same consensual hallucinations, and they are calling it AR. It's not that it's anything new. Not at all. It's the sheer scale of it.  The walls of the dam are broken and the flood has begun.

This is an incredibly exciting moment for anyone with the slightest interest in digital technologies or their effects on society. The fact that it is 'just' a game just makes it all the more remarkable. For some, this seems like just another passing fad: bigger than most, a bit more interesting, but just a fad. Perhaps so. I don't care. For me, it seems like we are witnessing a sudden, irreversible, and massive global shift in our perceptions of the nature of digital systems, of the ways that we can use them, and of what they mean in our lives. This is, with only a slight hint of hyperbole, about to change almost everything.

Aside: it's not VR, by the way

There has been a lot of hype of late around AR's geekier cousin, VR (virtual reality), notably relating to Oculus, HTC Vive, and Playstation VR, but I'm not much enthused. VR has moved only incrementally since the early 90s and the same problems we saw back then persist in almost exactly the same form now, just with more dots.  It's cool, but I don't find the experience is really that much more immersive than it was in the early 90s, once you get over the initial wowness of the far higher fidelity. There are a few big niches for it (hard core gaming, simulation, remote presence, etc), and that's great. But, for most of us, its impact will (in its current forms) not come close to that of PCs, smartphones, tablets, TVs or even games consoles. Something that cuts us off from the real world so completely, especially while it is so conspicuously physically engulfing our heads in big tech, cannot replace very much of what we currently do with computers, and only adds a little to what we can already do without it. Notwithstanding its great value in supporting shared immersive spaces, the new ways it gives us to play with others, and its great potential in games and education, it is not just asocial, it is antisocial. Great big tethered headsets (and even untethered low-res ones) are inherently isolating. We also have a long way to go towards finding a good way to move around in virtual spaces. This hasn't changed much for the better since the early 90s, despite much innovation. And that's not to mention the ludicrous amounts of computing power needed for it by today's standards: my son's HTC Vive requires a small power station to keep it going, and it blows hot air like a noisy fan heater. It is not helped by the relative difficulty of creating high fidelity interactive virtual environments, nor by vertigo issues. It's cool, it's fun, but this is still, with a few exceptions, geek territory. Its big moment will come, but not quite yet, and not as a separate technology: it will be just one of the features that comes for free with AR.

Bigger waves

AR, on the whole, is the opposite of isolating. You can still look into the eyes of others when you are in AR, and participate not just in the world around you, but in an enriched and more social version of it. A lot of the fun of Pokémon GO involves interacting with others, often strangers, and it involves real-world encounters, not avatars. More interestingly, AR is not just a standalone technology: as we start to use more integrated technologies like heads-up displays (HUDs) and projectors, it will eventually envelop VR too, as well as screen-based technologies like PCs, smartphones, TVs, e-readers, and tablets, as well as a fair number of standalone smart devices like the Amazon Echo (though the Internet of Things will integrate interestingly with it). It has been possible to replace screens with glasses for a long time (devices between $100 and $200 abound) but, till now, there has been little point apart from privacy, curiosity, and geek cred. They have offered less convenience than cellphones, and a lot of (literal and figurative) headaches. They are either tethered or have tiny battery lives, they are uncomfortable, they are fragile, they are awkward to use, high resolution versions cost a lot, most are as isolating as VR and, as long as they are a tiny niche product, perhaps most of all, there are some serious social obstacles to wearing HUDs in public. That is all about to change. They are about to become mainstream.

The fact that AR can be done right now with no more than a cellphone is cool and it has been for a few years, but it will get much cooler as the hardware for HUDs becomes better, more widespread and, most importantly, more people share the augmented space. The scale is what makes the Pokémon GO phenomenon so significant, even though it is currently mostly a cellphone and GO Plus thing. It matters because, apart from being really interesting in its own right, soon, enough people will want hardware to match, and that will make it worth going into serious mass production. At that point it gets really interesting, because lots of people will be wearing HUD AR devices.

Google's large-scale Glass experiment was getting there (and it's not over yet), but it was mostly viewed with mild curiosity and a lot of suspicion. Why would any normal person want to look like the Borg? What were the wearers doing with those very visible cameras? What were they hiding? Why bother? The tiny minority that wore them were outsiders, weirdos, geeks, a little creepy. But things have moved on: the use cases have suddenly become very compelling, enough (I think) to overcome the stigma. The potentially interesting Microsoft Hololens, the incredibly interesting Magic Leap, and the rest (Meta 1, Recon Jet, Moverio, etc, etc) that are queueing up on the sidelines are nearly here. Apparently, Pokémon GO with a Hololens might be quite special. Apple's rumoured foray into the field might be very interesting. Samsung's contact-lens camera system is still a twinkling in Samsung's eye, but it and many things even more amazing are coming soon. Further off, as nanotech develops and direct neural interfaces become available, the possibilities are (hopefully not literally) mind blowing.

What this all adds up to is that, as more of us start to use such devices, the computer as an object, even in its ubiquitous small smartphone or smartwatch form, will increasingly disappear. Tools like wearables and smart digital assistants have barely even arrived yet, but their end is palpably nigh. Why bother with a smart watch when you can project anything you wish on your wrist (or anywhere else, for that matter)? Why bother having to find a device when you are wearing any device you can imagine? Why take out a phone to look for Pokémon? Why look at a screen when you can wear a dozen of them, anywhere, any size, adopting any posture you like? It will be great for ergonomics. This is pretty disruptive: whole industries are going to shrink, perhaps even disappear.

The end of the computer

Futurologists and scifi authors once imagined a future filled with screens, computers, smartphones and visible tech. That's not how it will be at all. Sure, old technologies never die so these separate boxes won't disappear altogether, and there's still plenty of time left for innovation in such things, and vast profits still to be made in them as this revolution begins. There may be a decade or two of growth left for these endangered technologies. But the mainstream future of digital technologies is much more human, much more connected, much more social, much more embedded, and much less visible. The future is AR. The whirring big boxes and things with flashing lights that eat our space, our environment, our attention and our lives will, if they exist at all, be hidden in well-managed farms of servers, or in cupboards and walls. This will greatly reduce our environmental impact, the mountains of waste, the ugliness of our built spaces. I, for one, will be glad to see the disappearance of TV sets, of mountains of wires on my desk, of the stacks of tablets, cellphones, robots, PCs, and e-readers that litter my desktop, cupboards and basement. OK, I'm a bit geeky. But most of our homes and workplaces are shrines to screens and wiring. It's ugly, it's incredibly wasteful, it's inhibiting. Though smartness will be embedded everywhere, in our clothing, our furniture, our buildings, our food, the visible interface will appear on displays that play only in or on our heads, and in or on the heads of those around us, in one massive shared hyperreality, a blend of physical and virtual that we all participate in, perhaps sharing the same virtual space, perhaps a different one, perhaps one physical space, perhaps more. At the start, we will wear geeky goggles, visors and visible high tech, but this will just be an intermediate phase. Pretty soon they will start to look cool, as designers with less of a Star Trek mentality step in. Before long, they will be no more weird than ordinary glasses. Later, they will almost vanish. The end point is virtual invisibility, and virtual ubiquity.

AR at scale

Pokémon GO has barely scratched the surface of this adjacent possible, but it has given us our first tantalizing glimpses of the unimaginably vast realms of potential that emerge once enough people hook into the digitally augmented world and start doing things together in it. To take one of the most boringly familiar examples, will we still visit cinemas when we all have cinema-like fidelity in devices on or in our heads? Maybe. There's a great deal to be said for doing things together in a physical space, as Pokémon GO shows us with a vengeance. But, though we might be looking at the 'same' screen, in the same place, there will be no need to project it. Anywhere can become a cinema just as anywhere can be a home for a Pokémon. Anywhere can become an office. Any space can turn into what we want it to be. My office, as I type this, is my boat. This is cool, but I am isolated from my co-workers and students, channeling all communication with them through the confined boundaries of a screen. AR can remove those boundaries, if I wish. I could be sitting here with friends and colleagues, each in their own spaces or together, 'sitting' in the cockpit with me or bobbing on the water. I could be teaching, with students seeing what I see, following my every move, and vice versa. When my outboard motor needs fixing (it often does) I could see it with a schematic overlay, or receive direct instruction from a skilled mechanic: the opportunities for the service industry, from plumbing to university professoring, are huge. I could replay events where they happened, including historical events that I was not there to see, things that never happened, things that could happen in the future, what-if scenarios, things that are microscopically small, things that are unimaginably huge, and so on. This is a pretty old idea with many mature existing implementations (e.g. here, here, here and here). Till now they have been isolated phenomena, and most are a bit clunky. As this is accepted as the mainstream, it will cascade into everything. Forget rose-tinted spectacles: the world can be whatever I want it to become. In fact, this could be literally true, not just virtually: I could draw objects in the space they will eventually occupy (such virtual sculpture apps already exist for VR), then 3D print them. 

Just think of the possibilities for existing media. Right now I find it useful to work on multiple monitors because the boundaries of one screen are insufficient to keep everything where I need it at once. With AR, I can have dozens of them or (much more interestingly) forget the 'screen' metaphor altogether and work as fluidly as I like with text, video, audio and more, all the while as aware of the rest of my environment, and the people in it, as I wish. Computers, including cellphones, isolate: they draw us into them, draw our gaze away from the world around us. AR integrates with that world, and integrates us with it, enhancing both physical and virtual space, enhancing us. We are and have only ever been intelligent as a collective, our intelligence embedded in one another and in the technologies we share. Suddenly, so much more of that can be instantly available to us. This is seriously social technology, albeit that there will be some intriguing and messy interpersonal problems when each of us might be  engaged in a private virtual world while outwardly engaging in another. There are countless ways this could (and will) play out badly.

Or what about a really old technology? I now have hundreds of e-books that sit forgotten, imprisoned inside that little screen, viewable a page at a time or listed in chunks that fit the dimensions of the device. Bookshelves - constant reminders of what we have read and augmenters of our intellects - remain one of the major advantages of p-books, as does their physicality, which reveals context, not just text. With AR, I will be able to see my whole library (and other libraries and bookstores, if I wish), sort it instantly, filter it, seek ideas and phrases, flick through books as though they were physical objects, or view them as a scroll, or one large sheet of virtual paper, or countless other visualizations that massively surpass physical books as media that contribute to my understanding of the text. Forget large format books for images: they can be 20 metres tall if we want them to be. I'll be able to fling pages, passages, etc onto the wall or leave them hovering in the air, shuffle them, rearrange them, connect them. I'll be able to make them disappear all at once, and reappear in the same form when I need them again. The limits are those of the imagination, not the boundaries of physical space. We will no doubt start by skeuomorphically incorporating what we already know but, as the adjacent possibles unfold, there will be no end to the creative potential to go far, far beyond that. This is one of the most boring uses of AR I can think of, but it is still beyond magical.

We will, surprisingly soon, continuously inhabit multiple worlds - those of others, those others invent, those that are abstract, those that blend media, those that change what we perceive, those that describe it, those that explain it, those that enhance it, those we assemble or create for ourselves. We will see the world through one another's eyes, see into one another's imaginations, engage in multiple overlapping spaces that are part real, part illusion, and we will do so with others, collocated and remote, seamlessly, continuously. Our devices will decorate our walls, analyze our diets, check our health. Our devices won't forget things, will remember faces, birthdays, life events, connections. We may all have eidetic memories, if that is what we want. While cellphones make our lives more dangerous, these devices will make them safer, warning us when we are about to step into the path of an oncoming truck as we monitor our messages and news. As smartness is embedded in the objects around us, our HUDs will interact with them: no more lost shirts, no guessing the temperature of our roasts, no forgetting to turn off lights. We will gain new senses - seeing in the dark, even through walls, will become commonplace. We will, perhaps, sense small fluctuations in skin temperature to help us better understand what people are feeling. Those of us with visual impairment (most of us) will be able to zoom in, magnify, have text read to us, or delve deeper through QR codes or their successors. Much of what we need to know now will be unnecessary (though we will still enjoy discovering it, as much as we enjoy discovering monsters) but our ability to connect it will grow exponentially. We won't be taking devices out of our pockets to do that, nor sitting in front of brightly lit screens. 

We will very likely become very dependent on these ubiquitous, barely visible devices, these prostheses for the mind. We may rarely take them off. Not all of this will be good. Not by a mile. When technologies change us, as they tend to do, many of those changes tend to be negative. When they change us a lot, there will be a lot of negatives, lots of new problems they create as well as solve, lots of aggregations and integrations that will cause unforeseen woes. This video at vimeo.com/166807261 shows a nightmare vision of what this might be like, but it doesn't need to be a nightmare: we will need to learn to tame it, to control it, to use it wisely. Ad blockers will work in this space too.

https://vimeo.com/166807261

What comes next

AR has been in the offing for some time, but mainly as futuristic research in labs, half-baked experimental products like Google Glass, or 'hey wow' technologies like Layar, Aurasma, Google Translate, etc. Google, Facebook, Apple, Microsoft, Sony, Amazon, all the big players, as well as many thousands of startups, are already scrabbling frantically to get into this space, and to find ways to use what they already have to better effect. I suspect they are looking at the Pokémon GO phenomenon with a mix of awe, respect, and avarice (and, in Google's case, perhaps a hint of regret). Formerly niche products like Google Tango or Structure Sensor are going to find themselves a lot more in the spotlight as the value of being able to accurately map physical space around us becomes ever greater. Smarter ways of interacting, like this at www.youtube.com/watch?v=UA_HZVmmY84, will sprout like weeds.

https://www.youtube.com/watch?v=UA_HZVmmY84

People are going to pay much more attention to existing tools and wonder how they can become more social, more integrated, more fluid, less clunky. We are going to need standards: isolated apps are quite cool, but the big possibilities occur when we are able to mash them up, integrate them, allow them to share space with one another. It would be really useful if there were an equivalent of the World Wide Web for the augmented world: a means of addressing not just coordinates but surfaces, objects, products, trees, buildings, etc, that any application could hook into, that is distributed and open, not held by those that control the APIs. We need spatial and categorical hyperlinks between things that exist in physical and virtual space. I fear that, instead, we may see more of the evils of closed APIs controlled by organizations like Facebook, Google, Apple, Microsoft, Amazon, and their kin. Hopefully they will realise that they will get bigger benefits from expanding the ecosystem (I think Google might get this first) but there is a good chance that short-termist greed will get the upper hand instead. The web had virgin, non-commercial ground in which to flourish before the bad people got there. I am not sure that such a space exists any more, and that's sad. Perhaps HTML 6 will extend into physical space. That might work. Every space, every product, every plant, every animal, every person, addressable via a URL.
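
As a purely hypothetical sketch of the kind of open, addressable augmented world imagined above, here is what a 'spatial hyperlink' and a trivial resolver for it might look like in Python. Every name, URI scheme, field, and coordinate below is invented for illustration; no such standard currently exists.

```python
# Hypothetical sketch: a web-like address for a physical thing, plus a
# resolver that any AR application could query. Invented names throughout.
from dataclasses import dataclass, field

@dataclass
class SpatialLink:
    uri: str          # a stable, resolvable address for a physical thing
    lat: float        # anchor in physical space
    lon: float
    kind: str = "object"                      # "surface", "building", "tree", ...
    tags: list = field(default_factory=list)

# A toy registry; a genuinely open version would be distributed across
# many registries rather than held behind any one company's API.
REGISTRY = {
    "place://ca/edmonton/whyte-ave/mural-7":
        SpatialLink("place://ca/edmonton/whyte-ave/mural-7",
                    53.518, -113.497, kind="surface", tags=["art"]),
    "thing://trees/athabasca/quad-oak-1":
        SpatialLink("thing://trees/athabasca/quad-oak-1",
                    54.714, -113.304, kind="tree", tags=["campus"]),
}

def resolve(uri):
    # Dereference a spatial hyperlink, much as a browser dereferences a URL.
    return REGISTRY.get(uri)

print(resolve("thing://trees/athabasca/quad-oak-1"))
```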

There will be ever more innovations in battery and other power/power-saving technologies, display technologies and usability: the abysmal battery life of current devices, in particular, will soon be very irritating. There will likely be a lot of turf wars as different cloud services compete for user populations, different standards and APIs compete for apps, and different devices compete for customers. There will be many acquisitions. Privacy, already a major issue, will take a pounding, as new ways of invading it proliferate. What happens when Google sees all that you see? Measures your room with millimetre accuracy? Tracks every moment of your waking life? What happens when security services tap in? Or hackers? Or advertisers? There will be kickback and resistance, much of it justified. New forms of DRM will struggle to contain what needs to be free: ownership of digital objects will be hotly contested. New business models (personalized posters anyone? in situ personal assistants? digital objects for the home? mashup museums and galleries?) will enrage us, inform us, amuse us, enthrall us. Facebook, temporarily wrong-footed in its ill-considered efforts to promote Oculus, will come back with a vengeance and find countless new ways to exploit us (if you think it is bad now, imagine what it will be like when it tracks our real-world social networks). The owners of the maps and the mapped data will become rich: Niantic is right now sitting on a diamond as big as the Ritz. We must be prepared for new forms of commerce, new sources of income, new ways of learning, new ways of understanding, new ways of communicating, new notions of knowledge, new tools, new standards, new paradigms, new institutions, new major players, new forms of exploitation, new crimes, new intrusions, new dangers, new social problems we can so far barely dream of. It will certainly take years, not months, for all of this to happen, though it is worth remembering that network effects kick in fast: the Pokémon GO explosion took only a few days. It is coming, significant parts of it are already here, and we need to be preparing for it now. Though the seeds have been germinating for many years, they have germinated in relatively isolated pockets. This simple game has opened up the whole ecosystem.

Pokéducation

I guess, being an edtech blogger, I should say a bit more about the effects of Pokémon GO on education but that's mostly for another post, and much of it is implied in what I have written so far. There have been plenty of uses of AR in conventional education so far, and there will no doubt be thousands of ways that people use Pokémon GO in their teaching (some great adjacent possibles in locative, gamified learning), as well as ways to use the countless mutated purpose-built forms that will appear any moment now, and that will be fun, though not earth-shattering. I have, for instance, been struggling to find useful ways to use geocaching in my teaching (of computing etc) for over a decade, but it was always too complex to manage, given that my students are mostly pretty sparsely spread across the globe: basically, I don't have the resources to populate enough geocaches. The kind of mega-scale mapping that Niantic has successfully accomplished could now make this possible, if they open up the ecosystem. However, most uses of AR will, at first, simply extend the status quo, letting us do better what we have always done - things we only needed to do because of physics in the first place. The real disruption, the result of the fact that we can overcome physics, will take a while longer, and will depend on the ubiquity of more integrated, seamlessly networked forms of AR. When the environment is smart, the kind of intelligence we need to make use of it is quite different from most of what our educational systems are geared up to provide. When connection between the virtual and physical is ubiquitous, fluid and high fidelity, we don't need to limit ourselves to conventional boundaries of classes, courses, subjects and schools. We don't need to learn today what we will only use in 20 years' time. We can do it now. Networked computers made this possible. AR makes it inevitable. I will have more to say about this.

This is going to change things. Lots of things.

 

Doctor of Education: Athabasca University

elearnspace (George Siemens) - July 15, 2016 - 06:36

Athabasca University has the distinction of offering one of the first doctor of education programs, fully online, in North America. The program is cohort-based and accepts 12 students annually. I’ve been teaching in the doctorate program for several years (Advanced Research Methods as well as, occasionally, Teaching & Learning in DE) and currently supervise 8 (?!) doctoral students.

Applications for the fall 2017 start are now being accepted with a January 15, 2017 deadline. Just in case you’re looking to get your doctorate. It really is a top program. Terrific faculty and tremendous students.

Cocktails and educational research

Jon Dron's blog - June 28, 2016 - 15:27

A lot of progress has been made in medicine in recent years through the application of cocktails of drugs. Those used to combat AIDS are perhaps the most well-known, but there are many other applications of the technique to everything from lung cancer to Hodgkin's lymphoma. The logic is simple. Different drugs attack different vulnerabilities in the pathogens etc they seek to kill. Though evolution means that some bacteria, viruses or cancers are likely to be adapted to escape one attack, the more different attacks you make, the less likely it will be that any will survive.

Unfortunately, combinatorial complexity means this is not simply a question of throwing a bunch of the best drugs of each type together and gaining their benefits additively. I have recently been reading John H. Miller's 'A crude look at the whole: the science of complex systems in business, life and society' which is, so far, excellent, and which addresses this and many other problems in complexity science. Miller uses the nice analogy of fashion to help explain the problem: if you simply choose the most fashionable belt, the trendiest shoes, the latest greatest shirt, the snappiest hat, etc, the chances of walking out with the most fashionable outfit by combining them together are virtually zero. In fact, there's a very strong chance that you will wind up looking pretty awful. It is not easily susceptible to reductive science because the variables all affect one another deeply. If your shirt doesn't go with your shoes, it doesn't matter how good either is separately. The same is true of drugs. You can't simply pick those that are best on their own without understanding how they all work together. Not only may they not additively combine, they may often have highly negative effects, or may prevent one another being effective, or may behave differently in a different sequence, or in different relative concentrations. To make matters worse, side effects multiply as well as therapeutic benefits so, at the very least, you want to aim for the smallest number of compounds in the cocktail that you can get away with. Even were the effects of combining drugs positive, it would be premature to believe that it is the best possible solution unless you have actually tried them all. And therein lies the rub, because there are really a great many ways to combine them.

Miller and colleagues have been using the ideas behind simulated annealing to create faster, better ways to discover working cocktails of drugs. They started with 19 drugs which, a small bit of math shows, could be combined in 2 to the power of 19 different ways - about half a million possible combinations (not counting sequencing or relative strength issues). As only 20 such combinations could be tested each week, the chances of finding an effective combination, let alone the best one, were slim within any reasonable timeframe. Simplifying a bit, rather than attempting to cover the entire range of possibilities, their approach finds a local optimum within one locale by picking a point and iterating variations from there until the best combination is found for that patch of the fitness landscape. It then checks another locale and repeats the process, and iterates until they have covered a large enough portion of the fitness landscape to be confident of having found at least a good solution: they have at least several peaks to compare. This also lets them follow up on hunches and use educated guesses to speed up the search. It seems pretty effective, at least when compared with alternatives that attempt a theory-driven intentional design (too many non-independent variables), and it is certainly vastly superior to methodically trying every alternative, insofar as that would even be possible within acceptable timescales.
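
For the curious, here is a minimal sketch in Python of the general search strategy described above. It is my own toy illustration, not Miller's code or data: the fitness function is a made-up stand-in for the real (slow, expensive) lab assay, and the numbers are arbitrary. It explores binary 'cocktails' of the 19 drugs with small local moves that occasionally go downhill (the trick discussed in the next paragraph), restarting from several random locales and keeping the best peak found.

```python
import math
import random

N_DRUGS = 19  # 2**19 = 524,288 possible cocktails

def fitness(cocktail):
    # Stand-in for the real lab assay: a deterministic pseudo-random
    # landscape with a mild penalty for larger cocktails.
    rng = random.Random(hash(cocktail))
    return rng.random() - 0.05 * sum(cocktail)

def anneal_from(start, steps=200, temp=1.0, cooling=0.98):
    # Explore one locale: small local moves, accepting the occasional
    # downhill step so we don't get stuck on the nearest anthill.
    current = best = start
    for _ in range(steps):
        candidate = list(current)
        candidate[random.randrange(N_DRUGS)] ^= 1   # add or remove one drug
        candidate = tuple(candidate)
        delta = fitness(candidate) - fitness(current)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
        if fitness(current) > fitness(best):
            best = current
        temp *= cooling
    return best

def search(restarts=10):
    # Restart from several randomly chosen locales and compare the peaks.
    peaks = [anneal_from(tuple(random.randint(0, 1) for _ in range(N_DRUGS)))
             for _ in range(restarts)]
    return max(peaks, key=fitness)

best = search()
print('best cocktail:', best, 'fitness:', round(fitness(best), 3))
```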

The central trick is to deliberately go downhill on the fitness landscape, rather than following an uphill route of continuous improvement all the time, which may simply get you to the top of an anthill rather than the peak of Everest. Miller very effectively shows that this is the fundamental error committed by followers of the Six-Sigma approach to management, an iterative method of process improvement originally invented to reduce errors in the manufacturing process: it may work well in a manufacturing context with a small number of variables to play with in a fixed and well-known landscape, but it is much worse than useless when applied in a creative industry like, say, education, because the chances that we are climbing a mountain and not an anthill are slim to negligible. In fact, the same is true even in manufacturing: if you are just making something inherently weak as good as it can be, it is still weak. There are lessons here for those that work hard to make our educational systems work better. For instance, attempts to make examination processes more reliable are doomed to fail because it's exams that are the problem, not the processes used to run them. As I finish this while listening to a talk on learning analytics, I see dozens of such examples: most of the analytics tools described are designed to make the various parts of the educational machine work 'better', ie. (for the most part) to help ensure that students' behaviour complies with teachers' intent. Of course, the only reason such compliance was ever needed was for efficient use of teaching resources, not because it is good for learning. Anthills.

This way of thinking seems to me to have potentially interesting applications in educational research. We who work in the area are faced with an irreducibly large number of recombinable and mutually affective variables that make any ethical attempt to do experimental research on effectiveness (however we choose to measure that - so many anthills here) impossible. It doesn't stop a lot of people doing it, and telling us about p-values that prove their point in more or less scrupulous studies, but they are - not to put too fine a point on it - almost always completely pointless. At best, they might be telling us something useful about a single, non-replicable anthill, from which we might draw a lesson or two for our own context. But even a single omitted word in a lecture, a small change in inflection, let alone an impossibly vast range of design, contextual, historical and human factors, can have a substantial effect on learning outcomes and effectiveness for any given individual at any given time. We are always dealing with a lot more than 2 to the power of 19 possible mutually interacting combinations in real educational contexts. For even the simplest of research designs in a realistic educational context, the number of possible combinations of relevant variables is more likely closer to 2 to the power of 100 (in base 10 that's 1,267,650,600,228,229,401,496,703,205,376). To make matters worse, the effects we are looking for may sometimes not be apparent for decades (having recombined and interacted with countless others along the way) and, for anything beyond trivial reductive experiments that would tell us nothing really useful, such studies could seldom be done at a rate of more than a handful per semester, let alone 20 per week. This is a very good reason to do a lot more qualitative research, seeking meanings, connections, values and stories rather than trying to prove our approaches using experimental results. Education is more comparable to psychology than medicine and suffers the same central problem, that the general does not transfer to the specific, as well as a whole bunch of related problems that Smedslund recently coherently summarized. The article is paywalled, but Smedslund's abstract states his main points succinctly:

"The current empirical paradigm for psychological research is criticized because it ignores the irreversibility of psychological processes, the infinite number of influential factors, the pseudo-empirical nature of many hypotheses, and the methodological implications of social interactivity. An additional point is that the differences and correlations usually found are much too small to be useful in psychological practice and in daily life. Together, these criticisms imply that an objective, accumulative, empirical and theoretical science of psychology is an impossible project."

You could simply substitute 'education' for 'psychology' in this, and it would read the same. But it gets worse, because education is as much about technology and design as it is about states of mind and behaviour, so it is orders of magnitude more complex than psychology. The potential for invention of new ways of teaching and new states of learning is essentially infinite. Reductive science thus has a very limited role in educational research, at least as it has hitherto been done.

But what if we took the lessons of simulated annealing to heart? I recently bookmarked an approach to more reliable research suggested by the Christensen Institute that might provide a relevant methodology. The idea behind this is (again, simplifying a bit) to do the experimental stuff, then to sweep the normal results to one side and concentrate on the outliers, performing iterations of conjectures and experiments on an ever more diverse and precise range of samples until a richer, fuller picture results. Although it would be painstaking and longwinded, it is a good idea. But one cycle of this is a bit like a single iteration of Miller's simulated annealing approach, a means to reach the top of one peak in the fitness landscape, that may still be a low-lying peak. However if, having done that, we jumbled up the variables again and repeated it starting in a different place, we might stand a chance of climbing some higher anthills and, perhaps, over time we might even hit a mountain and begin to have something that looks like a true science of education, in which we might make some reasonable predictions that do not rely on vague generalizations. It would either take a terribly long time (which itself might preclude it because, by the time we had finished researching, the discipline will have moved somewhere else) or would hit some notable ethical boundaries (you can't deliberately mis-teach someone), but it seems more plausible than most existing techniques, if a reductive science of education is what we seek.

To be frank, I am not convinced it is worth the trouble. It seems to me that education is far closer as a discipline to art and design than it is to psychology, let alone to physics. Sure, there is a lot of important and useful stuff to be learned about how we learn: no doubt about that at all, and a simulated annealing approach might speed up that kind of research. Painters need to know what paints do too. But from there to prescribing how we should therefore teach spans a big chasm that reductive science cannot, in principle or practice, cross. This doesn't mean that we cannot know anything: it just means it's a different kind of knowledge than reductive science can provide. We are dealing with emergent phenomena in complex systems that are ontologically and epistemologically different from the parts of which they consist. So, yes, knowledge of the parts is valuable, but we can no more predict how best to teach or learn from those parts than we can predict the shape and function of the heart from knowledge of cellular organelles in its constituent cells. But knowledge of the cocktails that result - that might be useful.

 

 

Oh yes, that's why I left

Jon Dron's blog - June 24, 2016 - 18:13

England is a weird, sad, angry little country, where there is now unequivocal evidence that over half the population - mainly the older ones - believe that experts know nothing, and that foreigners (as well as millions of people born there with darker than average skins) are evil. England is a place filled with drunkenness and random violence, where it's not safe to pass a crowd of teenagers - let alone a crowd of football supporters - on a street corner, where you cannot hang Xmas decorations outside for fear of losing them, where your class still defines you forever, where whinging is a way of life, where kindness is viewed with suspicion, where barbed wire fences protect schools from outsiders (or vice versa - hard to fathom), where fuckin' is a punctuation mark to underline what follows, not an independent word. It's a nation filled with fierce and inhospitable people, as Horace once said, and it always has been. For all the people and places that I love and miss there, for all its very many good people and slowly vanishing places that are not at all like that, for all its dark and delicious humour, its eccentricity, its diversity, its cheeky irreverence, its feistiness, its relentless creativity, its excellent beer, its pork pies and its pickled onions, all of which I miss, that's why I was glad to leave it.

It saddens and maddens me to see the country of my birth killing or, at least, seriously maiming itself in such a spectacularly and wilfully ignorant way, taking the United Kingdom, and possibly even the EU itself with it, as well as causing injury to much of the world, including Canada. England is a country-sized suicide bomber. Hopefully this mob insanity will eventually be a catalyst for positive change, if not in England or Wales then at least elsewhere. Until today I opposed Scottish independence, because nationalism is almost uniformly awful and the last thing we need in the world is more separatism, but it is far better to be part of something big and expansive like the EU than an unwilling partner in something small in soul and mind like the UK. Maybe Ireland will unify and come together in Europe. Perhaps Gibraltar too. Maybe Europe, largely freed of the burden of supporting and catering for the small-minded needs of my cantankerous homeland, will rise to new heights. I hope so, but it's a crying shame that England won't be a part of that. 

I am proud, though, of my home city, Brighton, the place where English people who don't want to live in England live. About 70% of Brightonians voted to stay in the EU. Today I am proudly Brightonian, proudly European, but ashamed to be English. 

 

 

Digital Learning Research Network Conference 2016

elearnspace (George Siemens) - June 21, 2016 - 09:35

As part of the Digital Learning Research Network, we held our first conference at Stanford last year.

The conference focused on making sense of higher education. The discussions and presentations addressed many of the critical challenges faced by learners, educators, administrators, and others. The schedule and archive are available here.

This year, we are hosting the 2nd dLRN conference in downtown Fort Worth, October 21-22. The conference call for papers is now open. I’m interested in knowledge that exists in the gaps between domains. For dLRN15, we wanted to socialize/narrativize the scope of change that we face as a field.

The framework of changes can’t be understood through traditional research methods. The narrative builds the house. The research methods and approaches furnish it. Last year we started building the house. This year we are outfitting it through more traditional research methods. Please consider a submission (short, relatively pain free). Hope to see you in Fort Worth, in October!

We have updated our dLRN research website with the current projects and related partners… in case you’d like an overview of the type of research being conducted, which will be presented at #dLRN16. The eight projects we are working on are:

1. Collaborative Reflection Activities Using Conversational Agents
2. Onboarding and Outcomes
3. Mindset and Affect in Statistical Courses
4. Online Readiness Modules and Student Success
5. Personal Learning Graphs
6. Supporting Team-Based Learning in MOOCs
7. Utilizing Datasets to Collaboratively Create Interventions
8. Using Learning Analytics to Design Tools for Supporting Academic Success in Higher Education

Can The Sims Show Us That We’re Inherently Good or Evil?

Jon Dron's bookmarks - June 12, 2016 - 11:28

As it turns out, yes.

The good news is that we are intuitively altruistic. This doesn't necessarily mean we are born that way. This is probably learned behaviour that co-evolves with that of those around us. The hypothesis on which this research is based (with good grounding) is that we learn through repeated interactions to behave kindly to others. At least, by far the majority of us. A few jerks (as the researchers discovered) are not intuitively generous and everyone behaves selfishly or unkindly sometimes. This is mainly because there are such jerks around, though sometimes because the perceived rewards for being a jerk might outweigh the benefits. Indeed, in almost all moral decisions, we tend to weigh benefits against harm, and it is virtually impossible to do anything at all without at least some harm being caused in some way, so the nicest of us are jerks to at least some people. It might upset the person who gave you a beautiful scarf that you wrecked it while saving a drowning child, for instance. Donating to a charity might reduce the motivation of governments to intervene in humanitarian crises. Letting a car change lanes in front of you slows everyone in the queue behind you. Very many acts of kindness have costs to others. But, on the whole, we tend towards kindness, if only as an attitude. There is plentiful empirical evidence that this is true, some of which is referred to in the article. The researchers sought an explanation at a systemic, evolutionary level.

The researchers developed a simulation of a Prisoner's Dilemma scenario. Traditional variants on the game make use of rational agents that weigh up defection and cooperation over time in deciding whether or not to defect, using a variety of different rules (the most effective of which is usually the simplest, 'tit-for-tat'). Their twist was to allow agents to behave 'intuitively' under some circumstances. Some agents were intuitively selfish, some not. In predominantly multiple-round games, "the winning agents defaulted to cooperating but deliberated if the price was right and switched to betrayal if they found they were in a one-shot game." In predominantly one-shot games - not the norm in human societies - the always-cooperative agents died out completely. Selfish agents that deliberated did not do well in any scenario. As ever, ubiquitous selfish behaviour in a many-round game means that everyone loses, especially the selfish players. So, wary cooperation is a winning strategy when most other people are kind and, because it benefits everyone, it is a winning strategy for societies, favoured by evolution. The explanation, they suggest, is that:

"when your default is to betray, the benefits of deliberating—seeing a chance to cooperate—are uncertain, depending on what your partner does. With each partner questioning the other, and each partner factoring in the partner’s questioning of oneself, the suspicion compounds until there’s zero perceived benefit to deliberating. If your default is to cooperate, however, the benefits of deliberating—occasionally acting selfishly—accrue no matter what your partner does, and therefore deliberation makes more sense."

This accords with our natural inclinations. As Rand, one of the researchers, puts it:  “It feels good to be nice—unless the other person is a jerk. And then it feels good to be mean.” If there are no rewards for being a jerk under any circumstances, or the rewards for being kind are greater, then perhaps we can all learn to be a bit nicer.
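
To make the shape of that simulation a little more concrete, here is a toy version in Python. It is my own back-of-the-envelope illustration, not the researchers' model: it compares an unconditional cooperator, an unconditional defector, and a 'dual-process' agent that defaults to cooperation but pays a small cost to check the game type and switches to betrayal when the game turns out to be one-shot. The payoffs, cost, and crude reciprocity rule are all invented for illustration, but the qualitative pattern echoes the findings described above: in a mostly-repeated world the dual-process cooperator comes out ahead, while in a mostly one-shot world the unconditional cooperator collapses.

```python
import random

# Standard Prisoner's Dilemma payoffs: (my payoff, partner's payoff).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

ROUNDS = 10             # length of a repeated game
THINKING_COST = 0.05    # small price the dual-process agent pays to deliberate

def opening_move(strategy, repeated):
    if strategy == 'always cooperate':
        return 'C'
    if strategy == 'always defect':
        return 'D'
    # 'dual-process': default to cooperation, but check the game type and
    # switch to betrayal when the game turns out to be one-shot.
    return 'C' if repeated else 'D'

def game(me, other, repeated):
    a, b = opening_move(me, repeated), opening_move(other, repeated)
    rounds = ROUNDS if repeated else 1
    total = 0
    for _ in range(rounds):
        total += PAYOFF[(a, b)][0]
        a, b = b, a        # after round one, each simply copies the partner
    score = total / rounds # per-round payoff, so game types compare fairly
    if me == 'dual-process':
        score -= THINKING_COST
    return score

def tournament(repeat_prob, trials=20000):
    strategies = ['always cooperate', 'always defect', 'dual-process']
    print(f'proportion of repeated games: {repeat_prob}')
    for me in strategies:
        score = sum(game(me, random.choice(strategies),
                         random.random() < repeat_prob)
                    for _ in range(trials))
        print(f'  {me:17s} {score / trials:6.3f}')

tournament(0.9)   # a mostly-repeated world, like everyday social life
tournament(0.1)   # a world of one-shot encounters with strangers
```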

The really good news is that, because such behaviour is learned, selfish behaviour can be modified and intuitive responses can change. In experiments, the researchers have demonstrated that this can occur within less than half an hour, albeit in a very limited and artificial single context. The researchers suggest that, in situations that reward back-stabbing and ladder-climbing (the norm in corporate culture), all it should take is a little top-down intervention such as bonuses and recognition for helpful behaviour in order to set a cultural change in motion that will ultimately become self-sustaining. I'm not totally convinced by that - extrinsic reward does not make lessons stick and the learning is lost the moment the reward is taken away. However, because cooperation is inherently better for everyone than selfishness, perhaps those that are driven by such things might realize that those extrinsic rewards they crave are far better achieved through altruism than through selfishness as long as most people are acting that way most of the time, and this might be a way to help create such a culture.  Getting rid of divisive and counter-productive extrinsic motivation, such as performance-related pay, might be a better (or at least complementary) long-term approach.

Address of the bookmark: http://nautil.us/issue/37/currents/selfishness-is-learned

Announcing: aWEAR Conference: Wearables and Learning

elearnspace (George Siemens) - May 28, 2016 - 09:40

Over the past year, I’ve been whining about how wearable technologies will have a bigger impact on how we learn, communicate, and function as a society than mobile devices have had to date. Fitness trackers, smart clothing, VR, heart rate monitors, and other devices hold promising potential for helping us understand our learning and our health. They also hold potential for misuse (I don’t know the details behind this, but the connection between affective states and nudges for product purchases is troubling).

Over the past six months, we’ve been working on pulling together a conference to evaluate, highlight, explore, and engage with prominent trends in wearable technologies in the educational process. The aWEAR conference (http://awear.interlab.me) will be held Nov 14-15 at Stanford. The call for participation is now open. Short abstracts, 500 words, are due by July 31, 2016. We are soliciting conceptual, technological, research, and implementation papers. If you have questions or are interested in sponsoring or supporting the conference, please send me an email.

From the site:

The rapid development of mobile phones has contributed to increasingly personal engagement with our technology. Building on the success of mobile, wearables (watches, smart clothing, clinical-grade bands, fitness trackers, VR) are the next generation of technologies offering not only new communication opportunities, but more importantly, new ways to understand ourselves, our health, our learning, and personal and organizational knowledge development.

Wearables hold promise to greatly improve personal learning and the performance of teams and collaborative knowledge building through advanced data collection. For example, predictive models and learner profiles currently use log and clickstream data. Wearables capture a range of physiological and contextual data that can increase the sophistication of those models and improve learner self-awareness, regulation, and performance.

When combined with existing data sources such as social media and learning management systems, sophisticated awareness of individual and collaborative activity can be obtained. Wearables are developing quickly, including hardware such as fitness trackers, clothing, earbuds, and contact lenses, as well as software, notably for the integration of data sets and analysis.

The 2016 aWEAR conference is the first international wearables in learning and education conference. It will be held at Stanford University and provide researchers and attendees with an overview of how these tools are being developed, deployed, and researched. Attendees will have opportunities to engage with different wearable technologies, explore various data collection practices, and evaluate case studies where wearables have been deployed.

University Title Generator

Jon Dron's bookmarks - May 27, 2016 - 13:12

So this is how job titles at our university are thought up! I knew there had to be a rational explanation. Wonderful.

Press the button for an endless supply of uncannily familiar job titles. I've not yet found one that precisely matches one of ours, but they are often very close indeed.

Address of the bookmark: http://universitytitlegenerator.com/

What does it mean to be human in a digital age?

elearnspace (George Siemens) - May 22, 2016 - 16:53

It has been about 30 months now since I took on the role to lead the LINK Research Lab at UTA. (I have retained a cross appointment with Athabasca University and continue to teach and supervise doctoral students there).

It has taken a few years to get fully up and running – hardly surprising. I’ve heard explanations that a lab takes at least three years to move from creation to research identification to data collection to analysis to publication. This post summarizes some of our current research and other activities in the lab.

We, as a lab, have had a busy few years in terms of events. We’ve hosted numerous conferences and workshops and engaged in (too) many research talks and conference presentations. We’ve also grown significantly – from an early staff base of four people to an expected twenty-three within a few months. Most of these are doctoral or post-doctoral students, and we have a terrific core of administrative and support staff.

Finding our Identity

In trying to find our identity and focus our efforts, we’ve engaged in numerous activities including book clubs, writing retreats, innovation planning meetings, long slack/email exchanges, and a few testy conversations. We’ve brought in well over 20 established academics and passionate advocates as speakers to help us shape our mission/vision/goals. Members of our team have attended conferences globally, on topics as far-ranging as economics, psychology, neuroscience, data science, mindfulness, and education. We’ve engaged with state, national, and international agencies and corporations, as well as the leadership of grant funding agencies and major foundations. Overall, it has been an incredible period of learning, as well as of deepening existing relationships and building new ones. I love the intersections of knowledge domains. It’s where all the fun stuff happens.

As with many things in life, the most important things aren’t taught. In the past, I’ve owned businesses with an employee base of 100+ personnel. There are some lessons that I learned as a business owner that translate well into running a research lab, but with numerous caveats. Running a lab is an entrepreneurial activity. It’s the equivalent of creating a startup. The intent is to identify a key opportunity and then, driven by personal values and passion, meaningfully enact that opportunity through publications, grants, research projects, and collaborative networks. Success, rather than being measured in profits and VC funds, is measured by impact, with the proxies being research funds and artifacts (papers, presentations, conferences, workshops). I find it odd when I hear about the need for universities to be more entrepreneurial, as lab culture is essentially a startup environment.

Early stages of establishing a lab are chaotic. Who are we? What do we care about? How do we intersect with the university? With external partners? What are our values? What is the future that we are trying to create through research? Who can we partner with? It took us a long time to identify our key research areas and our over-arching research mandate. We settled on these four areas: new knowledge processes, success for all learners, the future of employment, and new knowledge institutions. While technologies are often touted as equalizers that change the existing power structure by giving everyone a voice, the reality is different. In our society today, a degree is needed to get a job. In the USA, degrees are prohibitively expensive for many learners, and the result is a type of poverty lock-in that essentially guarantees growing inequality. While it’s painful to think about, I expect a future of greater racial violence, public protests, and radicalized politicians and religious leaders and institutions. Essentially, the economic makeup of our society is one where higher education now prevents, rather than enables, improving one’s lot in life.

What does it mean to be human in a digital age?

Last year, we settled on a defining question: What does it mean to be human in a digital age? So much of the discussion in society today is founded in a fetish for talking about change. The narrative in media is one of “look what’s changing”. Rarely does that surface-level assessment go further and ask “what are we becoming?”. It’s clear that there is much that is changing today: technology, religious upheaval, radicalization, social/ethnic/gender tensions, climate, and emerging superpowers. It is an exciting and a terrifying time. The greatest generation created the most selfish generation. Public debt, failing social and health systems, and an eroding social fabric suggest humanity is entering a conflicted era of both turmoil and promise.

We can heal better than any other generation. We can also kill better, now from the comfort of a console. Globally, fewer people live in poverty than ever before. But income inequality is also approaching historic levels. This inequality will explode as automated technologies provide the wealthiest with a means to use capital without needing to pay for human labour. Technology is becoming a destroyer, not an enabler, of jobs. The consequences for society will be enormous, reflective of the “spine of the implicit social contract” being snapped by economic upheaval. The effects of uncertainty, anxiety, and fear are now being felt politically, as reasonably sane electorates turn to solutionism founded in desire rather than reality (the Middle East, Austria, and Trump in the US, to highlight only a few).

In this milieu of social, technological, and economic transitions, I’m interested in understanding our humanity and what we are becoming. It is more than technology alone. While I often rant about this through the perspective of educational technology, the challenge has a scope that requires thinking integratively and across boundaries. It’s impossible to explore intractable problems meaningfully through many of the traditional research approaches, where the emphasis is on reducing to variables and trying to identify interactions. Instead, a complex and connected view of both the problem space and the research space is required. Trying to explore phenomena through single-variable relationships is not going to be effective in planning.

Complex and connected explorations are often seen as too grandiose. As a result, it takes time for individuals to see the value of integrative, connected, and complex answers to problems that also possess those attributes. Too many researchers are accustomed to working only within their own labs or institutions. Coupled with the sound-bite narrative in media, sustained and nuanced exploration of complex social challenges seems almost unattainable. At LINK we’ve been actively trying to distribute research much as content and teaching have become distributed. For example, we have doctoral and post-doctoral students at Stanford, Columbia, and U of Edinburgh. Like teaching, learning, and living, knowledge is also networked, and the walls of research need the same thinning that is happening to many classrooms. Learning to think in networks is critical, and it takes time, especially for established academics and administrators. What I am most proud of with LINK is the progress we have made in modelling and enacting complex approaches to apprehending complex problems.

In the process of this work, we’ve had many successes, detailed below, but we’ve also encountered failures. I’m comfortable with that. Any attempt to innovate will produce failure. At LINK, we tried creating a grant-writing network with faculty identified by deans. That bombed. We’ve put in hundreds of hours writing grants, many of which were not funded. We were involved in a Texas state liberal arts consortium. That didn’t work so well. We’ve cancelled workshops because they didn’t find the resonance we were expecting. And we hosted conferences that didn’t work out so well financially. Each failure, though, produced valuable insight that sharpened our focus as a lab. While the first few years were primarily marked by exploration and expansion, we are now narrowing and focusing on those things that are most important to our central emphasis on understanding being human in a digital age.

Grants and Projects

It’s been hectic. And productive. And fun. It has required a growing team of exceptionally talented people – we’ll update bios and images on our site in the near future, but for now I want to emphasize the contributions of many members of LINK. It’s certainly not a solo task. Here’s what we’ve been doing:

1. Digital Learning Research Network. This $1.6m grant (Gates Foundation) best reflects my thinking on knowing at intersections and addressing complex problems through complex and nuanced solutions. Our goal here is to create research teams with R1 and state systems and to identify the most urgent research needs in helping under-represented students succeed.

2. Inspark Education. This $5.2m grant (Gates Foundation) involves multiple partners. LINK is researching the support system and adaptive feedback models required to help students become successful in studying science. The platform and model are the inspiration of the good people at Smart Sparrow (also the PIs), the BEST Network (medical education) in Australia, and the Habworlds project at ASU.

3. Intel Education. This grant ($120k annually) funds several post-doctoral students and evaluates the effectiveness of adaptive learning, as well as the research evidence that supports the algorithms that drive it.

4. Language in conflict. This project is being conducted with several universities in Israel and looks at how legacy conflict is reflected in current discourse. The goal is to create a model for discourse that enables boundary crossing. Currently, the pilot involves dialogue in highly contentious settings (Israeli and Palestinian students) and builds dialogue models intended to reduce the impact of legacy dialogue on current understanding. Sadly, I believe this work will have growing relevance in the US as race discourse continues to polarize rather than build shared spaces of understanding and respect.

5. Educational Discourse Research. This NSF grant ($254k) is conducted together with the University of Michigan. The project is concerned with evaluating the current state of discourse research, determining where this research is trending, and identifying what is needed to support this community.

6. Big Data: Collaborative Research. This NSF grant ($1.6m), together with CMU, evaluates how different architectures of knowledge spaces impact how individuals interact with one another and build knowledge. We are looking at spaces like Wikipedia, MOOCs, and Stack Overflow. Space drives knowledge production, even (or especially) when that space is digital.

7. aWEAR Project. This project will evaluate the use of wearables and technologies that collect physiological data as learners learn and live life. We’ll provide more information soon, in particular about the conference we are organizing at Stanford in November.

8. Predictive models for anticipating K-12 challenges. We are working with several school systems in Texas to share data and model challenges related to school violence, dropout, failure, and related emotional and social challenges. This project is still in its early stages, but holds promise for moving the mindset from addressing problems after they have occurred to creating positive, developmental, and supportive skillsets with learners and teachers.

9. A large initiative at the University of Texas Arlington is the formation of a new department called University Analytics (UA). This department is led by Prof Pete Smith and is a sister organization to LINK. UA will be the central data and learning analytics department at UTA. SIS, LMS, graduate attributes, employment, and related data will be analyzed by UA. The integration between UA and LINK is aimed at improving the practice-to-research-and-back-to-practice pipeline. Collaborations with SAS, Civitas, and other vendors are ongoing and will provide important research opportunities for LINK.

10. Personal Learning/Knowledge Graphs and Learner profiles. PLeG is about understanding learners and giving them control over their profiles and their learning history. We’ve made progress on this over the past year, but are still not at a point to release a “prototype” of PLeG for others to test/engage with.

11. Additional projects:
- InterLab – a distributed research lab, we’ll announce more about this in a few weeks.
- CIRTL – teaching in STEM disciplines
- Coh-Metrix – improving usability of the language analysis tool

Going forward

I know I’ve missed several projects, but at least the above list provides an overview of what we’ve been doing. Our focus going forward is very much on the social and affective attributes of being human in our technological age.

Human history is marked by periods of explosive growth in knowledge: Alexandria, the Academy, the printing press, the scientific method, the industrial revolution, knowledge classification systems, and so on. The rumoured robotics era seems to be at our doorstep. We are the last generation that will be smarter than our technology. Work will be very different in the future. The prospect of mass unemployment due to automation is real. Technology is changing faster than we can evolve individually and faster than we can re-organize socially. Our future lies not in our intelligence but in our being.

But.

Sometimes when I let myself get a bit optimistic, I’m encouraged by the prospect of what can become of humanity when our lives aren’t defined by work. Perhaps this generation of technology will have the interesting effect of making us more human. Perhaps the next explosion of innovation will be a return to art, culture, music. Perhaps a more compassionate, kinder, and peaceful human being will emerge. At minimum, what it means to be human in a digital age has not been set in stone. The stunning scope of change before us provides a rare window to remake what it means to be human. The only approach that I can envision that will help us to understand our humanness in a technological age is one that recognizes nuance, complexity, and connectedness and that attempts to match solution to problem based on the intractability of the phenomena before us.

The Godfather: Gardner Campbell

elearnspace (George Siemens) - May 18, 2016 - 13:52

Gardner Campbell looms large in educational technology. People who have met him in person know what I mean. He is brilliant. Compassionate. Passionate. And a rare visionary. He gives more than he takes in interactions with people. And he is years ahead of where technology deployment currently exists in classrooms and universities.

He is also a quiet innovator. Typically, his ideas are adopted by brash, attention-seeking, or self-serving individuals. Go behind the bravado and you’ll clearly see the Godfather: Gardner Campbell.

Gardner was an originator of what eventually became the DIY/edupunk movement. Unfortunately, his influence is rarely acknowledged.

He is also the vision behind personal domains for learners. I recall a presentation that Gardner did about 6 or 7 years ago where he talked about the idea of a cpanel for each student. Again, his vision has been appropriated by others with greater self-promotion instincts. Behind the scenes, however, you’ll see him as the intellectual originator.

Several years ago, when Gardner took on a new role at VCU, he was rightly applauded in a press release:

Gardner’s exceptional background in innovative teaching and learning strategies will ensure that the critical work of University College in preparing VCU students to succeed in their academic endeavors will continue and advance…Gardner has also been an acknowledged leader in the theory and practice of online teaching and education innovation in the digital age

And small wonder that VCU holds him in such high regard. Have a look at this talk:

Recently I heard some unsettling news about position changes at VCU relating to Gardner’s work. In true higher education fashion, very little information is forthcoming. If anyone has updates to share, anonymous comments are accepted on this post.

There are not many true innovators in our field. There are many who adopt the ideas of others and popularize them. But there are only a few genuinely original people doing important and critically consequential work: Ben Werdmuller, Audrey Watters, Stephen Downes, and Mike Caulfield. Gardner is part of this small group of true innovators. It is upsetting that the people who do the most important work – rather than those with the loudest and most self-promotional voices – are often not acknowledged. Does a system like VCU lack awareness of the depth and scope of change in the higher education sector? Is their appetite for change and innovation mainly a surface-level media narrative?

Leadership in universities has a responsibility to research and explore innovation. If we don’t do it, we lose the narrative to consulting and VC firms. If we don’t treat the university as an object of research, an increasingly unknown phenomenon that requires structured exploration, we essentially give up our ability to contribute to and control our fate. Instead of the best and brightest shaping our identity, the best marketers and most colourful personalities will shape it. We need to ensure that the true originators are recognized and promoted so that when narrow and short-sighted leaders make decisions, we can at least point them to those who are capable of lighting a path.

Thanks for your work and for being who you are, Gardner.

Former Facebook Workers: We Routinely Suppressed Conservative News

Jon Dron's bookmarks - May 12, 2016 - 08:14

The unsurprising fact that Facebook selectively suppresses and promotes different things has been getting a lot of press lately. I am not totally convinced yet that this particular claim of political bias itself is 100% credible: selectively chosen evidence that fits a clearly partisan narrative from aggrieved ex-employees should at least be viewed with caution, especially given the fact that it flies in the face of what we know about Facebook. Facebook is a deliberate maker of filter bubbles, echo chambers and narcissism amplifiers and it thrives on giving people what it thinks they want. It has little or no interest in the public good, however that may be perceived, unless that drives growth. It just wants to increase the number and persistence of eyes on its pages, period. Engagement is everything. Zuckerberg's one question that drives the whole business is "Does it make us grow?" So, it makes little sense that it should selectively ostracize a fair segment of its used/users.

This claim reminds me of those who attack the BBC for both its right-wing and its left-wing bias. There are probably those who critique it for being too centrist, too. Actually, in the news today, NewsThump, noting exactly that point, sums it up well. The parallels are interesting. The BBC is a deliberately created institution, backed by a government, with an aggressively neutral mission, so it is imperative that it does not show bias. Facebook has also become a de facto institution, likely with higher penetration than the BBC. In terms of direct users, it is twenty times the size of the entire UK population, albeit that BBC programs likely reach a similar number of people. But it has very little in the way of ethical checks and balances beyond legislation and popular opinion, is autocratically run, and is beholden to no one but its shareholders. Any good that it does (and, to be fair, it has been used for some good) is entirely down to the whims of its founder or incidental affordances. For the most part, what is good for Facebook is not good for its used/users. This is a very dangerous way to run an institution.

Whether or not this particular bias is accurately portrayed, it does remain highly problematic that what has become a significant source of news, opinion and value setting for about a sixth of the world's population is clearly susceptible to systematic bias, even if its political stance remains, at least in intent and for purely commercial reasons, somewhat neutral. For a site in such a position of power, though, almost every decision becomes a political decision. For instance, though I approve of its intent to ban gun sales on the site, it is hard not to see this as a politically relevant act, albeit one that is likely more driven by commercial/legal concerns than morality (it is quite happy to point you to a commercial gun seller instead). It is the same kind of thing as its reluctant concessions to support basic privacy control, or its banning of drug sales: though ignoring such issues might drive more engagement from some people, it would draw too much flak and ostracize too many people to make economic sense. It would thwart growth.

The fact that Facebook algorithmically removes 95% or more of potentially interesting content, and then uses humans to edit what else it shows, makes it far more of a publisher than a social networking system. People are farmed to provide stories, rather than paid to produce them, and everyone gets a different set of stories chosen to suit their perceived interests, but the effect is much the same. As it continues with its unrelenting and morally dubious efforts to suck in more people and keep them for more of the time, with ever more-refined and more 'personalized' (not personal) content, its editorial role will become ever greater. People will continue to use it because it is extremely good at doing what it is supposed to do: getting and keeping people engaged. The filtering is designed to get and keep more eyes on the page and the vast bulk of effort in the company is focused wholly and exclusively on better ways of doing that. If Facebook is the digital equivalent of a drug pusher (and, in many ways, it is) what it does to massage its feed is much the same as refining drugs to increase their effects and their addictive qualities. And, like actual drug pushing that follows the same principles, the human consequences matter far less than Facebook's profits. This is bad.

There's a simple solution: don't use Facebook. If you must be a Facebook user, for whatever reason, don't let it use you. Go in quickly and get out (log out, clear your cookies) right away, ideally using a different browser and even a different machine than the one you would normally use. Use it to tell people you care about where to find you, then leave. There are hundreds of millions of far better alternatives - small-scale vertical social media like the Landing, special purpose social networks like LinkedIn (which has its own issues but a less destructive agenda) or GitHub, less evil competitors like Google+, junctions and intermediaries like Pinterest or Twitter, or hundreds of millions of blogs or similar sites that retain loose connections and bottom-up organization. If people really matter to you, contact them directly, or connect through an intermediary that doesn't have a vested interest in farming you.

Address of the bookmark: http://gizmodo.com/former-facebook-workers-we-routinely-suppressed-conser-1775461006

The Future of Learning: Digital, Distributed, Data-Driven

elearnspace (George Siemens) - May 12, 2016 - 05:07

Yesterday as I was traveling (with free wifi from the good folks at Norwegian Air, I might add), I caught this tweet from Jim Groom:

@dkernohan @cogdog @mweller A worthwhile think piece for sure, almost up there with "China is My Analytics Co-Pilot"

— Jim Groom (@jimgroom) May 11, 2016

The comment was in response to my previous post where I detailed my interest in understanding how learning analytics were progressing in Chinese education. My first internal response was going to be something snarky and generally defensive. We all build in different ways and toward different visions. It was upsetting to have an area of research interest be ridiculed. Cause I’m a baby like that. But I am more interested in learning than in defending myself and my interests. And I’m always willing to listen to the critique and insight that smart people have to offer. This comment stayed with me as I finalized my talk in Trondheim.

What is our obligation as educators and as researchers to explore research interests and knowledge spaces? What is our obligation to pursue questions about unsavoury topics that we disagree with or even find unethical?

Years ago, I had a long chat with Gardner Campbell, one of the smartest people in the edtech space, about the role of data and analytics. We both felt that analytics has a significant downside, one that can strip human agency and mechanize the learning experience. Where we differed was in my willingness to engage with the dark side. I’ve had similar conversations with Stephen Downes about change in education.

My view is that change happens on multiple strands. Some change from the outside. Some change from the inside. Some try to redirect the movement of a system, others try to create a new system altogether. My accommodating, Canadian, middle-child sentiment drives my belief that I can contribute as a researcher by being involved in and helping to direct change. As such, I feel learning analytics can play a role in education and that, regardless of what the naysayers say, analytics will continue to grow in influence. I can contribute by not ignoring the data-centric aspects of education but engaging with them instead, and then attempting to influence analytics use and adoption so that it reflects the values that are important for learners and society.

Then, during the conference today, I heard numerous mentions of people like Ken Robinson and the narrative of creativity. Other speaking-circuit voices like Sugata Mitra were frequently raised as well. This led to reflection about how change happens and why many of the best ideas don’t gain traction and don’t make a systemic-level impact. We know the names: Vygotsky, Freire, Illich, Papert, and so on. We know the ideas. We know the vision of networks, of openness, of equity, and of a restructured system of learning that begins with learning and the learner rather than content and testing.

But why doesn’t the positive change happen?

The reason, I believe, is the lack of systems/network-level and integrative thinking that reflects the passion of advocates AND the reality of how systems and networks function. It’s not enough to stand and yell “creativity!” or “why don’t we have five hours of dance each week like we have five hours of math?”. Ideas that change things require an integrative awareness of systems, of multiple players, and of the motivations of different agents. It is also required that we are involved in the power-shaping networks that influence how education systems are structured, even when we don’t like all of the players in the network.

I’m worried that those who have the greatest passion for an equitable world and a just society are not involved in the conversations that are shaping the future of learning. I continue to hear about the great unbundling of education. My fear is the re-bundling where new power brokers enter the education system with a mandate of profit, not quality of life.

We must be integrative thinkers, integrative doers. I’m interested in working and thinking with people who share my values, even when we have different visions of how to realize those values.

Slides from my talk today are below:

Future of Learning: Digital, distributed, and data-driven from gsiemens

What’s So New about the New Atheists? – Virtual Canuck

Jon Dron's bookmarks - May 3, 2016 - 10:44

This is a nicely crafted, deeply humanist, gentle and thought-provoking sermon, given by Terry Anderson to members of his Unitarian church on atheistic thinking and values.

I have a lot of sympathy with the Unitarians. A church that does not expect belief in any gods or higher powers; that welcomes members with almost any theistic, deistic, agnostic or atheistic persuasions; that mostly eschews hierarchies and power structures; that focuses on the value of community; that is open to exploring the mysteries of being, wherever they may be found; that is doing good things for and with others, and that is promoting tolerance and understanding of all people and all ideas is OK with me. It's kind of a club for the soul (as in 'soul music', not as in 'immaterial soul'). As Terry observes, though, it does have some oddness at its heart. It's a bit like Christianity, without the Christ and without the mumbo jumbo, but it still retains some relics of its predominantly Christian ancestry. Terry focuses (amongst other things) on the word 'faith' as being a particularly problematic term in at least one of its meanings.

For all their manifest failings and the evils they are used to justify or permit, religious teachings can often provide a range of useful perspectives on the universe, as long as we don't take them any more seriously than fairy tales or poetry: which is to say, very seriously at some levels, not at all seriously in what they tell us of how to act, what to believe, or what they claim to have happened. And, while the whole 'god' idea is, at the very best, metaphorical, I think the metaphor has potential value. Whether or not you believe in, disbelieve in or dismiss deities as nonsense (to be clear, depending on the variant, I veer between disbelief and outright dismissal), it is extremely important to retain a notion of the sacred - a sense of wonder, humbleness, awe, majesty, etc. - and a strong reflective awareness of the deeply connected, meaning-filled lives of ourselves and others, and of our place in the universe. For similar reasons I am happy to use an equally fuzzy word like 'soul' for something lacking existential import, but meaningful as a placeholder for something that the word 'mind' fails to address. It can be helpful in reflection, discussion and meditation, as well as poetry. There are beautiful souls, tortured souls, and more: few other words will do. I also think that there is great importance in rituals and shared, examined values, in things that give us common grounding to explore the mysteries and wonders of what is involved in being a human being, living with other human beings, on a fragile and beautiful planet, itself a speck in a staggeringly vast cosmos. This sermon, then, offers useful insights into a way of quasi-religious thinking that does not rely on a nonsensical belief system but that still retains much of the value of religions. I'm not tempted to join the Unitarians (like Groucho, I am suspicious of any club that would accept me as a member), but I respect their beliefs (and lack of beliefs), and respect even more their acknowledgement of their own uncertainties and their willingness to explore them.

Address of the bookmark: http://virtualcanuck.ca/2016/04/27/whats-so-new-about-the-new-atheists/

Reflecting on Learning Analytics and SoLAR

elearnspace (George Siemens) - April 28, 2016 - 15:29

The Learning Analytics and Knowledge conference (LAK16) is happening this week in Edinburgh. Unfortunately, due to existing travel and other commitments, I am not in attendance.

I have great hope for the learning analytics field as one that will provide significant research for learning and help us move past naive quantitative and qualitative assessments of research and knowledge. I see LA as a bricolage of skills, techniques, and academic/practitioner domains. It is a multi-faceted approach to exploring learning, and one where anyone with a stake in the future of learning can find an amenable conversation and a place to research.

Since I am missing LAK16, and feeling nostalgic, I want to share my reflections on how LAK and the Society for Learning Analytics Research (SoLAR) became the influential agencies that they now are in learning research. Any movement has multiple voices and narratives, so my account here is narrow at best. I am candid in some of my comments below, detailing a few failed relationships and initiatives. If anyone reading this feels I have not been fair, please comment. Alternatively, if you have views to share that broaden my attempt to capture this particular history, please add them below.

How we got started
On March 14, 2010, I sent the following email to a few folks in my network (Alec Couros, Stephen Downes, Dave Cormier, Grainne Conole, David Wiley, Phil Long, Clarence Fisher, Tony Hirst, and Martin Weller; a few didn’t respond, and those who joined didn’t stay involved, with the exception of Phil):

As more learning activities occur online, learners produce growing amounts of data. All that data cries out to be parsed, analyzed, interrogated, tortured, and visualized. The data being generated could provide valuable insight into teaching and learning practices. Over the last few years, I’ve been promoting data visualization as an important trend in understanding learners, the learning process, and as an indicator of possible interventions.

Would you be interested in participating in a discussion on educational analytics (process, methods, technologies)? I imagine we could start this online with a few elluminate meetings, but I think a f2f gathering later this year (Edmonton is lovely, you know) would be useful. (Clarence, Alec, and I tackled this topic about three years ago, but we didn’t manage to push it much beyond a concept and a blog).

At the same time, I sent an email to colleagues in TEKRI (Rory McGreal, Kinshuk, and Dragan Gasevic) asking if this could be supported by Athabasca University. Dragan promptly replied, stating that “I can say that most of the things we are doing with semantic technologies are pretty much related to analytics and I would be quite interest in such an event”. Then he told me that my plan for a conference in fall 2010 was completely unrealistic, asking “[who] would be a potential participant? How we can get any audience in December?”.

Dragan and Shane Dawson, with whom I connected through a comment on this blog, are two critical connections and eventually friends. Except Shane. He is mean and has relationship issues. SoLAR would not exist without their involvement. Another important connection was Ryan Baker. Ryan had started the International Educational Data Mining Society a few years earlier. The fact that Ryan was willing to assist in the formation of a possibly competing organization speaks volumes about his desire to have rich scientific discourse. We ended up publishing an article in LAK12 about collaboration and engagement between our fields.

LAK11
Organization was slow and plodding for the first LAK conference. We built out our steering committee (defined as anyone who agreed to join) to include Erik Duval, Simon Buckingham Shum, and Caroline Haythornthwaite. We set up a Google group on Education Analytics at the end of March. The bulk of the planning for the first conference happened in that Google Group. By the end of June, I had seen the light of Dragan’s wisdom and agreed to move the conference to 2011. The LAK11 conference was held in Banff, Alberta, in March. It is important to note that we paid $500 for that logo. It should have come with a hit of acid.

The financials of any first event are critical. There is always risk. I’ve had events fail that cost a fair bit of money – a social media conference that I ran in Edmonton was a pleasant financial failure. For LAK11, we received financial support from Athabasca University, CEIT (University of Queensland), Kaplan, D2L, and the Gates Foundation. We generated a profit of ~$10k and that was forwarded to the organizers of LAK12 (Shane Dawson) to help seed the next conference. We didn’t have a formal organization to share in the expenses so each organizer for the first several years had to bear the financial risk. Paying past success forward made things easier for the next event. Leading up to LAK14, we were legally organized as SoLAR and took on the financial risk for local organizers.

Finding a publisher
In order to improve the scholarly profile of the conference, we pursued formal affiliation with a publisher. For many academics in Europe and Latin America, this was important in order to receive funding for travel. Dragan made numerous attempts to get Springer’s LNCS volume affiliation for the conference. The LNAI affiliation ended up being the avenue we were advised to pursue. Dragan put in the application on September 11, 2010. Springer stonewalled us at great length. We finally received confirmation that they would publish on July 17, 2011. Needless to say, as a professional organization, we did not want to work with a partner where that type of delay was considered acceptable. We were fortunate to connect with ACM, and our first proceedings were published with them. Simon Buckingham Shum and Dragan were critical in securing this relationship, and in many ways for the academic rigour now found in LAK. I have been appropriately criticized by top researchers like Ryan Baker because the conference proceedings aren’t open. Oddly enough, it was a decision we made in order to broaden access to travel funds for researchers from other countries.

My momma don’t like you
Not everyone was a fan of the idea of learning analytics. As this discussion thread on Martin Weller’s blog post reveals, there were voices of doubt around the idea of learning analytics:

Wish you luck in pursuing this Next Greatest Thing. Maybe next year’s can include the words “Mobile” “Emergent” and “Open” to broaden its hipness even further…really, really, really have been trying very hard not to make any comments since I first saw this announced early in 2010. I mean REALLY hard, because that comment above doesn’t even start to capture the amount of bullshit this smells like to me. But I am sure it will be a smashing success, a new field will have been invented, and my suspicions that there is no ‘there there’ even more unfounded. History will surely side with you George, of that I have little doubt.

Some of these doubts have become reality due to a techno-centric view of analytics, as is often captured by Audrey Watters. Interestingly, one of my first interviews on LA was with Audrey when she was writing for O’Reilly. The field has sometimes moved distressingly close to solutionism, and Audrey has rightly turned toward criticism. We need more criticism of the field, from both researchers and practitioners, and I find people like Audrey, who are bluntly honest, essential to progressing as a research domain.

LAK11

Leading up to LAK11, I organized an LA MOOC (haha, MOOCs were so cool back then). This served as an opportunity to get people onto the same page regarding LA and to broaden possible attendance at the conference. LAK11 was fairly small, with 100+ people in attendance.

About two days before LAK11, I sent out an email stating:

We are expecting a week of nice weather – beautiful for strolling around Banff and enjoying the amazing scenery. Weather in the Canadian Rockies can be a bit temperamental, so it’s advised to pack clothing for the possibility of some chilly days.

Well, I lied. We were expecting -2C. We got -35C. Freaking cold for those of you who haven’t experienced it before. It also generated exceptionally high attendance rates, as few people wanted to be outside.

The conference agenda (here) reveals the significant contributions of early attendees. While my first email to colleagues included my blogging network (Stephen, Alec, Dave, Martin), the LAK conference itself resulted in me engaging with a largely new social network, disconnected from much of what I had been doing with connectivism and MOOCs, though there were points of overlap. In many ways, I see both MOOCs and LA as extensions of my thinking on connectivism, as is my more recent focus on the social, affective, and whole-person aspects of learning.

Expanding and Growing

Following LAK, we spent some time organizing and getting our act together around what we had created. Over time it became clear that we needed an umbrella organization – one that was research-centric – to guide and develop the field. On Oct 2, I sent the following email to our education analytics Google Group. I include the bulk of it, as it reflects our transition to SoLAR – the Society for Learning Analytics Research.

With interest continuing to grow in learning analytics – at institutional, government, and now entrepreneurial levels – some type of organization of our shared activities might be helpful.

Based on the sentiment expressed at the post-LAK11 meeting on developing a group or governing body for learning analytics, a few of us have been working on forming such an organization. In the process, I’ve had the opportunity to meet and chat with several SC members (Erik Duval, Dragan Gasevic, Simon Buckingham-Shum) on different organizational structures that might serve as a model. We’ve done enough organizing work, we think, to open the discussion to a broader audience…namely the LAK SC (that’s you).

We’ve decided on Society for Learning Analytics Research (SoLAR) as a name for our organization. The term was coined by Simon Buckingham-Shum (program co-chair, LAK12). Obviously, we would like to invite existing LAK conference steering committee members to be a part of it. Are you interested in transferring your SC role to SoLAR? If so, please provide an image of your lovely head as well as a preferred link to your site/blog/work and a few sentences about how awesome you are.

We have also reserved the domain name: solaresearch.org for our society.

We envision SoLAR as an umbrella group that runs the LAK conference, engages in collaborative research, work with research students, scholar exchange, applies for grants, provides access for researchers to broader skill sets than they might have on their team, produces publications, etc. SoLAR is expected to be an international society/network where learning analytics researchers can connect, collaborate, and amplify their work. It is possible that SoLAR may occasionally provide feedback on policy details as states and provinces adopt LA. Maybe that’s a bit too blue sky…

Over the next few months, various documents will be drafted, including a charter, mission, and decision making process for SoLAR. For example, how do we elect officials? How do we decide where the conference will be held next year? etc. We (currently: Shane, Simon, Dragan, Caroline, John (Campbell), and myself) recommend that an interim SoLAR leadership board – the group just listed – be tasked with developing those documents and sharing with the SoLAR steering committee for comment and approval. Once this interim leadership has completed its organizing work, we will then open the process to democratic elections based on SC and society membership. We haven’t yet determined the criteria for being a SoLAR member (fees? attend a conference? invite only?) or how long SC members serve. Currently we are a self-organized group. Everyone is here either by an invite or expressing interest. Laying a clear, democratic, foundation now will help to position SoLAR as a strong advocate for learning analytics in education.

LAK12 was a tremendous success. Shane was a spectacular host. It became clear to us that interest in LA as a research activity and practice space was high. We arranged a meeting following the conference where we brought in ~50 representatives from funding agencies, corporations, and government. The intent was to discuss how LA might evolve as a field, what was needed to broaden impact, and how grant and foundation funding might assist in improving the impact of work.

Following LAK12, SoLAR engaged in a series of initiatives to improve the sharing of research and increase support for faculty entering the field. We had spent time in late 2011 discussing a journal, but didn’t get much traction on this until 2012. In early April, Dragan and Simon put together an overview of the journal theme, and it was approved by the SoLAR executive and announced at LAK12. Dragan, Simon, and Phil were the first editors. Simon stepped down shortly after it started and Shane stepped in. Shane and Dragan have been the main drivers of the Journal of Learning Analytics.

A mess of other activities were started during this time, including workshops at HICSS (organized by Dan Suthers, Caroline Haythornthwaite, and Alyssa Wise), Storms (local workshops), Flares (regional conferences), and events affiliated with other academic organizations such as the learning sciences. Basically, we were putting out many shoots to connect with as many academics and practitioners as possible.

One activity that continues to be highly successful is the Learning Analytics Summer Institute (LASI). In August of 2012, I sent Roy Pea from Stanford an email asking if he’d be interested in joining SoLAR in organizing a summer institute. We felt the Stanford affiliation signalled a good opportunity for SoLAR. Roy agreed and we started organizing the first event.

Roy and I didn’t connect well. Roy felt I was too impatient. I was pushing too hard to get things organized. Academic timelines always give me a rash. We managed to secure significant funding from the Gates Foundation, and the first LASI was a success, in no small part due to Roy’s organizing efforts. After LASI, we decided to move the institute to different locations annually – a perspective that I strongly pushed, as I didn’t want LASI to be affiliated with only one school. Due to my head-bumping with Roy and suggestions to host the next LASI elsewhere (Harvard, it turned out), I was written out of the final learning analytics report that he produced for the Gates Foundation on LASI. Academics are complex people!

A list of LASI, Flare, and LAK events can be found here.

Getting the finances right

Following LAK11, we started exploring university subscriptions to SoLAR. This was informed by Shane’s thinking on paying an annual fee to be involved in groups such as NMC or EDUCAUSE. We set up a series of “Founding Universities”, each committing about $10k to be founding members. This proved to be a prudent decision, as it gave us a base of funds to use for growing our membership and hosting outreach events. Our doctoral seminars, for example, are funded and supported by these subscriptions.

We had strong corporate support as well with organizations like D2L, Oracle, Intel, Instructure, McGraw-Hill, and others providing support for the conferences and summer institutes. Corporate support has proven to be valuable in running successful conferences and enabling student opportunities. We decided to stay away from sponsored keynotes so as to ensure academic integrity of our conferences. I continue to be disappointed that we have been largely unable to get support from pure LA companies such as Civitas and education research arms of companies such as SAS. The students that we graduate grow the field. LA companies benefit from field growth. Or at least that’s my logic.

The founding members and current institutional partners are listed here. Each one has been central to our success.

Enter Grace
Grace Lynch joined SoLAR in 2012. During LASI at Stanford, she pitched the idea of hiring someone to do administrative and organizing work for SoLAR. Up to that point, we were run by academics devoting their time. The workload was increasing. And those who know me also know my attention to detail is somewhat, um, varied. Hiring Grace was the best decision that I made in SoLAR. She was able to get us organized, financially and administratively. The success of SoLAR and of LAK and LASI events is due to her effort. I frequently hear from others who attend a SoLAR event for the first time about how impressed they are with the professionalism and organization. That’s Grace’s doing.

Engaging with big ideas
During LAK11, we expressed our goals as an association:

Advances in knowledge modeling and representation, the semantic web, data mining, analytics, and open data form a foundation for new models of knowledge development and analysis. The technical complexity of this nascent field is paralleled by a transition within the full spectrum of learning (education, work place learning, informal learning) to social, networked learning. These technical, pedagogical, and social domains must be brought into dialogue with each other to ensure that interventions and organizational systems serve the needs of all stakeholders.

In order to serve multiple stakeholders, beyond LAK/LASI/Journal, we also held leadership summits and produced reports such as Improving the Quality and Productivity of the Higher Education Sector: Policy and Strategy for Systems-Level Deployment of Learning Analytics.

We have also been active in helping to shape the direction of the field by advocating for open learning analytics – a project that is still ongoing.

Losing Erik Duval
When one’s personal and professional worlds come together, as they often do in long-term, deep collaborative relationships, individual pain becomes community pain. Erik Duval, a keynote speaker at our first LAK conference, passed away earlier this year. He shared his courageous struggle on his blog. Reading the Twitter stream from LAK16, I am encouraged to see that SoLAR leadership has set up a scholarship in his honour. His contributions to LA as a discipline are tremendous. But as a friend and human being, his contributions to people and students are even more substantive. You are missed, Erik. Thank you for modelling what it means to be an academic and a person of passion and integrity.

What I am most proud of
LAK is a unique conference and SoLAR is a special organization. I have never worked with such open, ego-free, “we’re in it because we care” people in my life. I wish that future leadership also has the pleasure of experiencing this collegial and collaborative spirit. Our strengths as a community lie in the diversity of our membership. This diversity is reflected in global representation and academic disciplines. As a society, we have better gender diversity than is found in many technical fields. It is not yet where it should be. And the progress that we have made is due to the advocacy of Caroline Haythornthwaite and Stephanie Teasley. The current executive is a reflection of that diversity.

What’s next
At LAK15, I stepped down as founding president of SoLAR. I felt like it was time to go – I’ve seen too many fields where a personality becomes too large for the health of the field. We’ve always emphasized that SoLAR should be a welcoming space where individuals from different disciplines and research interests can find a place to play, to work, to connect. In order for this to happen, fluid processes for getting opinionated people out and new ideas in are important!

My attention is now primarily focused on two areas: developing LA as a field in China and increasing the sophistication of data collection. Recent visits to China (Tsinghua University and Beijing Normal University), as well as an Intel LA event in Hangzhou in the fall, have made it clear to me that LA is robust, active, and sophisticated in China. In many of the projects and products that I’ve seen, researchers there are well ahead of where the current state of publishing in English suggests we are. In conversations with colleagues at Tsinghua, we have agreed to make the development of a research network and academic community in China a key priority.

Secondly, at the LINK Research Lab, we have turned our research attention to wearables and ambient computing. As I stated in my keynote at LAK12, increasing and improving the scope and quality of data collection is needed in order to improve the sophistication of our work as a field. Physiological and contextual data will assist in advancing the field, as will a greater focus on social and affective aspects of learning. Cognition is only one aspect of learning. As a consequence, a focus on affective, social, and meta-cognitive aspects, as well as on process and strategy, is required. To get there, we need better, broader data.

Well, that’s my reflection on how we got here with LA and SoLAR. What have I missed?

Humpback whale in English Bay

Jon Dron's blog - April 10, 2016 - 08:37

Damn it, I didn't bring my big camera. The camera in my phone does not do this justice...

There is something genuinely awesome - in the original sense of the word - about being out on the water in a boat that is smaller than the creature swimming next to you. The humpback whale swam around us for about 40 minutes before moving on. Somewhere between 10 and 20 seals hung around nearby hoping for some left-overs, as did a small flock of seagulls. We tried to keep our distance (unlike a couple of boats) but the whale was quite happy to swim around us.

Hello world!

Connectivism blog (George Siemens) - March 8, 2016 - 07:11

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

McDonald's as a learning technology

Jon Dron's Landing blog - August 2, 2012 - 14:49

Whenever I visit a new country, region or city I visit McDonald's as soon as I can to have a Big Mac and an orange juice. Actually, in Delhi that turns into a Big Raj (no beef on the menu) and in some places I substitute a wine or a beer for the orange juice, but the food is not really important. There are local differences but it's pretty much as horrible wherever you go.

I inflict this on myself because The McDonald's Experience should, on the whole, be a pretty consistent thing the world over: that's how it is designed. Except that it isn't the same. The differences, however, compared with the differences between one whole country or city and another, are relatively slight and that's precisely the point. The small differences make it much easier to spot them, to focus on them, and to understand their context and meaning. Differences in attitudes to cleaning, attitudes to serving, washroom etiquette, behaviour of customers, decor, menu, ambiance, care taken preparing or keeping the food, and so on are much easier to absorb and reflect upon than out on the street or in more culturally diverse cafes, because they are more firmly anchored in what I already know. Tatty decor in McDonald's restaurants in otherwise shiny cities speaks volumes about expectations and attitudes; open smiles or polite nods help to clarify social expectations and communication norms. Whether people clear their own tables; whether the dominant clientele are fat, or families, or writers; whether it's a proletarian crowd or full of intelligentsia, or a place where youth hang out. Whether people smoke, whether they drink. How loud the music (if any) is playing. The layout of the seating. How people greet their friends, how customers are greeted, how staff interact. How parents treat their children. There's a wide range of more or less subtle clues that tell me more about the culture in 20 minutes than days spent engaging more directly with the culture of a new place. Like the use of the Big Mac Index to compare economies, the research McDonald's puts into making sure it fits in also provides a useful barometer to compare cultures.

McDonald's thus serves as a tool to make it easier to learn. This is about distributed cognition. McDonald's channels my learning, organises an otherwise disorganised world for me. It provides me with learning that is within my zone of proximal development. It helps me to make connections and comparisons that would otherwise be far more complex. It provides an abstract, simplified model of a complex subject.

It's a learning technology. 

Of course, if it were the only technology I used then there would be huge risks of drawing biased conclusions based on an outlier, or of misconstruing something as a cultural feature when it is simply the result of a policy misguidedly handed down from a different culture. However, it's a good start: a bit of scaffolding that lets me begin to make sense of confusion and makes it easier to approach the maelstrom outside, with a framework to understand it.

There are many lessons to be drawn from this when we turn our attention to intentionally designed learning technologies like schools, classrooms, playgrounds,  university websites, learning management systems, or this site, the Landing. Viewed as a learning technology about foreign culture, McDonald's is extraordinarily fit for purpose. It naturally simplifies and abstracts salient features of a culture, letting me connect my own conceptions and beliefs with something new, allowing me to concentrate on the unfamiliar in the context of the familiar. Something similar happens when we move from one familiar learning setting to the next. When we create a course space in, say, Moodle or Blackboard, we are using the same building blocks (in Blackboard's case, quite literally) as others using the same system, but we are infusing it with our own differences, our own beliefs, our own expectations. Done right, these can channel learners to think and behave differently, providing cues, expectations, implied beliefs, implied norms, to ease them from one familiar way of thinking into another. It can encourage ways of thinking that are useful, metacognitive strategies that are embedded in the space. Unfortunately, like McDonald's, the cognitive embodiment of the designed space is seldom what learning designers think about. Their focus tends to be on content and activities or, for more enlightened designers, on bending the tools to fit a predetermined pedagogy. Like McDonald's, the end result can be rather different from the intended message. I don't think that McDonald's is trying to teach me the wealth of lessons that I gain from visiting their outlets and, likewise, I don't think most learning designers are trying to tell me:

  • that learning discussions should be done in private places between consenting adults;
  • that it is such a social norm to cheat that it's worth highlighting on the first page of the study guide;
  • that teachers are not important enough to warrant an image or even an email link on the front page;
  • that students are expected to have so little control that, instead of informative links to study guide sections, they are simply provided with a unit number to guide their progress;
  • that the prescribed learning outcomes are more important than how they will be learned, the growth, and the change in understanding that will occur along the way.

And yet, too many times, that's what the environment is saying: in fact, it is often a result of the implied pedagogies of the technology itself that many such messages are sent and reinforced. The segregation of discussion into a separate space from content is among the worst offenders in this respect as that blocks one of the few escape routes for careful designers. Unless multi-way communication is embedded deeply into everything, as it is here on the Landing, then there is not even the saving grace of being able to see emergent cultural behaviours to soften and refine the hegemonies of a teacher-dominated system.

Like McDonald's, all of this makes it far more likely that you'll get a bland salty burger than haute cuisine or healthy food.