The Third Space


I’ve written about or mentioned a certain bugbear in many posts over the past few years. It’s something that, to quote great thinker and philosopher Peter Griffin, ‘really grinds my gears’.

I’m attending ALTC in Liverpool this week, and this morning, sitting in a 20-minute session on e-assessment, the thing that grinds those gears of mine was mentioned by somebody else. Finally, I have discovered that my frustrations are not isolated. Indeed, I echoed what had been said as a Tweet, and it has become one of my most retweeted Tweets, retweeted, I suspect, by others who feel the same as me.

Here’s the thing. I work at a university where my role includes, among other things, the development of academics’ teaching skills via staff training sessions. To an extent, I teach academics how to teach. There is an advisory element to my role, and I can often be found sitting with academics, suggesting teaching methods and tools that will help them to better embed learning technology and digital literacy into their programmes. I sit in curriculum planning meetings and highlight sessions and topics within a range of programmes that lend themselves to technology-based methods such as blended learning and the flipped classroom. I have even helped to shape the direction my institution takes regarding learning technology.

But, because I am aligned to the professional services sphere as opposed to that of the academic, I am officially, as I have learned today, an occupant of the ‘Third Space’. A space that is both academic and supportive yet specifically neither, where members of professional support resent me for ‘hanging out’ with academics, but academics take no notice of me as I don’t have their power, influence or ‘voice’. And that’s why I can know everything about, say, lecture capture, from how to schedule a lecture capture recording through to which rooms are equipped with lecture capture facilities, and I can quote academic papers that discuss how and why to use lecture capture to improve student retention, satisfaction and teaching…but an academic will be given the role ‘academic lead for lecture capture’ and, inevitably, have absolutely no idea what lecture capture is, how it works or why it matters. They will then send emails out to every other academic misinforming them about lecture capture, getting every aspect of it wrong, while I silently clear up the mess behind them…but what they say ‘goes’ just by dint of them being an academic. By this logic, if an academic says that we have to use a ‘Speak and Spell’* as a voting pad or iPads as chopping boards, then that’s what will happen…because they are an academic and what they say is law.

Rant over. And rather than being emotive because I’m in the thick of this situation, I want to look at it from a more objective and, dare I say it, academic stance. Let’s begin with the work of Celia Whitchurch, the academic who coined the phrase ‘Third Space’.

In 2010, Whitchurch published a paper called Optimising the Potential of Third Space Professionals in Higher Education. Whitchurch’s work sought to develop her concept of a “Third Space” between professional and academic spheres of activity in higher education. This was represented in the paper by three processes described as Contestation, Reconciliation and Reconstruction. Whitchurch suggested that successful navigation of Third Space involved working through the challenges and tensions brought about by Contestation, building collaborative relationships via perceptions of added value during Reconciliation, and constructing new forms of plural space during Reconstruction.

There’s a PDF copy of the paper in full here.

The study saw the complex dimensions of Third Space as an emergent space in its own right, and as a concept to be applied to higher education institutional environments to illustrate “another mode of thinking about space that draws upon… traditional dualism, but extends well beyond [it] in scope, substance and meaning.” And that, I believe, is a solid starting point. At the moment I occupy a kind of ephemeral, limbo-like space where I am neither one thing nor another, yet somehow both. Giving this space a name makes it real, and gives it substance.

By developing the concepts of Contestation, Reconciliation and Reconstruction, the study has progressed understandings of roles and relationships in Third Space, including the creation of new spaces and identities. There is a sense of resistance and struggle, via the Contestation process, as a legitimate part of identity construction and working practice. It therefore offers a way of acknowledging the more challenging aspects of Third Space at the same time as those that are more developmental and creative, providing a tool for understanding increasingly complex relationships. But how long will this take? Whitchurch wrote her paper in 2010. It’s 2017 now, and I hadn’t heard of the term Third Space until this week. Why has this not become established – recognised – after all this time?

Third Space, Whitchurch posits, demonstrates that a greater emphasis on relationships than on organisational structures can reduce checks and balances and leave some staff, particularly those who are less experienced, feeling vulnerable. Feedback suggested that there is also a sense in which Third Space could become all things to all people, or a default position for people who feel that they do not ‘fit’ the formal structures, possibly with a hint of the ‘subversive’. It could also foster a sense of a lack of identity if an individual was moving from project to project as a ‘project manager’, especially if they did not have a title that linked them into established institutional structures. And I see this. We may rail against it, but humans like to pigeonhole everything. You are an academic, ergo you do this, this and this. You are professional support, so you do that, that and that. It keeps things clean, manageable and clearly delineated. Blur the lines – create that Third Space – and the world may end. But in 2017, aren’t we all now so used to having to perform multiple roles outside of our official (and ever-growing) job title that such delineation becomes too rigid and, effectively, keeps us in a stranglehold?

The study suggests that optimising the potentials of those working in Third Space is likely to be a joint process, with (and here’s the kicker) responsibility on institutions to recognise and respond to changes that are occurring, and an onus on individuals to ‘educate’ their institutions about how Third Space might be used most advantageously.

Moreover, although Third Space working has implications for the relationship between institutions and their staff, this does not necessarily mean a major shift in approach. It may, rather, be a question of being creative within existing mechanisms, so as to give credit for new forms of activity. For instance, Third Space activity can be supported by more flexible employment packages for individuals who occupy a broader range of roles than hitherto, and develop careers that do not follow a traditional academic or professional pattern.

What seems clear, however, is that relationships rather than structures are at the heart of the way that Third Space works for individuals and institutions. Both, therefore, may wish to review the concept of Third Space, the processes associated with it, and ways in which they might make it work for them.

Yes please. By being confined within a binary system, I am unable to do the best I can for those whom we all serve in HE – the students. And surely that’s wrong?

*For anyone under 40, this is a Speak and Spell:


You’re Tired!


It was on this very day 10 years ago that markets woke up to a problem when French bank BNP Paribas halted redemptions – withdrawals by investors – on three investment funds. It was this act that triggered what has become the UK’s most recent and largest financial crash. Conversely, it was also ten years ago that Apple launched its first iPhone. Both of these anniversaries, though not intrinsically linked, have given me cause to think about where we are a decade later, and have prompted this post.

From 2008 onwards, the word recession became a part of our everyday life and language, and as budget cuts began to take hold, so the educational zeitgeist – be it around compulsory or post-16 teaching and learning – started to be re-framed around two key words: innovation and enterprise. It makes practical sense to think in these terms when the country’s economy is flat-lining: the need to do more with less (innovation) and the need to try to make a living at a time when the financial ground is barren (enterprise) become matters of survival. I’ve written about innovation in education before, so if you want to check my earlier post out before continuing, do go ahead.

If I were to link these two words with cognitive processes, then creativity of thought, lateral thinking, and imagination, would be fundamental.

A decade ago, the iPhone was the epitome of innovation. It was, and still is, the ‘must have’ smartphone: it’s still ubiquitous, it’s rather beautiful, and it has, effectively, replaced the office. But now that it’s ten years old and developers’ ideas are running out, each brand new iteration becomes less groundbreaking and certainly less innovative.

As a member of the conference panel for the next International Conference of Education, Research and Innovation in November (ICERI 2017), I’ve been reading through a selection of papers that have been submitted by educators, researchers and technologists from around the world. Though it has been a genuinely interesting and rewarding experience, something has been niggling me, and I can’t shake it off. And it’s not just something that’s coming from some of the submissions I’ve been reading: I’ve noticed it locally too. Things that are being mooted as innovative are, on closer inspection, no more than the re-hashing or re-branding of concepts, methods and processes that have been around for ages. At the risk of sounding like ‘Irritated of Nantwich’, I’m going to suggest that filming clinical skills procedures and then making them available for students to view online is not innovative, asking teachers to curate and then reflect upon their CPD by way of an online portfolio or blog isn’t a new idea, and recording audio feedback for students rather than typing assignment feedback isn’t enterprising. These are all genuine examples of practice that has been labelled innovative and enterprising and that have been on my radar for a few weeks. The thing is, they are also all examples of practice that were on the same radar a decade ago.

So have we run out of ideas?

I don’t think so. I’d like to think that our collective imagination is limitless. But I do think that because we live and work in a daily state of emergency – where our day in the office amounts to little more than fire fighting, and our home lives are increasingly fraught and lived in the shadow of political unrest, inequality, lack of resources and a race to the bottom – it can be hard to find the space, the time and the right frame of mind in which to be innovative or enterprising. And so, because we are told that we must be enterprising and innovative in order to raise our institution’s profile and remain relevant, but we can’t reasonably be inspired on command, we re-package, rename and rebadge projects that have already been done, hoping that our audience likes the emperor’s newly-tailored suit. Maybe if we had time to breathe and, vitally, were allowed to take risks and make mistakes, we would be able to be truly innovative. Sadly, I can’t see a time when this will be allowed to happen.

But I am determined to finish on a positive note. A cafe down the road from me not only makes the best coffee in Cardiff, it has been innovative and enterprising by doing one small thing. You know those cardboard coffee sleeves that you get in Starbucks and Costa? Visit the cafe, buy one of their fantastic coffees, then for one pound, you can buy a reusable coffee sleeve made from material. Simple. Brilliant. And it saves trees too.

Game On! (Part 2)

A warning to other bloggers out there: this is what happens when you promise your audience that you are going to write something groundbreaking based around a very specific subject (one you arrogantly think is yours and yours alone to write about), but instead you start prevaricating and writing blog posts about other things.

I have mentioned a couple of times in recent weeks my need to write about a theory that has been buzzing around my head for a while now: that console gaming is therapeutic. Well, an article from The Conversation has just popped up in my Facebook feed, and it says exactly what I should have said a few weeks ago. It would appear that my groundbreaking theory isn’t as groundbreaking as I had assumed. However, finding out that others share your theory is comforting. This particular article makes for concise and genuinely interesting reading, had my head nodding enthusiastically in agreement (and my teeth grinding in frustration at my own laziness) and, vitally, goes on to cite academic research published in April 2016. The research, carried out by Leonard Reinecke of the Johannes Gutenberg-Universität Mainz, states that when video games are systematically used after exposure to stressful situations and strain, recovery experience is a significant facet of the gaming experience (Reinecke, 2016). Console gaming as therapy. Boom!

In terms of my own recovery from stressful periods, I concur absolutely. Were it not for Dragon Age II at an incredibly stressful period of my professional life, I would probably not be in the positive frame of mind I’ve been able to maintain for a few years. I’d also be lucky to be able to work at all. Skyrim got me through equally tough periods in my personal life, and on a smaller scale, I still always have a ‘go-to game’ in my PS4 in case I’ve had a trying day and need to let off steam and ‘ground myself’ again. This week, it’s mostly Elder Scrolls Online, though reading this back to myself, I really do need to find a genre of game that doesn’t involve swords, sorcery or picking flowers in order to make potions…


Games and Recovery: The Use of Video and Computer Games to Recuperate from Stress and Strain. Available from: [accessed Jul 10, 2017].

‘Isn’t she in Game of Thrones?’

‘Darksansastark…’ said my erstwhile partner when he caught me clapping my hands with the same level of enthusiasm a horse-mad four-year-old might display on receiving a pony for her birthday.

I’ll go back a little. It was the 6th of July, and I had just received an email from Linden Lab informing me that I had been selected to be among the first to create ‘social VR experiences’ with Sansar, Linden’s virtual world for the virtual reality generation.

It’s been a long time since I was at the forefront of anything techy, so my excitement was entrenched in that little part of me that wants to try everything before anyone else does. The fact that it took me ages to log in, and that my avatar is about as customisable as a lump of coal, has the face of a corpse and walks like she has done a number two in her pants, is nothing. Whatever happens, I was one of the first people to log into Sansar. And, most importantly, as a developer, I can build my own space. Ladies and gentlemen, let me introduce you to:


Oh, we learning technology types may scoff at how far behind we think our institutions are compared to the rest of the world, but I’m pretty certain that Cardiff University is (probably one of) the first HE institution(s) to have a presence in Sansar, though how long it will last, I don’t know.

So how does it look and how does it feel? Graphically, Sansar is so advanced it makes Second Life look like an early Sega Megadrive game, and the quality of the audio is just fantastic: like being in a cutting-edge cinema. I’m even adoring the font style used for messaging (I have a thing about fonts; I may need help). But it’s not all beer and skittles, and at the moment it’s really just a triumph of style over substance. Because it’s so new it’s frustratingly limited, it’s laggy, and even the usually simple process of trying to move objects around is a pig. So here we have exactly what we had 12 years ago when Second Life was introduced – something with real promise, and a glimpse of a future that I really want to be a part of, but with more bugs than an NHS hospital. As with Second Life’s early days, Sansar delights and frustrates in equal measure, and in this iteration the virtual world can’t even be used with VR headsets as yet – despite this being the very premise on which it was founded.

I don’t care. I’m still really excited. Here’s a screenshot of my avatar in her new Cardiff University ‘home’ to tantalise and delight you:


The way to academics’ hearts is through their minds

I presented the following abstract at Cardiff University’s Learning and Teaching conference on Tuesday. And no, I haven’t forgotten about those ‘gaming is the future’ blog posts I keep promising; other things keep getting in the way!

When it comes to Technology Enhanced Learning (TEL), there has long been an emphasis on demonstrating how to use digital tools in staff development sessions. However, there is little evidence of other staff development sessions examining the methods and models behind TEL. Institutional directives request that staff use a Virtual Learning Environment (VLE) and offer training on the mechanics of uploading documents and renaming folders, but they do not explain the methodologies or pedagogic models behind using a VLE. Other directives require that academic staff embed digital literacy skills into their teaching practice in order to hone their students’ own skills. Academic staff are rarely asked if they know what digital literacy means themselves; it is hoped, it would seem, that the meaning of digital literacy is learnt and passed on to students through a process of osmosis. I would suggest that if academics and teachers work from a taxonomy of pedagogy, then it is from that taxonomy that staff development should be approached.

Repeated reviews into the professional development of teachers, and ways to diminish their fear of technology, have recommended that staff be given substantial time if they are to acquire and, in turn, transfer to the classroom the knowledge and skills necessary to effectively and completely infuse technology into curriculum areas (Brand, 1997). However, lack of time is just one issue, and the constant emphasis on the need to ‘find time’ merely distracts from the proverbial elephant in the room: that academics are ‘scared’ of technology because they aren’t told how it fits a familiar pedagogic framework. Learning technologists are expert at explaining how to use a tool, but often miss out the pedagogical value of the tool, assuming that the teacher will think of a use for it.

In response to this, I currently run sessions for teachers and academic staff looking at methods and models such as the flipped classroom, Personal Learning Networks, blended learning, digital literacy, the benefits of online communities of practice, and the differences between pedagogy, andragogy and heutagogy. We have debated at length Prensky’s notion of the Digital Native against that of the Residents and Visitors model espoused by Dave White. We have looked at the psychology behind the online learner and their need to feel part of a group. When staff begin to understand these theories and methods, they feel better placed to choose tools that are appropriate to their curricula, their students and to relevant assessment process.

I would suggest that there is a real need to do more of this. If academics can see things from their particular (and familiar) perspective, they will see which tools work best and then, if needed, be taught how to use them.

Technology often feels like something that is being ‘done’ to people via institution-wide directives, and not something that they can do themselves. It is now 2017, so the time has come for a change in thinking.


Brand, G.A. (1997), ‘Training Teachers for Using Technology’, Journal of Staff Development, Winter 1997, Vol. 19, No. 1

Prensky, M., (2001), Digital Natives, Digital Immigrants, located at:,%20Digital%20Immigrants%20-%20Part1.pdf, date accessed: 22nd February, 2017

White, D.S. and Le Cornu, A, (2011), Visitors and Residents: a New Typology for Online Engagement, located at:, date accessed: 22nd February, 2017

Figuratively Speaking

I often like to think of myself as the Queen of the Cliché or Dark Mistress of the Analogy. Not in terms of any Stephen Fry-esque use of language, but because I’ve noticed that it’s how I communicate, by and large, on a professional basis. However, having read the following post through before pressing ‘Publish’, I’m going to back-pedal on that first sentence a bit. I’m not as cool as I think I am, and I sound more like Theresa May by the week, with her ‘strong and stable magic money tree’ soundbites and the way her limbs squeak if Team Maybot have forgotten to squirt some WD40 into her joints. Mine just squeak because I’m old. But let’s return to the subject at hand and my trotting out of paraphrased stock phrases ad nauseam. Here are some examples I have used in the last week alone:

‘You can lead a horse to water, but you can’t make it drink. And actually, you can’t even lead it to water half the time.’

(Translation: You can tell people about technology, but you can’t make them use it. And much of the time, because they are just too busy or too frightened or too uninterested in technology in the first place, you won’t even get the opportunity to tell them about it.)

‘If you build it, they will NOT come.’

(Translation: If you knew your students had set up their own Facebook group for your course, why did you then set up a group for them on Yammer, and why are you now complaining about how technology doesn’t work because they aren’t using it?)

‘We can’t just switch off the Internet! It’s too late for that!’

(Translation: I know you felt more in control when you were the ruler of the lecture theatre and in charge of the overhead projector and acetates, but it’s 2017, the world has changed, and our students really do expect more than that, so please try to accept that and let me help you to move on.)

That last one I used just yesterday, at a programme planning meeting. In 2018 we’re going to be offering some new modules at postgraduate level, and I had been invited by the module directors to deliver a presentation looking at how and why we need to work together to embed technology into these new modules.

Before I did my bit, the marketing team were talking about how to make these modules more sellable, both internationally and locally. ‘What are the School’s USPs?’ asked the team. A member of the teaching team stuck their hand up to offer a suggestion.

‘That we use traditional teaching methods,’ said my colleague, ‘and none of this blended learning stuff we keep hearing about. My entire course is taught completely face to face in the lecture theatre, and I think that this is what my students want and what makes my course unique.’

So. Despite a swathe of comments to the contrary from NSS respondents, I realised again that I am still wading through wet cement and fighting the same battle I’ve been trying to fight for almost 15 years: technology versus tradition. A million questions, suppositions and accusations clamoured for recognition at the front of my brain hole:

‘Thank God your course is unique…but do you think that’s what they want? Have you ever actually asked them? Or are you scared of technology and too proud to admit it? Or are you scared of technology and terrified of being replaced by a robot? Or are you so close to retirement that you just want to carry on coasting along for another two or three years without being hassled? Or are you completely uninterested in technology, and assume that because you are, everyone is? And what is it that I haven’t done? Because clearly I have failed you if you feel this way.’

I delivered my presentation a little later, about how blended learning, e-learning and micro-learning could give these new modules an edge, then talked about how respondents to student surveys are crying out for the same things: more lecture capture, a more organised and up-to-date Virtual Learning Environment and…yes…because it’s too late to switch off the internet…more technology enhanced learning and an end to didactic lecture-theatre-based content eight hours a day, five days a week.

I’m not sure whether my message got through, because it’s hard to be heard when the cement mixers keep depositing their warm sludge around your ankles wherever you go. 😉

(BTW: I’ve not forgotten about my ‘why console gaming is great’ series of posts, it’s just that this cropped up and I needed to write it down.)


Game On! (Part 1)


I wanted to write a post about video gaming and how, not only has it been a source of entertainment to me for over 40 years, but, almost as a by-product, it has also been a tool for therapy and for education. I then realised it would make for a very long post. Instead, I am going to chop it up into digestible pieces and attempt to write a regular series of thematic posts. It makes sense, sequentially, to make the first one an overview of sorts, and to set the scene. So here it is:

Introduction (or: ‘Coming to think of it, are Generation X the original gamers?’)

I first ‘owned’ a video game console in 1978. I was eight years old, and my parents ran one of the first pubs in the country to have a Space Invaders cabinet standing in the corner of the public bar. It terrified and delighted me in equal measure. After school, I would stand at my dad’s side and delightedly watch him zapping those sideways-creeping, crab-like aliens, the music becoming quicker and more ominous as more were destroyed and their rampaging across the screen became more desperate. This he would do at opening time, so fairly often, before I was sent upstairs and away from the soon-to-be smoke-filled haze of an adults-only environment (and how many hyphens do I need to construct a sentence, eh readers?), the first customer of the evening would wander in and dad would have to leave the game mid-level, asking me to take over until all three lives were lost. They usually were, within about 15 seconds: I was eight, had no fine motor skills or coordination, and was too scared to do anything other than panic, stand there doing nothing and die immediately. That’s when the game would terrify me. Five minutes of excitement, 15 seconds of terror, then upstairs quickly for crispy pancakes, Coronation Street, bath and bed.

Via stints in my teenage years with a Commodore 16, and a few lacklustre attempts at playing Super Mario and Sonic the Hedgehog in my early twenties (both, ultimately, games about testing the player’s timing rather than problem solving, which didn’t interest me in the slightest), video gaming had fallen so far off my radar it was now on Mars. It was only in my mid-to-late twenties, to silence my console-addicted husband of the time who kept nagging me to ‘just have a go’, that I picked up a joypad, stuck Tomb Raider on his PlayStation and my world changed.

I’m 47 now. I own an Xbox 360 and a PS4, and I still play Tomb Raider games, but my tastes have changed. I’m currently playing Elder Scrolls Online (so perhaps I should have realised, years ago when I was addicted to Fighting Fantasy books, that role playing games would become my go-to genre). This year alone I have dedicated what amounts to hundreds of hours of free time to Final Fantasy XV, and to a remastered version of Skyrim that I first completed six years ago when it was originally released on an older system. Last year it was Fallout 3, Fallout: New Vegas and Fallout 4 – though thanks to Donald Trump, I can no longer play these without wincing. I’m a sucker for a few hours on Silent Hill too. But I have to play with the lights on.

You might be shaking your head in disgust now. All those hours hunched over a console, staring at a television, when you could be doing something healthier / more sociable / ‘useful’!  Tsk!

Over the next few posts I hope to be able to change your mind or, if you are a fellow gamer, confirm what you have been suspecting for a while. Either way, I’ll be talking about why I think gaming is not the product of a misspent youth, not just for ‘sad people’, and hasn’t, as yet, turned me into a thug or a terrorist. I will also be suggesting that gaming can be used as therapy and looking at why I think console gaming may have value as a teaching and learning tool.

A bit on the side…

That got your attention, eh? 😉

I’m going to start this post by travelling back in time about 20 years, to when, at the tender age of 27, my career in education started and I became an adult literacy tutor.

I bloody loved it, because everything I had been immersed in to that point had been about words and language. And yes, we take it for granted that parents and school teachers will teach children how to read, but there have always been adults who, for one reason or another, have fallen through the net, and it was them that I wanted to reach out to.

As my career evolved, it moved away from adult literacy towards teacher training, then away from teacher training into Technology Enhanced Learning and BOOM! – here I am, your friendly neighbourhood Learning Technology Manager, who is still all about words and language, but who now spends her daylight hours being more about HTML5 and SCORMs and flipping the blended learning MOOC…

…the thing is, as much as I love the world of technology, I still hanker for those days of adult literacy, burbling on about how getting to grips with language makes life just so much easier and opens up so many possibilities (often literally as well as metaphorically). So it’s with a Ric Flair-style WOOOOOOO! that I can announce I have made a brief return to the world of adult literacy by way of an on-the-side freelance gig that has made me punch the air with glee.

Take a look at this:

So here’s a situation where refugees and indigenous people are living and working side by side through circumstance rather than choice. This has led to frustration and confusion in both groups, not helped by the fact that the number of languages being spoken across the board is massive – and the blocks to communication this has thrown up seem almost insurmountable. It makes sense, then, to agree on a lingua franca, and in this instance it has been agreed that this will be English.

This is where the Avallain Foundation comes in. The foundation wants to focus on people who have been left behind because of emergency and change, and they firmly believe that education is the one constant that can be the key to going back into society, the community or the labour market; something I’m very much in agreement with. They are also very aware that education isn’t limited to the classroom, and that new technologies and the internet enable access to lifelong learning on a very broad scale.

So they (we) are developing adult literacy and numeracy curricula at three levels – beginner, intermediate and advanced – to be delivered in a blended learning format. The content is embedded in the stuff the students need to know, so it’s all written around food hygiene, healthy living, disease prevention, computers and the internet, and business and commerce. And that’s not stuff that the foundation decided they need to know: they actually went to Kakuma and talked to the refugee and indigenous populations to find out what potential students actually wanted.

What’s great is that I’m coming at this from two angles – I’m writing the level 3 literacy curriculum, but I’m also learning how to use Avallain’s in-house e-learning authoring tool to develop the online content based around it.

This has been the first time since my teaching days that I have been involved in the whole process; in my current day job, subject specialists give me content that I go on to develop electronically. I’d forgotten how much deeper the relationship with the content and with the final electronic product goes when you are involved from the jotting-the-theoretical-content-down-on-the-back-of-a-fag-packet stage right through to the beta-testing-the-electronic-resource stage.

I wonder whether this is the subconscious reason why some academic staff don’t engage with e-learning at all – because, on some level, they don’t want to feel the disconnect that comes when they hand their content (their ‘knowledge’) over to someone who doesn’t understand the subject area, but is going to go on to develop that knowledge into something that they, as the subject specialists, don’t feel they have much ownership of.  I certainly feel as if I have done a much more immersive and ‘well-rounded’ piece of work if I write the content from a subject specialist perspective, and then go on to develop that content using my learning technology skills.

Developing these curricula is a wonderful, fulfilling experience – but it has certainly given me much pause for thought.

Make Mine a Double


I’ve been thinking a lot about the psychology of eLearning. I’m not even certain that ‘the psychology of eLearning’ is a thing – but it is something that I find myself banging on about a lot at the moment, especially when talking to academics, instructional designers and other learning technologists, so I may sound like a crazy fool. Or this is old news, in which case, you can stop reading now.

If you’re still reading, I’ll try to explain what I mean by way of one of my patented food analogies:

A couple of weeks ago I went back to my home town for the weekend to catch up with family and friends, and started the weekend in the pub. We started off in the local Wetherspoons, where my partner ordered a gin and tonic. He specifically ordered Tarquin’s gin (a brand that he had never tried before), and it was served in a standard tumbler with a couple of ice cubes and a slice of lime. And very nice it was too –  I had a sip and decided that I wanted to buy a bottle while I was in Cornwall.

We finished the night in our hotel’s bar, where my partner ordered another Tarquin’s gin and tonic. This time the drink was served in a long-stemmed gin glass, again with ice and a slice of lime. Again, I tried a sip – but this time it tasted much better. It shouldn’t have; it was the same mix of the same ingredients – but I am convinced that because it was served in a ‘proper’ gin glass and not a standard tumbler, it made a real difference. On a subconscious level, I guess I felt that I was being ‘better cared for’. Maybe it had a placebo-like effect: because it looked nicer, it tasted nicer.

I think the same applies in eLearning; specifically via instructional design.

A lot of people involved in developing online resources suggest that function trumps design. As long as an online package does what it is supposed to do, then it works. And on a mechanistic level, yes, it does. I mean, who wants something that looks nice, but where the interactions don’t work? But I have seen a lot of fully-functioning eLearning packages that, for all their drag-and-drop, fully-functioning multiple choice questions and multimedia elements, just look uninspiring. Lots of black text on a white background, lots of corporate clipart, and tonnes of ‘click forward for more of the same’. These packages do what they need to do, and students work through them because they need to – but do the students feel valued by the content? Are these packages the eLearning equivalent of a gin and tonic in Wetherspoons? And, if you are paying up to £9k per year to study, should you, on some subliminal or subconscious level, feel valued by the institution you’ve given your money to?

I don’t know. Obviously, I can’t make a judgement based on a theory I haven’t really done anything with. Maybe I should talk to some students, see if I’m on to anything here, or just stating the bleeding obvious.

Don’t be Scared of the Dork


James Clay wrote a thought-provoking post last week called ‘Show me the Evidence’, in which he talks about how: “when demonstrating the potential of TEL and learning technologies to academics, the issue of evidence of impact often arises. You will have a conversation which focuses on the technology and then the academic or teacher asks for evidence of the impact of that technology.”

James cites fear as a key reason behind this, suggesting that many lecturers don’t have the capabilities to use IT, so lack the basic confidence to use learning technologies. To save face, and because it would be mortifying to have to confess to a lack of skills, they ask for the “evidence”. This then enables them to delay things.

Weirdly, I can’t think of a single occasion when an academic has asked me for empirical evidence, or to cite the research framing my work. I tend to go about things the other way – heading the academics off at the pass because I am the one who is afraid to look like a dunce in front of them.

I delivered a lunchtime session to academics looking at the flipped classroom model last week. The conversation turned to the (still) widely-held belief that anyone under 25 is a techie wizard, while the rest of us can barely use our smartphones. (A different kind of stalling technique, perhaps? It’s always academics who bring this up.) I offered some ramblings about Marc Prensky’s ‘Digital Natives‘ theory being a load of old cock-and-bull, and that Dave White’s ‘Visitors and Residents‘ model was more realistic and less ‘pigeon-holey’. The group liked this as it appealed to their academic mindsets, so I was then able to sneakily show them some tools while they were feeling more at home.

Another thing I sometimes do is suggest that the academic in question might want to try out the method or tool being suggested, and then write a paper about the experience. Again, this appeals because it’s more in tune with how academics tend to work. I think a lot of the fear that James mentions comes from an assumption that learning technologists and academics speak two totally different and incompatible languages. We don’t, but it can certainly be hard to prove it!