reflection

The Open Research Agenda

Here are the slides I’ll be using today for my presentation at the CALRG Annual Conference.  The Open Research Agenda is an international consultation exercise focused on identifying research priorities in open education.

You can read more about the project here:

The Open Research Agenda (2)

The Open Research Agenda (1)


Colonisers and edupunks (&c.): two cultures in OER?

I’ve started writing this post at the Open Education 2015 conference at the Fairmont Hotel in Vancouver because I want to try and capture some thoughts about the evolution of this movement and community.  But I’m finishing it from home after a little bit of time to digest and also after attending OpenUpTRU in Kamloops earlier in the week.

This has been my fifth consecutive Open Education conference and I’ve been privileged enough to hear from a lot of different people from around the world about their use of OER and the impact it has for them.  Over these years there has been a steady move towards raising the game with research into impact and strategising ways to mainstream the adoption of OER; perhaps the clearest example of this is the many presentations that have been devoted to open textbook adoption and efficacy studies at this conference.  This is entirely understandable given the co-ordinated focus in the USA on open textbook adoption as a tangible and measurable goal for advocacy and research.

Great things have been achieved by researchers working with the Open Education Group in this regard.  In terms of controlled studies which attempt to isolate the effects of moving to an open textbook while controlling for other variables (like instructors, etc.) there really isn’t any other game in town that comes close.  And there is a real need for this kind of work, since it is creating the body of evidence that can be used to reject the claim that open resources are of inferior quality.  The endgame here is to support widespread adoption of open textbooks in colleges.  This is something that can be measured and the savings calculated, so it’s a great strategic choice for advocates in the USA.

Now that we have established that this research is great, I feel there are a couple of points to raise.  Firstly, a methodological issue related to the tension between two virtues of open textbooks that we like to put forward: that they are ‘efficacious’ (they ‘cause’ learning) [1] as established by controlled studies; and that they can be freely adapted.  How much adaptation can a text withstand before the efficacy studies – which are based on carefully controlling variables – must be repeated?  Of course, in many cases the textbooks are just adopted wholesale.  They are mapped onto common curricula and so can be used to teach a whole programme.  If someone decides not to tamper with the textbook, isn’t the net result of all this just that the commercial textbook has been replaced by an open textbook?  And if they do ‘tamper’ with it, might they be in danger of making their textbooks less ‘efficacious’?

Maybe that depends on how good they are at teaching.  What I mean by this is that, aside from all the fantastic savings made by students, the course may be taught in exactly the same way as before.  In effect, the open textbook strategy might (when fully realised) leave us with more or less the same educational systems as before (although a lot more affordable for many, and this would undoubtedly be a fine thing).

In effect, this is an attempt to ‘colonise’ an existing system by taking it over from within.  Maybe something more radical follows from this – open textbooks are a great way to introduce students and faculty to OER, and who knows what might happen a few years down the line in a situation where everyone knows about open?

For now, though, nothing much need change except using an open textbook. Except it’s not just an open textbook, because to scale up and keep making the case for efficacy the data gathered must grow, which means more metrics, open learning analytics, and possible homogenization of the learning process.

This was how I captured the thought at the time:

What was less obvious at the conference this year were the voices coming from a different part of the OER movement: the people who emphasize the radical potential of OER.

This end of the spectrum may be hard to clearly define.  They might be edupunks or critical pedagogues.  They might identify with the open source, copyleft, open data or open government movements outside of education.  They might just be libertarians who like the idea of greater personal freedom. But the thing that unites them is that OER is, for them, more about challenging existing practices and forms of knowledge transmission than replicating commercial provisions on open licences.

Because they’re a disparate bunch it’s hard to put a label on this group, even though by the title of this piece I’m referring to them as ‘edupunks (&c.)’.  The important thing is that they are more radical in ambition, and in that sense they occupy the opposite end of the spectrum from the ‘colonisers’.

Here are some illustrative comments shared on Twitter at the time.

There were plenty of others to choose from, as well as plenty of support for what is being achieved with open textbooks.  Robin actually went a step further and wrote a blog post which expressed her frustration with the dominance of open textbooks and outlined the kinds of things that she wants from a conference like Open Education.

  1. Engage learners in contributing to their learning materials so that knowledge becomes a community endeavor rather than a commodity that needs to be made accessible. To that end, let’s stop fetishizing the textbook, which is at best a low-bar pedagogical tool for transmitting information. OER is better than that.
  2. Make open licenses the focus of our advocacy for learners, teachers, scholars, which means explaining how the open license enables us to do more with the ideas that we ourselves as learners, teachers, scholars are generating. It’s not the open textbook, it’s the open license that matters here.
  3. Consider public funding models for open education (OER, open pedagogy, open access). “Philanthropy” is the wrong word for a model in which the public pays itself for what it needs and can generate on its own. And I am not buying that private, for-profit companies – while capable of being good community partners – are the only way we can build a public infrastructure for publishing and organizing and economically supporting open work.
  4. Build a better mission statement for why we work in the open. I took a stab here, but it was just one tiny specific start. I need help explaining this why. We need the why before we can develop the what (who cares about our open tools and apps and platforms? that’s the easy stuff, so let’s do it second). We need the why before we can assess whether or not we achieved success. Will working in the open serve a social justice vision? improve retention and enrollment? increase interdisciplinary collaboration and improve the quality of our scholarship? Yes? Why? How? And what will it look like if our vision succeeds?

So, should the open education movement seek to colonise education, or transform it?  It can be tempting to think that the difference here is really between evolution and revolution.  The colonisers want to evolve formal education in a helpful way while the ‘edupunks (&c.)’ are more interested in empowerment and the freedoms provided by open licensing.

We might also surmise that this is a false dichotomy. Most people are somewhere in the middle, and relatively few people go around calling themselves ‘edupunks’.  In some ways this can be seen as the return of the familiar gratis (‘colonisers’) vs libre (‘edupunk (&c.)’) distinction that has been with the OER movement since the very early days: is the OER movement about freedom, or about things being ‘free’?

C. P. Snow famously wrote about the divergence of science and the humanities in the influential The Two Cultures and the Scientific Revolution.  Snow foresaw that the aspirations, language and standards of validity of academic cultures were moving apart in ways that prevented cross-pollination of ideas and findings.  Thus, we have science professors who have never read Shakespeare, literature professors who cannot explain the laws of thermodynamics, and so on.  Arguably there are more interdisciplinary thinkers than there used to be, but education still tends to siphon learners off into one camp or the other.

Without getting too far into that debate, I think we can use the basic idea of ‘Two Cultures’ as a way of thinking about changes in the OER movement, and being aware of people pulling in different directions.  Everyone is still part of the same conversation at the moment, but it doesn’t feel like it would take much to see new, more niche conferences and journals springing up.  In my view, both of these cultures need each other, because each ameliorates the vulnerabilities of the other and encourages attentiveness to the bigger picture.  So keep talking!


[1] I’m a little uncomfortable personally with the language of efficacy, which risks being scientistic – I’m not sure that isolating a lot of variables and then attributing any difference to the intervention is reliable in education research per se – though it is certainly commonplace and there is of course a need for evidence.

 

ROER4D Workshop, Banff 2015 – Day One

Today and tomorrow I’m in Banff ahead of the OE Global 2015 conference at the invitation of the ROER4D project to take part in their latest research workshop.  I’m interested to learn more about aspects of the project I’ve yet to encounter, and to meet more members of the wider ROER4D network.  (This post presents my impressions of events rather than a verbatim record of what was said.)

ROER4D is a big research project: there are 18 sub-projects, with 86 researchers working across 26 countries and 16 time zones.  For a lot of the people who have travelled to Banff, Alberta in Canada this is the first time they have met face to face, and about half of them are new to me.

For such a diverse group to work together in the project, it has been necessary to develop a shared conceptual framework and collaborative working practices.  The ROER4D Bibliography of research into OER in the Global South is an important part of working towards such a framework. Here’s an infographic which shows the different strands of work across the project:

The range of data collected by the project is diverse, and the chances of combining data in useful ways will be increased by co-ordinating methodology and research practice where possible.  Some aspects will be highly contextual, but where possible the project should strive to identify themes held in common.  Some research questions will be emphasised in some strands more than others, but there is still value in pulling together all the relevant data for the key project clusters.  By mapping what is already known and sharing this throughout the project, everyone should benefit from not having to tackle all the research in isolation.  The different aspects of the project should complement each other.

This workshop provides an opportunity to create new connections and better coordinate across the project.  It will focus on updating the collective understanding of progress made so far; sharing ideas for data analysis and data visualization; and discussing the ROER4D final outputs and their anticipated formats.  There are several projects looking at specific areas of OER impact, and opportunities for working together on similar issues and themes should be identified.

There was some interest in the OER Research Hub survey questions so I made these available to some of the group via http://tinyurl.com/OERRH-surveys.  Anyone is free to re-use our questions under a CC-BY licence – we only ask for attribution back to the OER Research Hub project in return.

After some brainstorming work the main objectives for the workshop were identified.  Most are keen to try and establish agreed methods for data curation and analysis which can be applied consistently across the project.  Another theme that emerged was the idea of making best use of any data collected through a consistent strategy for exploitation and evaluation.

Presentations

Sarah Goodier presented some work on the workpackage which looks at the role of public funding in supporting OER adoption and advocacy in South Africa.  This comprises desk research and interviews with policymakers and officials. A country report is expected early in 2016.  This presentation provoked some collective reflection around the difficulties of building up a holistic picture of change.

Amalia Toledo (Colombia) spoke about OER policy and advocacy in Chile, Colombia and Uruguay.  They are recording the current methods used by governments in the region to promote OER and Open Access.  Data was pulled from public databases as well as through desk research and interviews.  Country reports are being produced (in Spanish), and there is a summary report written for UNESCO as well as a publication in Open Praxis.  The overall findings are:

  • A variety of funding sources for public education are identified
  • There is a lack of clear policy support in Colombia and Chile, but less so in Uruguay
  • Programmes in science, technology and innovation are being developed

There was also a series of ‘World Cafe’ presentations from the impact study grantees to introduce the wider group to their ongoing studies.  I’ll summarise these very briefly here:

SP10.1 Freda Wolfenden (UK) – the OU team will work with teacher educators in West Africa to understand their engagement and response to OER, looking for changes in their understanding of their own practice; their understanding of their own subject; and social order changes within and beyond their institution.  The data will mostly be self-reported by teachers in training and will include attitudinal data as well as self-reported changes in practice.  Some baseline data will also come from more general surveys.

SP10.2 Atieno Adala (Kenya) – this study looks at the impact of OER adoption on expanding access to quality teacher education in sub-Saharan Africa, where there is a lack of well trained teachers.  12 universities across 10 countries will comprise the locations for the study.  Some existing research (e.g. Diallo, 2013, Niang, 2013) suggests that access to an improved curriculum can strengthen institutional capacity.  Evidence will be sought to defend this claim.  The secondary hypothesis will examine whether OER has a positive effect on the quality of the curriculum.  A quantitative analysis of student performance and institutional reporting will be used as evidence.

SP10.3 Michael Glover (South Africa) – a study of 3-5 MOOCs at the University of Cape Town, which is developing a wider MOOC strategy working with FutureLearn.  How does adoption of OER (in a MOOC format) impact upon educator practices?  The study will focus on post-MOOC teaching and research practices.  The definition of open educational practices by Beetham et al (2012), which identifies six indicators for OEP, will be used to measure changes in practice.  Generally, this seemed like a good approach to measuring impact of this sort.  Interviews and classroom observations will also be conducted, and analytics from the MOOC portal collected.  Activity theory will be used as a conceptual framework.

SP10.4 Lauryn Oates (Canada) – Canadian Women for Women in Afghanistan is running the ‘Darakht-e Danesh’ programme which makes educational materials available openly as an online library.  Once registered, users can search for OER by type of resource, subject, language, etc.  The research will focus on whether access to OER improves teacher subject knowledge or pedagogical practice.  Analytics from the website will provide most of the data for the impact study, with surveys as a follow-up.

SP10.5 Yasira Waqar (Pakistan) – investigating impact of OER on secondary and tertiary education in Pakistan.  OER is not popular in Pakistan and possibly associated with ‘devaluation’ of intellectual work (because it is ‘given away’).  Open access is less of an issue in Pakistan as copyright is not particularly well observed or respected.  (It might even be that ‘open’ in Pakistan just means ‘free and online’.)  The main research questions here try to evaluate impact by measuring adoption and identifying benefits to educators and learners, using Fullan’s theory of change as a framework.  Interviews, surveys and classroom observations will be used to collect data about changes in pedagogy, but it will be important to demonstrate the specific  efficacy of OER.

SP10.6 Shironica Karunanayaka (Sri Lanka) – the concept of OER is new to Sri Lanka, and this study will introduce teachers to the concept and ascertain whether or not this leads to changes in teaching practice.  The hypothesis being investigated is whether integrating open materials into teaching changes the perceptions and practices of student teachers and improves the quality of teaching and learning materials.  An action research approach will be taken following a professional development programme for student teachers.

The day concluded with discussion in groups according to the different workpackages across the ROER4D project.  Because of the structure of the World Cafe session, the presenters did not see each other’s presentations.  We had a discussion around ‘impact’ and the difficulty of establishing a causal relationship between adopting openness and the impacts that result.  It was felt that a general theory of impact as ‘change’ could be a practical way of proceeding, with specificity brought out in the subsequent analysis.


The impact study leads wrote their main research hypotheses on post-it notes and then tried to categorise them into three or four main themes.  Out of this exercise came the following rubric of central themes:

  1. IMPACT ON TEACHING PRACTICES
  2. IMPACT ON PERCEPTIONS
  3. IMPACT ON STUDENT LEARNING/ACHIEVEMENT
  4. IMPACT ON QUALITY OF EDUCATIONAL MATERIALS
  5. INSTITUTIONAL IMPACT
  6. IMPACT ON SUBJECT KNOWLEDGE AND CONFIDENCE (OF EDUCATORS)

Of course, there are other possible ways to perform this categorisation (e.g. 6 might be reducible to 1), and it could be further broken down by subject and teaching level.  Wordings of survey and interview questions should be as consistent as possible, and demographic questions should be absolutely consistent across both the impact studies and the ROER4D project as a whole.  The project leaders agreed to work together to harmonise their questions across the key hypothesis areas.

The OER Research Hub question bank might provide some inspiration for the wording of questions asked across the impact studies.

Learning from disruption #oer2015

I’m in Sausalito, California in the shadow of the Golden Gate Bridge for the Hewlett Grantees meeting this week.  Today is the start of proceedings proper, and I’m going to blog some of the presentations and seminars.  First off is Douglas Gayeton of Lexicon of Sustainability, which attempts to explain basic principles of economy and environment to a wide audience.

As a film-maker and photographer, Douglas has often reflected on the way that pictures omit as much as they include.  How can a picture capture the full story?  One way is to take lots of pictures and then turn these into a composite image.  This image can then be used to explain the overall message, perhaps with textual cues added.  We need to recast information in ways that people can understand.  (Compelling graphics can also be especially memorable.)  At the other end of the scale of transparency, he suggests, we might think about Latin versions of the Bible, which made religion obscure and opaque.

An underlying assumption here is that when consumers are better informed they will make better choices.  This seems like a fairly big assumption to me.  (What about, for example, smokers who are fully aware of the dangers of their habit and yet continue to chuff away?  Motivation is also important.  Perhaps it is better to say that being well informed is a necessary but not sufficient condition of possibility for making good choices?  Alternatively, a paternalistic approach could make ‘good’ decisions on behalf of consumers without any need for the consumer to know what is good for them.)

Chains like Whole Foods now require clear labelling of products with GMO ingredients – Gayeton compares this to Martin Luther‘s insistence that the Bible be translated into native languages.  Consumers are also demanding transparency around the use of antibiotics to increase the weight of livestock.  Labelling around when and where fish were caught is expected to follow, as is information about grain origins.

It is argued that making improved information about food available to consumers is sustaining a national ‘locavore’ movement where localism, greater co-operation and seasonal eating replace the industrialisation of food production.  The ‘New Corner Store’ movement encourages consumers to ask for better options in their local stores.  Project Localize brings the message into schools.  Douglas takes comfort from the various grass roots movements and small holdings: personal stories can be effective for communication.

I must admit that how this relates to OER wasn’t that clear to me, and there wasn’t much exploration of the disruptive elements which might be considered transferable.  I suppose that the idea is to change cultures through improved information though I suspect this is actually only half the battle.  There’s definitely a sense in which the message about OER, nuanced as it is, doesn’t always travel that far beyond the open education movement and its advocates.   But is the idea that we emphasize the hard data, analytics and metrics?  There doesn’t appear to be much of this in the materials we have been presented with today.   Should we instead focus on personal stories and narratives (which seem to be the focus here)?  Both?

liveblog: Predicting Giants at #altc #altc2014

Here are my notes from this afternoon’s session at the ALT-C 2014 conference. There were three presentations in this session.


Richard Walker (University of York) – Ground swells and breaking waves: findings from the 2014 UCISA TEL survey on learning technology trends, developments and fads

This national survey started in 2001 and has since expanded out from a VLE focus to all systems which support learning and teaching. The results are typically augmented by case studies which investigate particular themes. In 2014 there were 96 responses from 158 HE institutions that were solicited (61% response). Some of the findings:

  • Top drivers for TEL are to enhance quality, meet student expectations and improve access to learning for off-campus students
  • TEL development can be encouraged by soliciting student feedback
  • Lack of academic staff understanding of TEL has re-emerged as a barrier to TEL development, but time is still the main factor
  • Institutions perceive a lack of specialist support staff as a leading challenge to TEL activity
  • In future, mobile technologies and BYOD will still be seen as significant challenges, but no longer the top challenge as they were last year
  • E-assessment is also a leading concern
  • Moodle (62%) is the most used VLE, with Blackboard (49%) the leading enterprise solution
  • Very small use of other open source or commercial solutions
  • Institutions are increasingly attempting to outsource their VLE solutions
  • Plagiarism and e-assessment tools are the most commonly supported tools
  • Podcasting is down in popularity, being supplanted by streaming services and recorded lectures, etc.
  • Personal response systems / clickers are up in popularity
  • Social networking tools are the leading non-centrally supported technology used by students
  • There is more interest in mobile devices (iOS, Android) but only a handful of institutions are engaging in staff development and pedagogic activity around these
  • Increasing numbers of institutions are making mobile devices available but few support this through policies which would integrate devices into regular practice
  • The longitudinal elements of the study suggest that content is the most important driver of TEL for distance learning
  • Less than a third of institutions have evaluated pedagogical activity around TEL.

 


Simon Kear (Tavistock & Portman NHS Foundation Trust; formerly Goldsmiths College, University of London) – Grasping the nettle: promoting institution-wide take-up of online assessment at Goldsmiths College

When we talk about online assessment we need to encourage clarity around processes and expected results but learners don’t need to know much about the tools involved.  Learners tend to want to avoid hybrid systems and prefer to have alternative ways of having their work submitted and assessed.

There are many different stakeholders involved in assessment, including senior management, heads of department, administrators, and student representatives.

Implementation can be helped through regular learning and teaching committees. It’s important to work with platforms that are stable and that can provide comprehensive support and resources.

Simon concluded by advancing the claim that within 5 years electronic marking of student work will be the norm.  This should lead to accepting a wider variety of multimedia formats for student work as well as more responsive systems of feedback.


Rachel Karenza Challen (Loughborough College) – Catching the wave and taking off: Embracing FELTAG at Loughborough College – moving from recommendations to reality

This presentation focused on cultural change in FE and the results of the FELTAG survey.

  • Students want VLE materials to be of high quality because it makes them feel valued
  • The report recommends that all publicly funded programmes should have a 10% component which should be available online
  • SFA and ILR funding will require colleges to declare the amount of learning available online and this will not include just any interaction which takes place online (like meetings)
  • There is a concern that increasing the amount of learning that takes place online might make it harder to assess what is working
  • Changing curricula year by year makes it harder to prepare adequate e-learning – a stable situation allows for better planning and implementation
  • Ultimately, assessment requires expert input – machine marking and peer assessment can only get you so far
  • In future they intend to release a VLE plugin that others might be able to use
  • Within 5 years the 10% component will be raised to 50% – this means that 50% of provision at college level will be without human guidance and facilitation – is this reflective of the growing influence of the big academic publishers?  Content provided by commercial providers is often not open to being embedded or customised…
  • Ministerial aspirations around online learning may ultimately be politically driven rather than evidence-based.

Thinking Learning Analytics

I’m back in the Ambient Labs again, this time for a workshop on learning analytics for staff here at The Open University.


Challenges for Learning Analytics: Visualisation for Feedback

Denise Whitelock described the SaFeSEA project, which is based around trying to give students meaningful feedback on their activities.  SaFeSEA was a response to high student dropout rates: 33% of new OU students do not submit their first TMA (tutor-marked assignment).  Feedback on submitted writing prompts ‘advice for action’: a self-reflective discourse with a computer.  Visualizations of these interactions can open a discourse between tutor and student.

Students can worry a lot about the feedback they receive.  Computers can offer non-judgmental, objective feedback without any extra tuition costs.  OpenEssayist analyses the structure of an essay; identifies key words and phrases; and picks out key sentences (i.e. those that are most representative of the overall content of the piece).  This analysis can be used to generate visual feedback, some forms of which are more easily understood than others.
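The session didn’t go into the details of how OpenEssayist does this, but the extractive idea itself is simple to illustrate.  Here is a minimal sketch in Python (my own illustration, not the OpenEssayist implementation): score each sentence by the frequency of its content words across the whole essay, then return the highest-scoring sentences as the most representative ones.

```python
# Minimal sketch of extractive key-sentence selection. Illustrative only:
# this is the generic technique, not the actual OpenEssayist pipeline.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it",
             "that", "this", "are", "be", "as", "for", "on", "with"}

def key_sentences(text, n=3):
    # Split into sentences on terminal punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Count content-word frequencies across the whole text.
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower())
                  if w not in STOPWORDS]
        # Sentences built from the essay's dominant vocabulary score highest.
        return sum(freq[t] for t in tokens) / len(tokens) if tokens else 0.0

    return sorted(sentences, key=score, reverse=True)[:n]
```

Sentences selected this way are exactly the kind of raw material a visual feedback layer could highlight back to the student.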

Bertin (1977/81) provides a model for the visualization of data.  Methods can include diagrams which show how well connected different passages are to the whole, or generating different patterns that highlight different types of essay.  These can be integrated with social network analysis and discourse analytics.

Can students understand this kind of feedback?  Might they need special training?  Are these tools that could be used primarily by educators?  Would they also need special training?  In both cases, it’s not entirely clear what kind of training this might be (information literacy?).  Can one tool be used to support writing across all disciplines, or must such a tool be generic?

The Wrangler’s relationship with the Science Faculty

Doug Clow then presented on ‘data wrangling’ in the Science Faculty at The Open University.  IET collects information on student performance and presents this back to faculties in a ‘wrangler report’ which can feed into future course delivery and learning design.

What can faculty do with these reports?  Data is arguably better at highlighting problems or potential problems than it is at solving them.  This process can perhaps get better at identifying key data points or performance indicators, but faculty still need to decide how to act based on this information.  If we move towards the provision of more specific guidance then the role of faculty could arguably be diminished over time.

The relation between learning analytics and learning design in IET work with the faculties

Robin Goodfellow picked up these themes from a module team perspective.  Data can be understood as a way of closing the loop on learning design, creating a virtuous circle between the two.  In practice, there can be significant delays in processing the data in time for it to feed in.  But the information can still be useful to module teams in thinking about the course in terms of:

  • Communication
  • Experience
  • Assessment
  • Information Management
  • Productivity
  • Learning Experience

This can give rise to quite specific expectations about the balance of different activities and learning outcomes.  Different indicators can be identified and combined to standardize metrics for student engagement, communication, etc.
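To make that concrete, here is a minimal sketch of how indicators on very different scales might be standardised and combined into a single engagement score.  The indicator names and numbers are invented for illustration; this is not a description of any actual OU system.

```python
# Minimal sketch: combine heterogeneous indicators into one standardised
# engagement metric. Indicator names and values are hypothetical.
import statistics

def z_scores(values):
    """Rescale raw values to z-scores so indicators measured on
    different scales can be averaged together."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd if sd else 0.0 for v in values]

# One value per student for each hypothetical indicator.
forum_posts = [3, 10, 0, 5]
vle_logins = [20, 45, 5, 30]
assignments_submitted = [2, 3, 1, 3]

standardised = [z_scores(ind) for ind in
                (forum_posts, vle_logins, assignments_submitted)]
# Composite engagement: per-student mean of the standardised indicators.
engagement = [statistics.mean(per_student)
              for per_student in zip(*standardised)]
print(engagement)  # one composite score per student
```

The choice of indicators and weights is exactly where the normative pressure described below creeps in: whatever goes into the composite becomes the de facto definition of ‘engagement’.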

In this way, a normative notion of what a module should be can be said to be emerging.  (This is perhaps a good thing in terms of supporting course designers but may have worrying implications in terms of promoting homogeneity.)

Another selective element arises from the fact that it’s usually only possible to collect data from a selection of indicators:  this means that we might come to place too much emphasis on data we do have instead of thinking about the significance of data that has not been collected.

The key questions:

  • Can underlying learning design models be identified in data?
  • If so, what do these patterns correlate with?
  • How can all this be bundled up to faculty as something useful?
  • Are there implications for general elements of course delivery (e.g. forums, VLE, assessment)?
  • If we only permit certain kinds of data for consideration, does this lead to a kind of psychological shift where these are the only things considered to be ‘real’ or of value?
  • Is there a special kind of interpretative skill that we need in order to make sense of learning analytics?

Learning Design at the OU

Annie Bryan drilled a little deeper into how learning design is being integrated into the picture.  Learning design is now a required element of course design at The Open University.  There are a number of justifications given for this:

  • Quality enhancement
  • Informed decision making
  • Sharing good practice
  • Improving cost-effectiveness
  • Speeding up decision making
  • Improving online pedagogy
  • Explicitly representing pedagogical activity
  • Effective management of student workload

A number of (beta) tools for Learning Design have been produced.  These are focused on module information, learning outcomes, activity planning, and mapping modules and resources.  These are intended to support constructive engagement over the life of the course.  Future developments will also embrace a qualification-level perspective which will map activities against qualification routes.

These tools are intended to help course teams think critically about and discuss the purpose of the tools and resources chosen in the context of the course as a whole and the student learning experience.  A design perspective can also help to identify imbalances in course structure or problematic parts of a course.

Ethical Use of New Technology in Education

Today Beck Pitt and I travelled up to Birmingham in the Midlands of the UK to attend a BERA/Wiley workshop on technologies and ethics in educational research.  I’m mainly here to focus on the redraft of the Ethics Manual for OER Research Hub and to give some time over to thinking about the ethical challenges that can be raised by openness.  The first draft of the ethics manual was primarily to guide us at the start of the project, but now we need to redraft it to reflect some of the issues we have encountered in practice.

Things kicked off with an outline of what BERA does and the suggestion that consciousness about new technologies in education often doesn’t filter down to practitioners.  The rationale behind the seminar seems to be to raise awareness in light of the fact that these issues are especially prevalent at the moment.

This blog post may be in direct contravention of the Chatham convention

We were first told that these meetings would be held under the ‘Chatham House Rule’, which suggests that participants are free to use information received but without identifying speakers or their affiliation… this gets straight into the meat of some of the issues provoked by openness: I’m in the middle of live-blogging as this suggestion is made.  (The session is being filmed but apparently they will edit out anything ‘contentious’.)

Anyway, on to the first speaker:


Jill Jameson, Prof. of Education and Co-Chair of the University of Greenwich
‘Ethical Leadership of Educational Technologies Research: Primum non nocere’

The Latin part of the title of this presentation means ‘first, do no harm’ and is a recognised ethical principle that goes back to antiquity.  Jameson wants to suggest that this is a sound principle for ethical leadership in educational technology.

After outlining a case from medical care Jameson identified a number of features of good practice for involving patients in their own therapy and feeding the whole process back into training and pedagogy.

  • No harm
  • Informed consent
  • Data-informed consultation on treatment
  • Anonymity, confidentiality
  • Sensitivity re: privacy
  • No coercion
  • ‘Worthwhileness’
  • Research-linked: treatment & PG teaching

This was contrasted with a problematic case from the NHS concerning the public release of patient data.  Arguably very few people have given informed consent to this procedure.  But at the same time the potential benefits of aggregating data are being impeded by concerns about sharing of identifiable information and the commercial use of such information.

In educational technology the prevalence of ‘big data’ has raised new possibilities in the field of learning analytics.  This raises the possibility of data-driven decision making and evidence-based practice.  It may also lead to more homogenous forms of data collection as we seek to aggregate data sets over time.

The global expansion of web-enabled data presents many opportunities for innovation in educational technology research.  But there are also concerns and threats:

  • Privacy vs surveillance
  • Commercialisation of research data
  • Techno-centrism
  • Limits of big data
  • Learning analytics acts as a push against anonymity in education
  • Predictive modelling could become deterministic
  • Transparency of performance replaces ‘learning’
  • Audit culture
  • Learning analytics as models, not reality
  • Datasets are not the same as information, and stand in need of analysis and interpretation

Simon Buckingham-Shum has put this in terms of a utopian/dystopian vision of big data.

Leadership is thus needed in ethical research regarding the use of new technologies to develop and refine urgently needed digital research ethics principles and codes of practice.  Students entrust institutions with their data and institutions need to act as caretakers.

I made the point that the principle of ‘do no harm’ is fundamentally incompatible with any leap into the unknown as far as practices are concerned.  Any consistent application of the principle leads to a risk-averse application of the precautionary principle with respect to innovation.  How can this be made compatible with experimental work on learning analytics and sharing of personal data?  Must we reconfigure the principle of ‘do no harm’ so it becomes ‘minimise harm’?  It seems that way from this presentation… but it is worth noting that this is significantly different to the original maxim with which we were presented – different enough, perhaps, to undermine the basic position?


Ralf Klamma, Technical University Aachen
‘Do Mechanical Turks Dream of Big Data?’

Klamma started in earnest by showing us some slides:  Einstein sticking his tongue out; stills from Dr. Strangelove; Alan Turing; a knowledge network (citation) visualization which could be interpreted as a ‘citation cartel’.  The Cold War image of scientists working in isolation behind geopolitical boundaries has been superseded by building of new communities.  This process can be demonstrated through data mining, networking and visualization.

Historical figures like Einstein and Turing are now more like nodes on a network diagram – at least, this is an increasingly natural perspective.  The ‘iron curtain’ around research communities has dropped:

  • Research communities have long tails
  • Many research communities are under public scrutiny (e.g. climate science)
  • Funding cuts may exacerbate the problem
  • Open access threatens the integrity of the academy (?!)

Klamma argues that social network analysis and machine learning can support big data research in education.  He highlights the US Department of Homeland Security, Science and Technology, Cyber Security Division publication The Menlo Report: Ethical Principles Guiding Information and Communication Technology Research as a useful resource for ethical debates in computer science.  In the case of learning analytics there have been many examples of data leaks.

One way to approach the issue of leaks comes from the TellNET project.  By encouraging students to learn about network data and network visualisations they can be put in better control of their own (transparent) data.  Other solutions used in this project:

  • Protection of data platform: fragmentation prevents ‘leaks’
  • Non-identification of participants at workshops
  • Only teachers had access to learning analytics tools
  • Acknowledgement that no systems are 100% secure

In conclusion we were introduced to the concept of ‘datability’ as the ethical use of big data:

  • Clear risk assessment before data collection
  • Ethical guidelines and sharing best practice
  • Transparency and accountability without loss of privacy
  • Academic freedom

Fiona Murphy, Earth and Environmental Science (Wiley Publishing)
‘Getting to grips with research data: a publisher perspective’

From a publisher perspective, there is much interest in the ways that research data is shared.  Publishers are moving towards a model with greater transparency.  There are some services under development that will use DOIs to link datasets and archives to improve the findability of research data.  For instance, the Geoscience Data Journal includes bi-directional linking to original data sets.  Ethical issues from a publisher point of view include how to record citations and accreditation, manage peer review, and maintain security protocols.

Data sharing models may be open, restricted (e.g. dependent on permissions set by data owner) or linked (where the original data is not released but access can be managed centrally).

[Discussion of open licensing was conspicuously absent from this though this is perhaps to be expected from commercial publishers.]


Luciano Floridi, Prof. of Philosophy & Ethics of Information at The University of Oxford
‘Big Data, Small Patterns, and Huge Ethical Issues’

Data can be defined by three Vs: variety, velocity, and volume. (Options for a fourth have been suggested.)  Data has seen a massive explosion since 2009 and the cost of storage is consistently falling.  The only limits to this process are thermodynamics, intelligence and memory.

This process is to some extent restricted by legal and ethical issues.

Epistemological Problems with Big Data: ‘big data’ has been with us for a while and should generally be seen as a set of possibilities (prediction, simulation, decision-making, tailoring, deciding) rather than a problem per se.  The problem is rather that data sets have become so large and complex that they are difficult to process by hand or with standard software.

Ethical Problems with Big Data: the challenge is actually to understand the small patterns that exist within data sets.  This means that many data points are needed as ways into a particular data set so that meaning can become emergent.  Small patterns may be insignificant so working out which patterns have significance is half the battle.  Sometimes significance emerges through the combining of smaller patterns.

Thus small patterns may become significant when correlated.  To further complicate things:  small patterns may be significant through their absence (e.g. the curious incident of the dog in the night-time in Sherlock Holmes).
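A toy example (my own, not Floridi’s) makes the correlation point vivid: two signals can each be statistically invisible on their own and yet fully determine an outcome once combined, as in the classic XOR pattern.

```python
# Toy illustration: individually uninformative signals become decisive
# when combined (XOR). My example, not Floridi's.
import statistics

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

# Each row: (signal_a, signal_b, outcome), where outcome = a XOR b.
rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
a, b, outcome = ([r[i] for r in rows] for i in range(3))

print(pearson(a, outcome))         # 0.0 -- signal_a alone reveals nothing
print(pearson(b, outcome))         # 0.0 -- signal_b alone reveals nothing
combined = [x ^ y for x, y in zip(a, b)]
print(pearson(combined, outcome))  # 1.0 -- the joint pattern is decisive
```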

A specific ethical problem with big data: looking for these small patterns can require thorough and invasive exploration of large data sets.  These procedures may not respect the sensitivity of the subjects of that data.  The ethical problem with big data is sensitive patterns: this includes traditional data-related problems such as privacy, ownership and usability but now also includes the extraction and handling of these ‘patterns’.  The new issues that arise include:

  • Re-purposing of data and consent
  • Treating people not only as means, resources, types, targets, consumers, etc. (deontological)

It isn’t possible for a computer to calculate every variable around the education of an individual, so we must use proxies: indicators of type and frequency in which the uniqueness of the individual is lost in order to make sense of the data.  However, this results in the following:

  1. The profile becomes the profiled
  2. The profile becomes predictable
  3. The predictable becomes exploitable

Floridi advances the claim that the ethical value of data should not be higher than the ethical value of the entity the data describes, but should demand at most the same degree of respect.

Putting all this together: how can privacy be protected while taking advantage of the potential of ‘big data’?  This is an ethical tension between competing principles or ethical demands: the duties to be reconciled are 1) safeguarding individual rights and 2) improving human welfare.

  • This can be understood as a result of polarisation of a moral framework – we focus on the two duties to the individual and society and miss the privacy of groups in the middle
  • Ironically, it is the ‘social group’ level that is served by technology

Five related problems:

  • Can groups hold rights? (it seems so – e.g. national self-determination)
  • If yes, can groups hold a right to privacy?
  • When might a group qualify as a privacy holder? (corporate agency is often like this, isn’t it?)
  • How does group privacy relate to individual privacy?
  • Does respect for individual privacy require respect for the privacy of the group to which the individual belongs? (big data tends to address groups (‘types’) rather than individuals (‘tokens’))

The risks of releasing anonymised large data sets might need some unpacking:  the example given was that during the civil war in Cote d’Ivoire (2010-2011) Orange released a large metadata set which gave away strategic information about the position of groups involved in the conflict even though no individuals were identifiable.  There is a risk of overlooking group interests by focusing on the privacy of the individual.

There are legal or technological instruments which can be employed to mitigate the possibility of the misuse of big data, but there is no one clear solution at present.  Most of the discussion centred upon collective identity and the rights that might be afforded an individual according to groups they have autonomously chosen and those within which they have been categorised.  What happens, for example, if a group can take a legal action but one has to prove membership of that group in order to qualify?  The risk here is that we move into terra incognita when it comes to the preservation of privacy.


Summary of Discussion

Generally speaking, it’s not enough to simply get institutional ethical approval at the start of a project.  Institutional approvals typically focus on protection of individuals rather than groups and research activities can change significantly over the course of a project.

In addition to anonymising data there is a case for making it difficult to reconstruct the entire data set so as to prevent misuse by others.  Increasingly we don’t even know who learners are (e.g. in MOOCs), so it’s hard to reasonably predict the potential outcomes of an intervention.
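One standard family of techniques for this is k-anonymity-style generalisation: coarsen the quasi-identifiers until every combination is shared by at least k records, and suppress the rest.  Here is a minimal sketch (my illustration; no specific tool or dataset from the workshop is being described, and the field names are invented):

```python
# Minimal k-anonymity-style sketch: generalise quasi-identifiers and
# suppress records whose combination appears fewer than k times.
# Illustrative only; field names and data are invented.
from collections import Counter

def generalise(record):
    decade = (record["age"] // 10) * 10
    return {"age_band": f"{decade}-{decade + 9}",
            "area": record["postcode"].split()[0]}  # outward code only

def k_anonymise(records, k=2):
    coarse = [generalise(r) for r in records]
    counts = Counter((c["age_band"], c["area"]) for c in coarse)
    # Keep only records whose quasi-identifier combination is common
    # enough to hide the individual among at least k-1 others.
    return [c for c in coarse if counts[(c["age_band"], c["area"])] >= k]

records = [
    {"age": 23, "postcode": "MK7 6AA"},
    {"age": 27, "postcode": "MK7 6BB"},
    {"age": 34, "postcode": "OX1 2JD"},
]
print(k_anonymise(records))
# The two MK7 twenty-somethings survive; the lone OX1 record is suppressed.
```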

The BERA guidelines for ethical research are up for review by the sounds of it – and a working group is going to be formed to look at this ahead of a possible meeting at the BERA annual conference.

My ORO report

I’ve just had a quick look at my author report from the ORO repository of research published by members of The Open University.  I’m quite surprised to learn that I’ve accrued almost 1,300 downloads of the materials I have archived there!

An up to date account of my ORO analytics can be found at http://oro.open.ac.uk/cgi/stats/report/authors/31087069bed3e4363443db857ead0546/. I suppose a 50% strike rate for open access publication ain’t bad… but there is probably room for improvement…

A Battle for Open?

Martin Weller has a thought-provoking editorial in the latest issue of JiME.  He argues that many of the battles for open education have been won but that the movement now faces the challenge of balancing all kinds of different aims and aspirations.  Is openness about freedom?  Is this an argument about business models or a philosophy of education?

These questions are couched in a wider narrative about finding pathways through times of change (especially rapid change or revolution). We often only see the underlying patterns of historical forces in retrospect:  as the philosopher Hegel tells us, the owl of Minerva ‘flies only at dusk’.  Not only are the issues complex and conflated; there is also the small matter of the education publishing industry that is keen to protect billions of dollars of revenue.  With all this is mind it can be hard to focus on the more prosaic problems we face on a day-to-day basis.

Martin appeals to the same ‘greenwashing’ analogy that Hal Plotkin used when I spoke with him in Washington DC earlier this year.  Nowadays environmental friendliness has penetrated the mainstream so successfully it can be hard to recall the way many corporations and lobbyists fought against a small environmental movement.  Brands are more than happy to present themselves as ‘green’ where before they denied the value of such a thing.  Their redefinition is known as ‘greenwashing’ and shows how a message can be co-opted by organisations which would appear at first to be excluded.  Can we say the same thing about open education as commercial providers become ‘providers of OER’?

Martin does a great job of showing why ‘battle’ might be an appropriate metaphor for what’s going on.  In the case of open access publishing, for instance, incumbent publishers want to preserve profits but open models have allowed new entrants into the market.  These new publication models are immediately thrust into challenges of scale and sustainability that can make it hard to preserve the openness that was the original impetus.

I won’t try to present any more of the argument here – it’s well worth reading in full.  But here’s the conclusion for the gist of it:

Openness has been successful in being accepted as an approach in higher education and widely adopted as standard practice. In this sense it has been victorious, but this can be seen as only the first stage in a longer, ongoing battle around the nature that openness should take. There are now more nuanced and detailed areas to be addressed, like a number of battles on different fronts. After the initial success of openness as a general ethos then the question becomes not ‘do you want to be open?’ but rather ‘what type of openness do you want?’ Determining the nature of openness in a range of contexts so that it retains its key benefits as an approach is the next major focus for the open education movement.

Open approaches complement the ethos of higher education, and also provide the means to produce innovation in a range of its central practices. Such innovation is both necessary and desirable to maintain the role and function of universities as they adapt. It is essential therefore that institutions and practitioners within higher education have ownership of these changes and an appreciation of what openness means. To allow others to dictate what form these open practices should take will be to abdicate responsibility for the future of education itself.