This presentation outlines some key considerations for researchers working in the fields of open education, OER and MOOCs. Key lines of debate in the open education movement are described and critically assessed. A reflective overview of the award-winning OER Research Hub project will be used to frame several key considerations around the methodology and purpose of OER research (including ‘impact’ and ‘open practices’). These will be compared with results from a 2016 OER Hub consultation with key stakeholders in the open education movement on research priorities for the sector. The presentation concludes with thoughts on the potential for openness to act as a disruptive force in higher education.
Today at Open Education 2016 I presented the provisional results of a research consultation exercise we have been doing at OER Hub over the last year. Several people asked for copies of the slides, which are available here and on the OER Hub SlideShare account.
All feedback welcome. You can still take part in the project by completing the form at tinyurl.com/2016ORA.
Here are the slides I’ll be using today for my presentation at the CALRG Annual Conference. The Open Research Agenda is an international consultation exercise focused on identifying research priorities in open education.
You can read more about the project here:
Here is the latest list of books available for review from JiME. If you’re interested in reviewing any of the following then get in touch with me through Twitter or via rob.farrow [at] open.ac.uk to let me know which volume you are interested in and some of your reviewer credentials.
Reviews will be due at the end of February 2016, and should be in the region of 1500-2000 words. You can see examples of previous reviews at http://jime.open.ac.uk/.
If you’re an academic publisher and you’re reading this, you may have noticed we have a lot of books from Routledge in the backlog. If you’d like to have your books considered for review in JiME then please mail them for my attention at the address in the sidebar.
- Curtis J. Bonk, Mimi M. Lee, Thomas C. Reeves & Thomas H. Reynolds (eds.) (2015) MOOCs and Open Education Around the World. Routledge: Abingdon and New York. link
- Charles D. Dziuban, Anthony G. Picciano, Charles R. Graham & Patsy D. Moskal (2016). Conducting Research in Online and Blended Learning Environments. Routledge: Abingdon and New York. link
- Susan Garvis & Narelle Lemon (eds.) (2016). Understanding Digital Technologies and Young Children: An International Perspective. Routledge: Abingdon and New York. link
- Seth Giddings (2014). Gameworlds: Virtual Media and Children’s Everyday Play. Bloomsbury Academic. link
- Lori Diane Hill & Felice J. Levine (eds.) (2015). World Education Research Yearbook 2015. Routledge: Abingdon. link
- Wanda Hurren & Erika Hasebe-Ludt (eds.) (2014). Contemplating Curriculum – Genealogies, Times, Places. Routledge: London and New York. link
- Phyllis Jones (ed.) (2014). Bringing Insider Perspectives into Inclusive Learner Teaching – Potentials and challenges for educational professionals. Routledge: London and New York. link
- David Killick (2015). Developing the Global Student: Higher education in an era of globalization. Routledge: London and New York. link
- Piet A. M. Kommers, Pedro Isaias & Tomayess Issa (2015). Perspectives on Social Media – a yearbook. Routledge: London and New York. link
- Angela McFarlane (2015). Authentic Learning for the Digital Generation – realising the potential of technology in the classroom. Routledge: Abingdon. link
- Jill Porter (ed.) (2015). Understanding and Responding to the Experience of Disability. Routledge: London and New York. link
- Steven Warburton & Stylianos Hatzipanagos (eds.) (2013). Digital Identity and Social Media. IGI Global: Hershey, PA. link
For the morning of Day Two of the workshop the group split into working groups. I floated between the groups and tried to capture a sense of what was being discussed in each.
Qualitative Data Analysis (led by Freda Wolfenden & David Porter)
Freda presented some key things to consider:
- What does data analysis mean in the context of a research inquiry?
- The relation of data analysis and research dissemination
- Alternative forms of data analysis
- Drawing conclusions from data analysis and evaluating evidence
- Findings should be relevant and credible
- Be aware of the relationship between the research rationale and data analysis
- Several approaches to data analysis might be taken within one research project in order to meet different needs or ask different questions
- Four types of analysis: thematic, frequency, discourse, causal
David then spoke about qualitative data analysis and OER studies. He said that qualitative data is really important for understanding how practitioners perceive the influence of OER on their own practice. He then connected this to the idea of communicating research findings through images and stories, using the ‘Arctic Death Spiral’ as an example.
When David joined BC Campus in 2003 he became involved in the Online Program Development Fund (OPDF), a programme funded by the Canadian government to develop online learning materials under open licence. It became apparent that liberal arts, health and science were the subject areas with most interest in this approach. These projects were seen as successful, but the worry of the funders was that there were pockets of activity rather than wholesale adoption. In 2012, the Canadian government tried to further stimulate adoption by funding the production of open textbooks.
To support this work, research was done into the impact, successes and failures of the OPDF project. They began by conducting interviews and reviewing the existing literature to structure the study, identifying gaps in knowledge and any potential methodological barriers. Seven themes (quality, instructional design, technologies, business models, cultures, policies and localisation) were identified. This also afforded an opportunity to reflect on the importance of establishing how well OER was understood in the various institutions.
Ultimately, (Third Generation) Activity Theory (Engeström, Nardi) was selected as a model for understanding the impact of OER as a whole, and as a framework for producing and aligning interview questions. Interviews took about an hour and were subsequently transcribed.
Qualitative data analysis can produce richer understandings of context (Weiss, 1995). Coding was done in NVivo and ATLAS.ti (specialist software for thematic analysis). 203 codes were identified, and their frequency and proximity to each other were analysed. This was still too many for a reasonable analysis, so the codes were then clustered into nine overall themes (some including as many as 46 sub-themes). (It’s worth thinking carefully about the relationship between the questions asked and the themes that emerged, since themes are bound to appear in a transcript if questions are asked about them – RF.)
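The frequency and proximity analysis described above can be sketched in a few lines. This is not the NVivo/ATLAS.ti workflow itself, just a minimal illustration of the underlying idea, with invented code names and segments:

```python
from collections import Counter
from itertools import combinations

# Hypothetical coded transcript segments: each carries the set of
# codes applied to it (code names invented for illustration).
coded_segments = [
    {"quality", "policy"},
    {"quality", "localisation"},
    {"policy", "business_model"},
    {"quality", "policy", "localisation"},
]

# Frequency: how often each code appears across segments.
frequency = Counter(code for segment in coded_segments for code in segment)

# Proximity: how often two codes are applied to the same segment.
cooccurrence = Counter()
for segment in coded_segments:
    for pair in combinations(sorted(segment), 2):
        cooccurrence[pair] += 1
```

Clustering codes into overarching themes is then a matter of grouping codes whose co-occurrence counts are high.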
The outcome was that a deep understanding of OER implementation was still lacking, and new tools and practices would have to be introduced in order to drive open textbook adoption. This influenced the design of the new framework for reviewing and distributing open resources.
Quantitative Data Analysis
When the Q&A session began I headed over to the discussion of quantitative data. I came into a discussion that was already getting into the nitty-gritty, but here are some of the points I took away:
- A consistent approach is needed to categorising resources and how they are used (driven by subject understanding rather than purely by the data)
- There are different ways of categorising OER – according to the reason they were produced; formatting; level of production (individual/institutional)
- In producing a matrix for analysing quantitative data there should be some flexibility so as to account for regional / cultural difference, etc.; different groupings might be appropriate for different regions / countries
- Piloting the method with 2-3 studies from within ROER4D is a good way to evaluate the approach taken
- Stratification of the sample can be achieved by categorising the institutions according to size, level, purpose, etc. This would allow for comparison across and within countries
- One approach could be to use respondent codes (or some other naming convention) consistently across both the qualitative and quantitative analysis processes
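The stratification idea mentioned above can be illustrated briefly. This is a generic sketch of stratified sampling, not ROER4D's actual procedure; the institution records and categories are invented:

```python
import random
from collections import defaultdict

# Hypothetical institution records, categorised by size
# (names and categories invented for illustration).
institutions = [
    {"name": f"inst_{i}", "size": size}
    for i, size in enumerate(["small"] * 6 + ["medium"] * 6 + ["large"] * 6)
]

def stratified_sample(records, stratum_key, n_per_stratum, seed=0):
    """Draw n records from each stratum so strata are comparable."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for record in records:
        strata[record[stratum_key]].append(record)
    return {
        stratum: rng.sample(members, min(n_per_stratum, len(members)))
        for stratum, members in strata.items()
    }

sample = stratified_sample(institutions, "size", 2)
```

Sampling the same number from each stratum is what makes comparisons across and within countries possible.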
After coffee the focus moved to strategies for data curation and communication.
Data Curation and Communication
The commitment to Open Research is the foundational principle that guides ROER4D in its approach to creating and sharing research data, documents and other outputs. The five key aspects of Open Research so far identified are:
- Transparency in the research process
- Open licensing on all project outputs
- Maximising human readability and accessibility through multiple locations and open formats
- Maximising machine readability through online open formats (such as .xml)
- Long-term preservation curation and accessibility of outputs through a multi-platform data management plan
However, openness is a concept with which many researchers will be unfamiliar, and an uncritical approach to openness may result in problems. The commitment to openness is therefore qualified by the two considerations below:
- Open access to resources where openness adds value
- Protection of the dignity and privacy of individuals involved
Information needs to be organised and communicated if it is to have impact and good visibility. This is especially important for ROER4D in raising awareness of OER use in the Global South. Once this is in place then it can be communicated to different audiences in an iterative feedback loop. For ROER4D, there are particular challenges around languages, diverse cultures, and measuring the impact of the project. Effective use of metadata is crucial here – we might even call it a ‘love note to the future‘! URIs / DOIs should be employed to track the use of data by others. It’s also important to make sure that you comply with your own institutional data curation policies.
When thinking about whether to release data on a CC0 licence it’s important to realise that this does not require attribution and there’s nothing to stop anyone working with this data and failing to give any attribution to the original researchers. Effective registration of metadata about the project outputs on repositories will encourage better propagation of the research.
One thing we didn’t have time to discuss was how it was anticipated that people would arrive at the UCT repository in the first place. (Maybe the idea is that the OpenUCT repository has good integration with search engines.)
Both a book summarising the ROER4D project and an interactive research report are anticipated. The latter could include multimedia content summarising different strands of the work and link through to the more detailed reports and the open data itself. Through a modular approach to reporting it should be possible to generate reports with different emphases or geospatial dimensions.
After lunch, Atieno Adala gave a neat summary of things to think about when writing research articles and gave an overview of good practice.
I then presented some work from the OER Research Hub and OER World Map projects. This was an impromptu activity, but a good opportunity to bring the map project to the attention of a network who are potentially really important for uptake among the global community. Here are the slides I used, some of which are taken from the OER 15 presentation last week.
Next Patricia Arinto gave an overview of the different dimensions of impact that the ROER4D Impact Studies will look at. These cover a broad spectrum of potential OER impact, such as educator and student practice, institutional impact, and effects on the quality of resources.
From this point on the meeting broke into smaller working groups and I drifted off to the GO-GN Global Graduate Network meeting of PhD students, some of whom are likely to spend more time at The Open University (UK) which is taking over administration of the network.
Both the ROER4D project and the GO-GN network have tracks in the OER Global conference as we progress through the week in picturesque Banff.
Slides for my presentation tomorrow at the 8th Grace EDEN workshop.
The video recording from my research presentation at OCWC 2014 is now available at http://videolectures.net/ocwc2014_farrow_oer_impact/. It’s not possible to embed here but they have a nice player on their site.
This presentation gives an overview of the OER Research Hub project, some of the methodological and epistemological issues we encounter, and how we propose to ameliorate these through the technologies we use to investigate key questions facing the OER movement.
OER Impact: Collaboration, Evidence, Synthesis
Here are my slides from today’s presentation: feedback welcome as always.
I’m back in the Ambient Labs again, this time for a workshop on learning analytics for staff here at The Open University.
Challenges for Learning Analytics: Visualisation for Feedback
Denise Whitelock described the SaFeSEA project, which is based around trying to give students meaningful feedback on their activities. SaFeSEA was a response to high dropout rates: 33% of new OU students do not submit their first TMA. Feedback on submitted writing prompts ‘advice for action’; a self-reflective discourse with a computer. Visualizations of these interactions can open a discourse between tutor and student.
Students can worry a lot about the feedback they receive. Computers can offer non-judgmental, objective feedback without any extra tuition costs. OpenEssayist analyses the structure of an essay; identifies key words and phrases; and picks out key sentences (i.e. those that are most representative of the overall content of the piece). This analysis can be used to generate visual feedback, some forms of which are more easily understood than others.
Bertin (1977/81) provides a model for the visualization of data. Methods can include diagrams which show how well connected different passages are to the whole, or generating different patterns that highlight different types of essay. These can be integrated with social network analysis & discourse analytics.
Can students understand this kind of feedback? Might they need special training? Are these tools that could be used primarily by educators? Would they also need special training? In both cases, it’s not entirely clear what kind of training this might be (information literacy?). Can one generic tool be used to support writing across all disciplines, or are discipline-specific tools needed?
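Picking out "most representative" sentences, as described above, can be done in many ways. The sketch below is not OpenEssayist's actual algorithm, just one simple interpretation: score each sentence by cosine similarity between its bag-of-words vector and the vector for the whole text:

```python
import re
from collections import Counter
from math import sqrt

def key_sentences(text, n=1):
    """Return the n sentences whose word distribution is closest
    to that of the text as a whole (bag-of-words cosine similarity)."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    doc_vec = Counter(re.findall(r"\w+", text.lower()))

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    scored = [(cosine(Counter(re.findall(r"\w+", s.lower())), doc_vec), s)
              for s in sentences]
    return [s for _, s in sorted(scored, reverse=True)[:n]]
```

A sentence that reuses the text's most frequent vocabulary scores highest, which is a crude but serviceable proxy for "representative of the overall content".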
The Wrangler’s relationship with the Science Faculty
Doug Clow then presented on ‘data wrangling’ in the science faculty at The Open University. IET collects information on student performance and presents this back to faculties in a ‘wrangler report’ able to feed back into future course delivery / learning design.
What can faculty do with these reports? Data is arguably better at highlighting problems or potential problems than it is at solving them. This process can perhaps get better at identifying key data points or performance indicators, but faculty still need to decide how to act based on this information. If we move towards the provision of more specific guidance then the role of faculty could arguably be diminished over time.
The relation between learning analytics and learning design in IET work with the faculties
Robin Goodfellow picked up these themes from a module team perspective. Data can be understood as a way of closing the loop on learning design, creating a virtuous circle between the two. In practice, there can be significant delays in processing the data in time for it to feed in. But the information can still be useful to module teams when thinking about aspects of the course such as:
- Information Management
- Learning Experience
This can give rise to quite specific expectations about the balance of different activities and learning outcomes. Different indicators can be identified and combined to standardize metrics for student engagement, communication, etc.
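One common way to combine heterogeneous indicators into a standardised metric, as described above, is z-scoring each indicator before averaging. This is a generic sketch with invented values, not the OU's actual methodology:

```python
from statistics import mean, stdev

# Hypothetical per-student indicators (values invented for illustration).
indicators = {
    "forum_posts":  [2, 5, 9, 1],
    "vle_logins":   [10, 40, 55, 5],
    "pages_viewed": [30, 80, 120, 20],
}

def z_scores(values):
    """Standardise a list of values to mean 0 and unit variance."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Combine indicators into one engagement score per student by
# averaging their standardised values across indicators.
standardised = {k: z_scores(v) for k, v in indicators.items()}
n_students = len(next(iter(indicators.values())))
engagement = [
    mean(standardised[k][i] for k in standardised)
    for i in range(n_students)
]
```

Standardising first stops any one indicator (e.g. page views, with its larger raw numbers) from dominating the combined metric.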
In this way, a normative notion of what a module should be can be said to be emerging. (This is perhaps a good thing in terms of supporting course designers but may have worrying implications in terms of promoting homogeneity.)
Another selective element arises from the fact that it’s usually only possible to collect data from a selection of indicators: this means that we might come to place too much emphasis on data we do have instead of thinking about the significance of data that has not been collected.
The key questions:
- Can underlying learning design models be identified in data?
- If so, what do these patterns correlate with?
- How can all this be bundled up to faculty as something useful?
- Are there implications for general elements of course delivery (e.g. forums, VLE, assessment)?
- If we only permit certain kinds of data for consideration, does this lead to a kind of psychological shift where these are the only things considered to be ‘real’ or of value?
- Is there a special kind of interpretative skill that we need in order to make sense of learning analytics?
Learning Design at the OU
Annie Bryan drilled a little deeper into the integration of learning design into the picture. Learning design is now a required element of course design at The Open University. There are a number of justifications given for this:
- Quality enhancement
- Informed decision making
- Sharing good practice
- Improving cost-effectiveness
- Speeding up decision making
- Improving online pedagogy
- Explicitly representing pedagogical activity
- Effective management of student workload
A number of (beta) tools for Learning Design have been produced. These are focused on module information; learning outcomes; activity planning, and mapping modules and resources. These are intended to support constructive engagement over the life of the course. Future developments will also embrace a qualification level perspective which will map activities against qualification routes.
These tools are intended to help course teams think critically about, and discuss, the purpose of the tools and resources chosen in the context of the course as a whole and of student learning experiences. A design perspective can also help to identify imbalances in course structure or problematic parts of a course.
Today I’m in the research laboratories in the Jennie Lee Building at The Institute of Educational Technology (aka work) for the ELESIG Guerrilla Research Event. Martin Weller began the session with an outline of the kind of work that goes into preparing unsuccessful research proposals. Using figures from the UK research councils he estimates that the ESRC alone attracts bids (which it does not fund) equivalent to 65 work years every year (2000 failed bids x 12 days per bid). This work is not made public in any way and can be considered lost.
He then went on to discuss some different digital scholarship initiatives – a meta educational technology journal based on aggregation of open articles; MOOC research by Katy Jordan; an app built at the OU; DS106 Digital Storytelling – which have elements of what is being termed ‘guerrilla research’. Its characteristics include:
- No permissions (open access, open licensing, open data)
- Quick set up
- No business case required
- Allows for interdisciplinarity unconstrained by tradition
- Using free tools
- Building open scholarship identity
- Kickstarter / enterprise funding
Such initiatives can lead to more traditional forms of funding and publication, and the two certainly co-exist. But these kinds of activities are not always institutionally recognised, giving rise to a number of issues:
- Intellectual property – will someone steal my work?
- Can I get institutional recognition?
- Do I need technical skills?
- What is the right balance between traditional and digital scholarship?
- Ethical concerns about the use of open data – can consent be assumed? Even when dealing with personal or intimate information?
Tony Hirst then took the floor to speak about his understanding of ‘guerrilla research’. He divided his talk into the means, opportunity and motive for this kind of work.
First he spoke about the use of the CommentPress WordPress theme to disaggregate the Digital Britain report so that people could comment online. The idea came out of a tweet but within 3 months was being funded by the Cabinet Office.
In 2009 Tony produced a map of MP expense claims which was used by The Guardian. This was produced quickly using open technologies and led to further maps and other ways of exploring data stories. Google Ngrams is a tool that was used to check for anachronistic use of language in Downton Abbey.
In addition to pulling together recipes using open tools and open data, one can use innovative coding schemes. Mat Morrison (@mediaczar) used this to produce an accession plot graph of the London riots. Tony has reused this approach – so another way of approaching ‘guerrilla research’ is to try to re-appropriate existing tools.
Another approach is to use data to drive a macroscopic understanding of patterns, producing maps or other visualizations from very large data sets to help sensemaking and interpretation. One important consideration here is ‘glanceability‘ – whether the information has been filtered and presented so that the most important data are highlighted and the visual representation conveys meaning successfully to the viewer.
Data.gov.uk is a good source of data: the UK government publishes large amounts of information on open licence. Access to data sets like this can save a lot of research money, and combining different data sets can provide unexpected results. Publishing data sets openly supports this method and also allows others to look for patterns that original researchers might have missed.
Google supports custom searches which can concentrate on results from a specific domain (or domains) and this can support more targeted searches for data. Freedom of information requests can also be a good source of data; publicly funded bodies like universities, hospitals and local government all make data available in this way (though there will be exceptions). FOI requests can be made through whatdotheyknow.com. Google spreadsheets support quick tools for exploring data such as sliding filters and graphs.
OpenRefine is another tool which Tony has found useful. It can cluster free-text responses in data sets algorithmically and so replace some manual coding of transcripts. The tool can also be used to compare data with linked data on the web.
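The clustering OpenRefine does can be illustrated with a simplified version of its ‘fingerprint’ key-collision method: normalise each response to a canonical key, then group responses that share a key. This sketch omits much of what OpenRefine actually does (n-gram and phonetic methods, for instance), and the sample responses are invented:

```python
import re
from collections import defaultdict

def fingerprint(value):
    """Simplified fingerprint key: lowercase, strip punctuation,
    tokenise, deduplicate, sort, and rejoin."""
    tokens = re.split(r"\s+", re.sub(r"[^\w\s]", "", value.strip().lower()))
    return " ".join(sorted(set(t for t in tokens if t)))

def cluster(responses):
    """Group free-text responses that collide on the same fingerprint."""
    groups = defaultdict(list)
    for response in responses:
        groups[fingerprint(response)].append(response)
    return [group for group in groups.values() if len(group) > 1]

responses = ["Open University", "open university.", "University, Open", "OER Hub"]
clusters = cluster(responses)
```

Because the key ignores case, punctuation and word order, near-duplicate spellings of the same answer collapse into one cluster, which is what saves the manual coding effort.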
Tony concluded his presentation with a comparison of ‘guerrilla research’ and ‘recreational research’. Research can be more creative and playful and approaching it in this way can lead to experimental and exploratory forms of research. However, assessing the impact of this kind of work might be problematic. Furthermore, going through the process of trying to get funding for research like this can impede the playfulness of the endeavour.
A workflow for getting started with this kind of thing:
- Download openly available data: use open data, hashtags, domain searches, RSS
- DBpedia can be used to extract information from Wikipedia
- Clean data using OpenRefine
- Upload to Google Fusion Tables
- From here data can be mapped, filtered and graphed
- Use Gephi for data visualization and creating interactive widgets
- StackOverflow can help with coding/programming
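The "download, clean, aggregate" start of the workflow above can be sketched with the standard library alone. The data set here is an invented stand-in for something downloaded from an open source such as data.gov.uk:

```python
import csv
import io
from collections import Counter

# Stand-in for a downloaded open data set (rows invented for
# illustration); in practice this would be fetched from an open
# data portal such as data.gov.uk.
raw = """region,projects
North ,3
north,2
South,4
 SOUTH ,1
"""

# Minimal cleaning: trim whitespace and normalise case, then
# aggregate – the kind of tidying OpenRefine automates.
totals = Counter()
for row in csv.DictReader(io.StringIO(raw)):
    region = row["region"].strip().title()
    totals[region] += int(row["projects"])
```

Once messy variants ("North ", "north") have been collapsed, the totals are ready to map, filter or graph in tools like Google Fusion Tables or Gephi.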
(I have a fuller list of data visualization tools on the Resources page of OER Impact Map.)