This morning I’m attending a talk given by Sharon Slade about the ethical dimensions of learning analytics (LA), part of a larger workshop devoted to LA at The Open University’s library on the Walton Hall campus.
I was a bit late arriving from a previous meeting, but Sharon’s slides are pretty clear, so I’m just going to crack on with trying to capture the essence of the talk. Here are the guidelines currently influencing thinking in this area (with my comments in parentheses).
- LA as a moral practice (I guess people need to be reminded of this!)
- OU has a responsibility to use data for student benefit
- Students are not wholly defined by their data (Ergo partially defined by data?)
- Purpose and boundaries should be well defined and visible (transparency)
- Students should have the facility to update their own data
- Students as active agents
- Modelling approaches and interventions should be free from bias (Is this possible? What kind of bias should be avoided?)
- Adoption of LA requires broad acceptance of its values and benefits, and the development of appropriate skills (Not sure I fully grasped this one)
Sharon was mainly outlining the results of some qualitative research done with OU staff and students. The most emotive discussion was around whether or not this use of student data was appropriate at all – many students expressed dismay that their data was being looked at, much less used to potentially determine their service provision and educational future (progress, funding, etc.). Many felt that LA itself is a rather intrusive approach which may not be justified by the benevolent intention to improve student support.
While, as at most universities, there are clear policies in place around data protection, there were concerns about the use of raw data and of information derived from data patterns. There was also considerable concern about the ability of analysts to adequately understand the data they were looking at and to treat it responsibly.
Students want to have a 1:1 relationship with tutors and feel that LA can undermine this, although the OU faces particular challenges around delivering distance education at scale.
The most dominant issue surrounded the idea of being able to opt out of having their data collected without this having an impact on their future studies or how they are treated by the university. The default position is one of ‘informed consent’, where students are currently expected to opt out if they wish. The policy will be explained to students at the point of registration, as well as through case studies and guidance for staff and students.
Another round of consultation is expected on whether students should be offered an opt-out or an opt-in model.
There is an underlying paternalistic attitude here – the university believes that it knows best with regard to the interests of the students – though it seems to me that this potentially runs against the idea of a student-centred approach.
Some further thoughts/comments:
- Someone like Simon Buckingham-Shum would argue that the LA *is* the pedagogy – this is not the view being taken by the OU, but we can perhaps identify a potential ‘mission creep’
- Can we be sure that the analyses we create through LA are reliable? How?
- The more data we collect and the more open it is, the more effective LA can be – and the greater the ethical complexity
- New legislation requires that everyone will have the right to opt out, but it’s not clear that this will necessarily apply to education
- Commercialisation of data has already taken place in some initiatives
Doug Clow then took the floor and spoke about other LA initiatives. He noted that the drivers behind interest in LA are very diverse (research, retention, support, business intelligence, etc.). Some projects of note include:
- SoLAR – Society for Learning Analytics Research
- LAK Conference – Learning Analytics and Knowledge Conference
- LASI Workshops – Learning Analytics Summer Institute
- Journal of Learning Analytics
- LACE – Learning Analytics Community Exchange
Many projects are attempting to produce the correct kind of ‘dashboard’ for LA. Another theme is around the extent to which LA initiatives can be scaled up to form a larger infrastructure. There is a risk that with LA we focus only on the data we have access to and everything follows from there – Doug used the metaphor of darkness/illumination/blinding light. Doug also noted that machine learning stands to benefit greatly from LA data, and LA generally should be understood within the context of trends towards informal and blended learning as well as MOOC provision.
Overall, though, it seems that evidence for the effectiveness of LA is still pretty thin, with very few rigorous evaluations. This could reflect the age of the field (a lot of work has yet to be published) or, alternatively, suggest that LA isn’t really as effective as some hope. For instance, it could be that any intervention is effective regardless of whether it has some foundation in the data that has been collected (nb. the ‘Hawthorne effect’).