
Workshop Notes: #Ethics and #LearningAnalytics

This morning I’m attending a talk given by Sharon Slade about the ethical dimensions of learning analytics (LA), part of a larger workshop devoted to LA at The Open University’s library on the Walton Hall campus.

I was a bit late from a previous meeting but Sharon's slides are pretty clear so I'm just going to crack on with trying to capture the essence of the talk. Here are the guidelines currently influencing thinking in this area (with my comments in parentheses).

  1. LA as a moral practice (I guess people need to be reminded of this!)
  2. OU has a responsibility to use data for student benefit
  3. Students are not wholly defined by their data (Ergo partially defined by data?)
  4. Purpose and boundaries should be well defined and visible (transparency)
  5. Students should have the facility to update their own data
  6. Students as active agents
  7. Modelling approaches and interventions should be free from bias (Is this possible? What kind of bias should be avoided?)
  8. Adoption of LA requires broad acceptance of the values and benefits, and the development of appropriate skills (Not sure I fully grasped this one)

Sharon was mainly outlining the results of some qualitative research done with OU staff and students. The most emotive discussion was around whether or not this use of student data was appropriate at all – many students expressed dismay that their data was being looked at, much less used to potentially determine their service provision and educational future (progress, funding, etc.). Many felt that LA itself is a rather intrusive approach which may not be justified by the benevolent intention to improve student support.

While there are clear policies in place around data protection (like most universities) there were concerns about the use of raw data and information derived from data patterns. There was lots of concern about the ability of the analysts to adequately understand the data they were looking at and treat it responsibly.

Students want to have a 1:1 relationship with tutors, and feel that LA can undermine this, although at the OU there are particular challenges around distance education at scale.

The most dominant issue surrounded the idea of being able to opt out of having their data collected without this having an impact on their future studies or how they are treated by the university. The default position is one of ‘informed consent’, where students are currently expected to opt out if they wish. The policy will be explained to students at the point of registration, as well as through case studies and guidance for staff and students.

Another round of consultation is expected around the issue of whether students should have an opt-out or opt-in model.

There is an underlying paternalistic attitude here – the university believes that it knows best with regard to the interests of the students – though it seems to me that this potentially runs against the idea of a student-centred approach.

Some further thoughts/comments:

  • Someone like Simon Buckingham-Shum will argue that the LA *is* the pedagogy – this is not the view being taken by the OU but we can perhaps identify a potential ‘mission creep’
  • Can we be sure that the analyses we create through LA are reliable?  How?
  • The more data we collect and the more open it is then the more effective LA can be – and the greater the ethical complexity
  • New legislation requires that everyone will have the right to opt-out but it’s not clear that this will necessarily apply to education
  • Commercialisation of data has already taken place in some initiatives

Doug Clow then took the floor and spoke about other LA initiatives.  He noted that the drivers behind interest in LA are very diverse (research, retention, support, business intelligence, etc).  Some projects of note include:

Many projects are attempting to produce the correct kind of ‘dashboard’ for LA.  Another theme is around the extent to which LA initiatives can be scaled up to form a larger infrastructure.  There is a risk that with LA we focus only on the data we have access to and everything follows from there – Doug used the metaphor of darkness/illumination/blinding light. Doug also noted that machine learning stands to benefit greatly from LA data, and LA generally should be understood within the context of trends towards informal and blended learning as well as MOOC provision.

Overall, though, it seems that evidence for the effectiveness of LA is still pretty thin with very few rigorous evaluations. This could reflect the age of the field (a lot of work has yet to be published) or alternatively the idea that LA isn’t really as effective as some hope. For instance, it could be that any intervention is effective regardless of whether it has some foundation in data that has been collected (nb. ‘Hawthorne effect’).

Ethical Use of New Technology in Education

Today Beck Pitt and I travelled up to Birmingham in the midlands of the UK to attend a BERA/Wiley workshop on technologies and ethics in educational research. I’m mainly here to focus on the redraft of the Ethics Manual for OER Research Hub and to give some time over to thinking about the ethical challenges that can be raised by openness. The first draft of the ethics manual was primarily to guide us at the start of the project but now we need to redraft it to reflect some of the issues we have encountered in practice.

Things kicked off with an outline of what BERA does and the suggestion that consciousness about new technologies in education often doesn’t filter down to practitioners.  The rationale behind the seminar seems to be to raise awareness in light of the fact that these issues are especially prevalent at the moment.

This blog post may be in direct contravention of the Chatham convention

We were first told that these meetings would be taken under the ‘Chatham House Rule’, which suggests that participants are free to use information received but without identifying speakers or their affiliation… this takes us straight into the meat of some of the issues provoked by openness: I’m in the middle of live-blogging this as the suggestion is made. (The session is being filmed but apparently they will edit out anything ‘contentious’.)

Anyway, on to the first speaker:


Jill Jameson, Prof. of Education and Co-Chair of the University of Greenwich
‘Ethical Leadership of Educational Technologies Research: Primum non nocere’

The Latin part of the title of this presentation means ‘do no harm’ and is a recognised ethical principle that goes back to antiquity. Jameson wants to suggest that this is a sound principle for ethical leadership in educational technology.

After outlining a case from medical care Jameson identified a number of features of good practice for involving patients in their own therapy and feeding the whole process back into training and pedagogy.

  • No harm
  • Informed consent
  • Data-informed consultation on treatment
  • Anonymity, confidentiality
  • Sensitivity re: privacy
  • No coercion
  • ‘Worthwhileness’
  • Research-linked: treatment & PG teaching

This was contrasted with a problematic case from the NHS concerning the public release of patient data.  Arguably very few people have given informed consent to this procedure.  But at the same time the potential benefits of aggregating data are being impeded by concerns about sharing of identifiable information and the commercial use of such information.

In educational technology the prevalence of ‘big data’ has raised new possibilities in the field of learning analytics.  This raises the possibility of data-driven decision making and evidence-based practice.  It may also lead to more homogenous forms of data collection as we seek to aggregate data sets over time.

The global expansion of web-enabled data presents many opportunities for innovation in educational technology research.  But there are also concerns and threats:

  • Privacy vs surveillance
  • Commercialisation of research data
  • Techno-centrism
  • Limits of big data
  • Learning analytics acts as a push against anonymity in education
  • Predictive modelling could become deterministic
  • Transparency of performance replaces ‘learning’
  • Audit culture
  • Learning analytics as models, not reality
  • Datasets ≠ information: they stand in need of analysis and interpretation

Simon Buckingham-Shum has put this in terms of a utopian/dystopian vision of big data:

Leadership is thus needed in ethical research regarding the use of new technologies to develop and refine urgently needed digital research ethics principles and codes of practice.  Students entrust institutions with their data and institutions need to act as caretakers.

I made the point that the principle of ‘do no harm’ is fundamentally incompatible with any leap into the unknown as far as practices are concerned. Any consistent application of the principle leads to a risk-averse application of the precautionary principle with respect to innovation. How can this be made compatible with experimental work on learning analytics and sharing of personal data? Must we reconfigure the principle of ‘do no harm’ so that it becomes ‘minimise harm’? It seems that way from this presentation… but it is worth noting that this is significantly different to the original maxim with which we were presented… different enough to undermine the basic position?


Ralf Klamma, Technical University Aachen
‘Do Mechanical Turks Dream of Big Data?’

Klamma started in earnest by showing us some slides:  Einstein sticking his tongue out; stills from Dr. Strangelove; Alan Turing; a knowledge network (citation) visualization which could be interpreted as a ‘citation cartel’.  The Cold War image of scientists working in isolation behind geopolitical boundaries has been superseded by building of new communities.  This process can be demonstrated through data mining, networking and visualization.

Historical figures such as Einstein and Turing are now more like nodes on a network diagram – at least, this is an increasingly natural perspective. The ‘iron curtain’ around research communities has dropped:

  • Research communities have long tails
  • Many research communities are under public scrutiny (e.g. climate science)
  • Funding cuts may exacerbate the problem
  • Open access threatens the integrity of the academy (?!)

Klamma argues that social network analysis and machine learning can support big data research in education.  He highlights the US Department of Homeland Security, Science and Technology, Cyber Security Division publication The Menlo Report: Ethical Principles Guiding Information and Communication Technology Research as a useful resource for the ethical debates in computer science.  In the case of learning analytics there have been many examples of data leaks:

One way to approach the issue of leaks comes from the TellNET project.  By encouraging students to learn about network data and network visualisations they can be put in better control of their own (transparent) data.  Other solutions used in this project:

  • Protection of data platform: fragmentation prevents ‘leaks’
  • Non-identification of participants at workshops
  • Only teachers had access to learning analytics tools
  • Acknowledgement that no systems are 100% secure

In conclusion we were introduced to the concept of ‘datability’ as the ethical use of big data:

  • Clear risk assessment before data collection
  • Ethical guidelines and sharing best practice
  • Transparency and accountability without loss of privacy
  • Academic freedom

Fiona Murphy, Earth and Environmental Science (Wiley Publishing)
‘Getting to grips with research data: a publisher perspective’

From a publisher perspective, there is much interest in the ways that research data is shared. They are moving towards a model with greater transparency. There are some services under development that will use DOIs to link datasets and archives to improve the findability of research data. For instance, the Geoscience Data Journal includes bi-directional linking to original data sets. Ethical issues from a publisher point of view include how to record citations and accreditation, manage peer review, and maintain security protocols.

Data sharing models may be open, restricted (e.g. dependent on permissions set by data owner) or linked (where the original data is not released but access can be managed centrally).

[Discussion of open licensing was conspicuously absent from this though this is perhaps to be expected from commercial publishers.]


Luciano Floridi, Prof. of Philosophy & Ethics of Information at The University of Oxford
‘Big Data, Small Patterns, and Huge Ethical Issues’

Data can be defined by three Vs: variety, velocity, and volume. (Options for a fourth have been suggested.)  Data has seen a massive explosion since 2009 and the cost of storage is consistently falling.  The only limits to this process are thermodynamics, intelligence and memory.

This process is to some extent restricted by legal and ethical issues.

Epistemological Problems with Big Data: ‘big data’ has been with us for a while and generally should be seen as a set of possibilities (prediction, simulation, decision-making, tailoring, deciding) rather than a problem per se. The problem is rather that data sets have become so large and complex that they are difficult to process by hand or with standard software.

Ethical Problems with Big Data: the challenge is actually to understand the small patterns that exist within data sets.  This means that many data points are needed as ways into a particular data set so that meaning can become emergent.  Small patterns may be insignificant so working out which patterns have significance is half the battle.  Sometimes significance emerges through the combining of smaller patterns.

Thus small patterns may become significant when correlated.  To further complicate things:  small patterns may be significant through their absence (e.g. the curious incident of the dog in the night-time in Sherlock Holmes).

A specific ethical problem with big data: looking for these small patterns can require thorough and invasive exploration of large data sets.  These procedures may not respect the sensitivity of the subjects of that data.  The ethical problem with big data is sensitive patterns: this includes traditional data-related problems such as privacy, ownership and usability but now also includes the extraction and handling of these ‘patterns’.  The new issues that arise include:

  • Re-purposing of data and consent
  • Treating people as ends, not only as means, resources, types, targets, consumers, etc. (deontological)

It isn’t possible for a computer to calculate every variable around the education of an individual so we must use proxies: indicators of type and frequency which sacrifice the uniqueness of the individual in order to make sense of the data. However this results in the following:

  1. The profile becomes the profiled
  2. The profile becomes predictable
  3. The predictable becomes exploitable

Floridi advances the claim that the ethical value of data should not be higher than the ethical value of the entity the data describes, but should demand at most the same degree of respect.

Putting all this together: how can privacy be protected while taking advantage of the potential of ‘big data’? This is an ethical tension between competing principles or ethical demands: the duties to be reconciled are 1) safeguarding individual rights and 2) improving human welfare.

  • This can be understood as a result of polarisation of a moral framework – we focus on the two duties to the individual and society and miss the privacy of groups in the middle
  • Ironically, it is the ‘social group’ level that is served by technology

Five related problems:

  • Can groups hold rights? (it seems so – e.g. national self-determination)
  • If yes, can groups hold a right to privacy?
  • When might a group qualify as a privacy holder? (corporate agency is often like this, isn’t it?)
  • How does group privacy relate to individual privacy?
  • Does respect for individual privacy require respect for the privacy of the group to which the individual belongs? (big data tends to address groups (‘types’) rather than individuals (‘tokens’))

The risks of releasing anonymised large data sets might need some unpacking:  the example given was that during the civil war in Cote d’Ivoire (2010-2011) Orange released a large metadata set which gave away strategic information about the position of groups involved in the conflict even though no individuals were identifiable.  There is a risk of overlooking group interests by focusing on the privacy of the individual.

There are legal or technological instruments which can be employed to mitigate the possibility of the misuse of big data, but there is no one clear solution at present. Most of the discussion centred upon collective identity and the rights that might be afforded to an individual according to groups they have autonomously chosen and those within which they have been categorised. What happens, for example, if a group can take a legal action but one has to prove membership of that group in order to qualify? The risk here is that we move into terra incognita when it comes to the preservation of privacy.


Summary of Discussion

Generally speaking, it’s not enough to simply get institutional ethical approval at the start of a project.  Institutional approvals typically focus on protection of individuals rather than groups and research activities can change significantly over the course of a project.

In addition to anonymising data there is a case for making it difficult to reconstruct the entire data set so as to stop others from misusing it. Increasingly we don’t even know who learners are (e.g. in MOOCs) so it’s hard to reasonably predict the potential outcomes of an intervention.

The BERA guidelines for ethical research are up for review by the sounds of it – and a working group is going to be formed to look at this ahead of a possible meeting at the BERA annual conference.

Sociology & Big Data

Can sociological researchers make use of big data?  Should they? There’s something equivocal going on between the allure of massive data sets and the temptation to try and explain everything in terms of that data…

New Sociological Approach to Big Data » Sociology Lens.

Data Maps: The Next Level

An interesting post by Young Hahn on map hacking has given me some food for thought with respect to the redesign of the OER Evidence Hub.  This article led me to another, Take Control of Your Maps by Paul Smith.

If I was a better programmer I could probably put some of the ideas to work right away, but as I am not I’ll have to be content with trying to draw some general principles out instead:

  • The core web map UI paradigm is a continuous, pannable, zoomable surface
  • Open mapping tools are better because they allow for more creative use of data through hacks
  • Maps can be more than just complements to narrative:  they can provide narrative structure too
  • Pins highlight locations but also anonymise them: use illustrations instead, as in the Sherlock Holmes demo
  • Google Maps is good at what it does, but restrictive in some design respects and any manipulation is likely to play havoc with the APIs
  • “Users interact with your mapping application primarily through a JavaScript or Flash library that listens to user events, requests tiles from the map server, assembles tiles in the viewport, and draws additional elements on the map, such as popups, markers, and vector shapes.”
  • If you can style the geospatial elements then the user experience can be customised to a much greater degree
  • KML is an XML-based format for describing geospatial features
  • Most mapping services make use of some JavaScript (see the sketch below)

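To make the tile-and-overlay pattern described in the quote above concrete, here is a minimal sketch using Leaflet (a library not mentioned in either post, chosen purely because it follows the same pattern). The tile URL, coordinates and popup text are illustrative placeholders rather than anything taken from the articles.

```typescript
import * as L from 'leaflet';

// A continuous, pannable, zoomable map surface inside <div id="map">,
// centred on some illustrative coordinates.
const map = L.map('map').setView([52.024, -0.709], 13);

// The library requests raster tiles from a tile server and assembles them
// in the viewport; the {z}/{x}/{y} template is filled in as the user pans and zooms.
L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '© OpenStreetMap contributors',
}).addTo(map);

// Additional elements are drawn on top of the tiles: markers, popups, vector shapes.
L.marker([52.024, -0.709])
  .addTo(map)
  .bindPopup('An example marker with a popup');
```
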
It’s worth considering making use of the tools provided by OGR to translate and filter data.  And here are some mapping libraries to check out:

Data Visualization for Humanities

Here are my notes from today’s OU seminar which offers a survey of major online humanities datasets and some of the tools available for visualising them.

Speakers:

  • Dr Elton Barker (Classical Studies Department, The Open University, and Alexander von Humboldt Foundation, Freie Universität Berlin)
  • Ms Mia Ridge (PhD student, The Open University, and Chair of the Museum Computer Group UK)

Examples will be drawn from externally-funded projects such as Hestia, Google Ancient Places and Pelagios.

Amplification is important for data visualization: it should enhance our understanding of the information presented. John Snow’s map of the 1854 cholera outbreak showed that cases were clustered around a particular water pump. Similarly, Florence Nightingale produced a diagram showing causes of mortality in the Crimean war (1857) and Charles Minard (1869) produced a figurative map of French losses during the Russian campaign. In each of these cases, a lot of information is packed into an accessible visual representation.

Harry Beck’s original map of the London Underground (1931) moves on from simple geographical representation and uses a circuit-diagram inspired approach with clean horizontal and vertical lines, stripping out extraneous information. It is an eminently usable representation of the Tube system.

Visualization is typically defined by the type of qualitative or quantitative data that is available. Visual representations can be static or interactive, but both should help people to find the most important information or insights. It’s worth thinking about how the different variables will be selected and shown.

Types of visualizations (ManyEyes)

  • Bubble Chart
  • Pie Chart
  • Treemap
  • Line/stack graph
  • Word tree
  • Maps
  • Phrase net
  • Matrix chart
  • Scatterplots

Another (new) form is sentiment analysis.  Data visualisation can help to check data by highlighting outliers or unexpected information visually.  In the humanities, there are a number of textual analysis tools available, including entity recognition and nGrams.
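
To give a rough sense of what an n-gram analysis involves (the specific tools used in the seminar weren’t named in my notes), here is a minimal sketch that counts word bigrams in a passage of text; the tokenisation is deliberately naive and the example sentence is made up.

```typescript
// Naive n-gram counter: split a text into lowercase word tokens and count
// how often each run of n consecutive tokens occurs.
function countNGrams(text: string, n: number): Map<string, number> {
  const tokens = text.toLowerCase().match(/[a-z']+/g) ?? [];
  const counts = new Map<string, number>();
  for (let i = 0; i + n <= tokens.length; i++) {
    const gram = tokens.slice(i, i + n).join(' ');
    counts.set(gram, (counts.get(gram) ?? 0) + 1);
  }
  return counts;
}

// Example: the five most frequent bigrams in a made-up sentence.
const bigrams = countNGrams(
  'the sea and the land and the sea again shape the narrative of the sea',
  2
);
console.log([...bigrams.entries()].sort((a, b) => b[1] - a[1]).slice(0, 5));
```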

There are also many flawed visualizations, both in terms of accuracy and depth.  They can also be responsible for obscuring problems with datasets, their consistency and how they were collected.  They may also over-simplify complex information, or force categories on data in order to render them capable of visualization.  Imposing the binary logic of computers onto research data can also lead to nonsensical results.  It’s also worth remembering the old adage that correlation does not imply causation.

Best practice in data visualization:

  • How effectively does the visualization support cognition and/or communication?
  • Spatial arrangements should make sense of variables
  • Be aware of the audience
  • Are you telling a story or letting people explore?
  • 80% of data visualization is cleaning of datasets. One needs to think about how to organise data.

GIS Hestia

This project explores the cultural geography of the ancient historian Herodotus, extracting placename data for visualization: places, settlements and natural features.  This leads to some complications (e.g. a single dot for a sea) and computers generally don’t deal with uncertainty or ‘fuzzy’ information all that well.  Network maps were used to show the connections between territories, showing, for example, that Greece is not the centre of the narrative, even though Herodotus was Greek.
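
My notes don’t capture how Hestia actually builds its network maps, but the general idea of turning extracted placenames into a network can be sketched as follows: treat each chunk of text (e.g. a chapter) as a set of places and add a weighted edge whenever two places appear in the same chunk. All of the names and data below are hypothetical.

```typescript
// Build a simple co-occurrence network from placenames extracted per text unit
// (e.g. per chapter). Each pair of places mentioned in the same unit gets an
// edge whose weight is the number of units they share.
type Edge = { source: string; target: string; weight: number };

function coOccurrenceEdges(unitsOfPlaces: string[][]): Edge[] {
  const weights = new Map<string, number>();
  for (const places of unitsOfPlaces) {
    const unique = [...new Set(places)].sort();
    for (let i = 0; i < unique.length; i++) {
      for (let j = i + 1; j < unique.length; j++) {
        const key = `${unique[i]}|${unique[j]}`;
        weights.set(key, (weights.get(key) ?? 0) + 1);
      }
    }
  }
  return [...weights.entries()].map(([key, weight]) => {
    const [source, target] = key.split('|');
    return { source, target, weight };
  });
}

// Hypothetical example: placenames extracted from three chapters.
console.log(
  coOccurrenceEdges([
    ['Athens', 'Sardis', 'Susa'],
    ['Athens', 'Sparta'],
    ['Sardis', 'Susa', 'Athens'],
  ])
);
```

An edge list like this could then be loaded into a network tool such as Gephi (mentioned in the tools list below) for layout and visualisation.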

Pelagios

Data shared between project partners enables the application of a range of API tools.  This allows for the exploration of relations between places through data; and conversely between data through places (e.g. heat mapping).

Data Visualization Tools

There’s a nice list of data visualization tools available on netmagazine.  I think it would be really interesting to experiment with a few of these in the context of the redevelopment of the OLnet Evidence Hub in the OER Hub project.

  • Modest Maps could be used instead of Google Maps to provide geolocation for organisations, input from TrackOER, etc.
  • Polymaps looks like it could do the same job but maybe while looking a bit nicer…
  • OpenLayers may have the best functionality while being a bit trickier to use
  • Google Chart API would allow us to create a range of dynamic visualizations
  • Raphaël is a JavaScript library which supports the creation of custom charts
  • D3 offers similar functionality for a wider range of outputs: Voronoi diagrams, tree maps, circular clusters and word clouds (see the sketch after this list)
  • Crossfilter and Tangle allow you to create interactive visualizations and documents which can be manipulated in real time… I think these could be a really useful way to present information
  • Processing also offers interactivity, though it’s not clear to me how easy it would be to integrate these kinds of elements
  • Apparently R is good, but with a steep learning curve
  • Gephi is a good choice for a graph-based visualiser and data explorer
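
As a taste of what working with one of these libraries looks like, here is a minimal D3-flavoured bar chart sketch. The data and the #chart element are made up, and the exact calls vary between D3 versions, so treat this as indicative rather than canonical.

```typescript
import * as d3 from 'd3';

// Hypothetical data: a handful of values to turn into horizontal bars.
const values = [4, 8, 15, 16, 23, 42];

// Bind the data to <div> elements inside #chart and size each bar by its value.
d3.select('#chart')
  .selectAll('div')
  .data(values)
  .enter()
  .append('div')
  .style('width', (d) => `${d * 10}px`)
  .style('background', 'steelblue')
  .style('margin', '2px')
  .style('color', 'white')
  .text((d) => d);
```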

There’s a Tumblr page at http://vizualize.tumblr.com/ which looks worth checking out in more detail.

The Teacher’s Apple

Tess Pajaron sent me the following infographic which attempts to substantiate the claim that Apple is winning the battle for tech presence in the classroom and for use by educators.  You can see the full page at http://newsroom.opencolleges.edu.au/features/the-teachers-apple/.

[Infographic: The Teacher’s Apple]

Scoop.It Activity!

My Scoop.It activity has gone through the roof in the last 48 hours. I would normally expect maybe half a dozen rescoops in a week and I’ve had forty in the last two days. Here are just the email notifications from this morning…

[Screenshot: Scoop.It notifications]

The posts which seem to be getting the most attention are ‘50 top sources of free elearning courses’ and ‘Transformative or just flashy educational tools?’ Between them, they perhaps sum up the interest and ambivalence that surrounds online education…

FRRIICT: Oxford Workshop

Framework for Responsible Research & Innovation in ICT (FRRIICT) is an ESRC research project led by Marina Jirotka from the University of Oxford and Bernd Stahl from De Montfort University.  Last month I attended their inaugural workshop, entitled “Identifying and addressing ethical issues in technology-related social research”.

The overall aim of the project is to:

  • develop an in-depth understanding of ICT researchers’ ethical issues and dilemmas in conducting ICT research;
  • provide a set of recommendations and good practice to be adopted by EPSRC and the community;
  • create a self-sustaining ‘ICT Observatory’ serving as a community portal and providing access to all outputs of the project.

The workshop took place in Oxford and was attended by about thirty people, mainly technology researchers or social scientists who intend to use ICT to collect research data, as well as a couple of lawyers.

The weather in Oxford was glorious but the sessions were lively enough to ensure that people didn’t get too bored by sitting inside.  Most of the two days were given over to discussion of case studies and general discussion.    The two (fabricated) case studies I worked on with groups were:

1.) Digital Sensory Room for Hospices: a therapeutic, calming sensory environment incorporating music, light, colour, smell and touch and digital communication tools which may be particularly useful for patients who have difficulty with self-expression

2.) Smartnews Inc: a smartphone news app which personalises a feed based on crowdsourcing data from relevant Twitter communities

I won’t reproduce the deliberations here, but the feeling in the group I was in was that the first of these had very little research validity (unsupported assumptions); a number of methodological problems (how to measure quality of life?); and came across as a desperate attempt by an HCI researcher to find a problem to which technology could be the solution.  The second case provokes questions about how data is shared through a service like Twitter and what kind of notion of consent might be in operation with respect to the use and storage of personal data.

The basic approach that FRRIICT seems to be following at the moment is roughly as follows:

  1. Begin with a stakeholder analysis which identifies those who might be affected by a particular intervention
  2. Sketch out the relevant rights, responsibilities and issues of that stakeholder
  3. Work out how these issues might be addressed in the context of the project
  4. Deduce whether a protocol can be derived and applied in other cases
  5. Share

Getting people with a science or technology background to think ‘ethically’ can be quite challenging.  (I tried to sketch out a tool for doing this in my paper on ethics and mobile learning.)  Researchers typically think of ethics in terms of compliance: as long as the research ethics committee approves a project, that’s good enough for them.  For many of them, this is their only formal encounter with ethics.  But contemporary researchers working with ICT need a better awareness of how technology works, and should think about the wider social impact of technology.  Nonetheless, from a researcher’s point of view, being able to justifiably describe the consent of stakeholders as ‘informed’ is still perhaps the most important part of ‘being ethical’.  The problem, it seems to me, is that reflecting on ethics is one thing, but as soon as you want to discuss or collectively analyse these issues you need at least a minimal grasp of concepts and vocabulary from moral philosophy.  Arguably, everyone has an implicit sense of notions like duty, consequence and the development of moral excellence.  But moral philosophy offers ways to bring these things out and make them explicit without reducing them to pseudo-scientific decision-making tools.  Ethics is not structured like a science (or a stakeholder analysis).  Hopefully FRRIICT will help us to work out the most effective forms of ethical reflection in these research contexts.

Here’s some copy from the call for papers from the next workshop (to be held in September):

As technology progressively pervades all aspects of our lives, HCI researchers are engaging with increasingly sensitive contexts. Areas under scrutiny include the provision of appropriate technology access for those approaching the end of life, the design of a social network site for parents of babies in a Neonatal Intensive Care Unit, and the design of interactive memorials in post-genocide Rwanda. The ethical and methodological considerations generated by research in sensitive contexts can go well beyond those addressed by standard ethical approval processes in Computing Science departments and research groups. Such processes need time to catch up with the innovative areas which HCI research is engaging with.

The aim of this workshop is to bring together researchers and practitioners with a common interest in conducting HCI research in sensitive contexts. Examples of ‘sensitive contexts’ include working with potentially vulnerable individuals such as children, adults with disabilities and cloistered nuns, and working in communities affected by a traumatic event. By sharing their experiences and reflections, participants in the workshop will generate a collective understanding of the ethical issues surrounding HCI research in sensitive contexts. We hope that participants will subsequently use this understanding to inform the design of ethical review processes in their own research groups, and incorporate awareness of ethical considerations into research design.

67 interviews (grounded research) were carried out with EPSRC management and researchers, NGOs and professional organisations in a preparatory phase of the research.  The researchers found the following:

  • There is a perception that ethics is not strictly speaking a part of ICT research
  • 2/3 of respondents believed that technology is value-neutral.  The other third believed that ‘social value’ plays a part in technology research
  • Most ICT researchers only think about ethics in terms of securing private data and acquiring informed consent for experiments involving human subjects
  • Many researchers feel that their responsibility is to come up with reliable results

Insights thus far:

  • We must recalibrate ‘long term’ and ‘generic’ research: debunk the idea that basic research takes place outside of society
  • Reposition foresight methodologies and make them more approachable
  • We need to refine definitions of ICT as well as acknowledging and meeting skepticism
  • Use cases and misuse cases are typically deterministic and contrived
  • We need to develop new scenarios within ICT research which are more relevant to emerging contexts: social media, geo-tagging, ‘big data’, etc.
  • How can systems be designed in such a way that they demonstrate appreciation of ethical issues?