
Guerrilla Research #elesig

[Image: Afrikaner Commandos, via Wikimedia Commons]

We don't need no stinking permissions....

Today I'm in the research laboratories in the Jennie Lee Building at The Institute of Educational Technology (aka work) for the ELESIG Guerrilla Research Event.  Martin Weller began the session with an outline of the work that goes into preparing unsuccessful research proposals.  Using figures from the UK research councils he estimates that the ESRC alone attracts unfunded bids equivalent to 65 years of work every year (2,000 failed bids × 12 days per bid ≈ 24,000 days).  This work is not made public in any way and can be considered lost.

He then went on to discuss some different digital scholarship initiatives – a meta educational technology journal based on aggregating open articles; MOOC research by Katy Jordan; an app built at the OU; DS106 Digital Storytelling – each of which has elements of what is being termed 'guerrilla research'.  These elements include:

  • No permissions (open access, open licensing, open data)
  • Quick set up
  • No business case required
  • Allows for interdisciplinarity unconstrained by tradition
  • Using free tools
  • Building open scholarship identity
  • Kickstarter / enterprise funding

Such initiatives can lead to more traditional forms of funding and publication, and the two certainly co-exist.  But these kinds of activities are not always institutionally recognised, which gives rise to a number of issues:

  • Intellectual property – will someone steal my work?
  • Can I get institutional recognition?
  • Do I need technical skills?
  • What is the right balance between traditional and digital scholarship?
  • Ethical concerns about the use of open data – can consent be assumed?  Even when dealing with personal or intimate information?

Tony Hirst then took the floor to speak about his understanding of ‘guerrilla research’.  He divided his talk into the means, opportunity and motive for this kind of work.

First he spoke about the use of the CommentPress WordPress theme to disaggregate the Digital Britain report so that people could comment on it online.  The idea came out of a tweet, but within three months the work was being funded by the Cabinet Office.

In 2009 Tony produced a map of MPs' expense claims which was used by The Guardian.  It was put together quickly using open technologies and led to further maps and other ways of exploring data stories.  In a similar spirit, Google Ngrams has been used to check for anachronistic use of language in Downton Abbey.
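As a sketch of how that kind of anachronism check can be automated: the Ngram Viewer can be queried via the JSON endpoint the viewer itself calls. This endpoint is unofficial and undocumented, so the URL and parameters below are assumptions that may change without notice.

```python
import json
import urllib.parse
import urllib.request

# Fetch relative yearly frequencies for a phrase from the (unofficial)
# JSON endpoint behind the Google Ngram Viewer - e.g. to check whether
# a phrase was actually in use in the 1910s-20s.
params = urllib.parse.urlencode({
    "content": "contact me",   # candidate anachronism
    "year_start": 1900,
    "year_end": 1930,
    "corpus": "en-2019",       # assumed corpus identifier
    "smoothing": 0,
})
url = "https://books.google.com/ngrams/json?" + params
with urllib.request.urlopen(url) as resp:
    series = json.load(resp)

for s in series:
    print(s["ngram"], s["timeseries"][:5])  # frequencies for 1900-1904
```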

Besides pulling together recipes from open tools and open data, another approach is to use innovative coding schemes.  Mat Morrison (@mediaczar) used this to produce an accession plot of the London riots, and Tony has since reused the approach – so a further way into 'guerrilla research' is to re-appropriate existing tools.
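To make the technique concrete, here is a minimal reconstruction of an accession plot with made-up data (my own sketch, not Morrison's code): each point is one message, and the y-axis records the order in which its author first appeared, so steep vertical climbs show new participants flooding into a conversation.

```python
import matplotlib.pyplot as plt

# Hypothetical stream of (timestamp, author) pairs, e.g. from a hashtag archive.
messages = [
    ("21:00", "alice"), ("21:01", "bob"), ("21:02", "alice"),
    ("21:05", "carol"), ("21:06", "bob"), ("21:09", "dave"),
    ("21:10", "erin"), ("21:11", "alice"), ("21:12", "frank"),
]

accession = {}  # author -> order in which they first appeared
xs, ys = [], []
for i, (ts, author) in enumerate(messages):
    accession.setdefault(author, len(accession))
    xs.append(i)                  # position in the stream (a proxy for time)
    ys.append(accession[author])  # the author's accession number

plt.scatter(xs, ys, s=15)
plt.xlabel("Message sequence")
plt.ylabel("Author accession number")
plt.title("Accession plot (sketch)")
plt.show()
```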

Another approach is to use data to drive a macroscopic understanding of patterns, producing maps or other visualizations from very large data sets to support sensemaking and interpretation.  One important consideration here is 'glanceability': whether the information has been filtered and presented so that the most important data are highlighted and the visual representation conveys meaning successfully to the viewer.

Data.gov.uk is a good source of data: the UK government publishes large amounts of information under open licences.  Access to data sets like these can save a lot of research money, and combining different data sets can produce unexpected results.  Publishing data sets openly supports this method and also allows others to look for patterns the original researchers might have missed.
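A sketch of what combining data sets can look like in practice, using pandas; the URLs and column names here are invented for illustration, not real data.gov.uk resources:

```python
import pandas as pd

# Two hypothetical openly licensed CSVs - URLs and columns are made up.
schools = pd.read_csv("https://example.gov.uk/school_attainment.csv")
deprivation = pd.read_csv("https://example.gov.uk/deprivation_index.csv")

# Joining on a shared key lets patterns emerge that neither
# data set shows on its own.
merged = schools.merge(deprivation, on="local_authority")
print(merged.groupby("deprivation_decile")["attainment_score"].mean())
```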

Google supports custom searches which can be restricted to results from a specific domain (or domains), allowing more targeted searches for data.  Freedom of Information requests can also be a good source: publicly funded bodies like universities, hospitals and local government all make data available this way (though there are exceptions), and FOI requests can be made through whatdotheyknow.com.  Google Spreadsheets offers quick tools for exploring the results, such as sliding filters and graphs.

OpenRefine is another tool which Tony has found useful.  It can cluster free-text responses in data sets algorithmically, reducing the need for manual coding, and it can also reconcile values against linked data on the web.
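OpenRefine's default clustering method is 'key collision' with a fingerprint function: values that reduce to the same normalised key are grouped together. A minimal sketch of the same idea (the survey responses are made up):

```python
import re
from collections import defaultdict

def fingerprint(value: str) -> str:
    # Lowercase, strip punctuation, then sort and de-duplicate tokens,
    # in the spirit of OpenRefine's fingerprint keying function.
    tokens = re.sub(r"[^\w\s]", "", value.lower()).split()
    return " ".join(sorted(set(tokens)))

responses = ["Open University", "open university.", "University, Open", "OU"]
clusters = defaultdict(list)
for r in responses:
    clusters[fingerprint(r)].append(r)

for key, members in clusters.items():
    print(f"{key!r}: {members}")
# 'open university': ['Open University', 'open university.', 'University, Open']
# 'ou': ['OU']
```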

Tony concluded his presentation with a comparison of 'guerrilla research' and 'recreational research'.  Research can be more creative and playful, and approaching it this way can lead to experimental and exploratory forms of enquiry.  However, assessing the impact of this kind of work can be problematic, and going through the process of seeking funding for it can impede the playfulness of the endeavour.

A workflow for getting started with this kind of thing:

  • Download openly available data: use open data, hashtags, domain searches, RSS
  • DBpedia can be used to extract information from Wikipedia (see the SPARQL sketch after this list)
  • Clean data using OpenRefine
  • Upload to Google Fusion Tables
  • From here data can be mapped, filtered and graphed
  • Use Gephi for data visualization and creating interactive widgets
  • StackOverflow can help with coding/programming
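For the DBpedia step, here is a sketch of querying the public SPARQL endpoint with nothing but the standard library. The class and property names are assumptions: DBpedia's coverage varies from entry to entry, so real queries usually need some trial and error.

```python
import json
import urllib.parse
import urllib.request

# List some UK universities from DBpedia (structured data extracted
# from Wikipedia). dbo:/dbr:/rdfs: prefixes are predefined by the endpoint.
query = """
SELECT ?uni ?name WHERE {
  ?uni a dbo:University ;
       dbo:country dbr:United_Kingdom ;
       rdfs:label ?name .
  FILTER (lang(?name) = "en")
} LIMIT 10
"""
url = "https://dbpedia.org/sparql?" + urllib.parse.urlencode(
    {"query": query, "format": "application/sparql-results+json"}
)
with urllib.request.urlopen(url) as resp:
    results = json.load(resp)

for row in results["results"]["bindings"]:
    print(row["name"]["value"])
```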

(I have a fuller list of data visualization tools on the Resources page of OER Impact Map.)

Data Maps: The Next Level

An interesting post by Young Hahn on map hacking has given me some food for thought with respect to the redesign of the OER Evidence Hub.  This article led me to another, Take Control of Your Maps by Paul Smith.

If I were a better programmer I could probably put some of the ideas to work right away; as I'm not, I'll have to be content with drawing out some general principles instead:

  • The core web map UI paradigm is a continuous, pannable, zoomable surface
  • Open mapping tools are better because they allow for more creative use of data through hacks
  • Maps can be more than just complements to narrative:  they can provide narrative structure too
  • Pins highlight locations but also anonymise them: use illustrations instead, like in the Sherlock Holmes demo
  • Google Maps is good at what it does, but restrictive in some design respects and any manipulation is likely to play havoc with the APIs
  • “Users interact with your mapping application primarily through a JavaScript or Flash library that listens to user events, requests tiles from the map server, assembles tiles in the viewport, and draws additional elements on the map, such as popups, markers, and vector shapes.”
  • If you can style the geospatial elements then the user experience can be customised to a much greater degree
  • KML is an XML-based format for geographic data (GeoJSON is a newer alternative)
  • Most mapping services make use of some JavaScript

It's worth considering making use of the tools provided by OGR to translate and filter geodata.  There are also a number of open mapping libraries worth checking out; Leaflet is one example.
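Leaflet can also be driven from Python via folium. A minimal sketch (locations hard-coded for illustration) that produces exactly the kind of pannable, zoomable, tile-based UI described above, with a marker and popup layered on top:

```python
import folium

# A tiled slippy map centred on London, saved as a standalone HTML page.
m = folium.Map(location=[51.5074, -0.1278], zoom_start=12)

# Markers and popups are the "additional elements" drawn over the tiles.
folium.Marker(
    [51.5007, -0.1246],
    popup="Palace of Westminster",
    icon=folium.Icon(color="red"),
).add_to(m)

m.save("map.html")  # open in a browser to pan and zoom
```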

Tools for Data Visualization

I’ve been looking into different online visualization tools with a view to using the data collected by the OLnet project.  (I’m also building up an inspiration gallery on Pinterest.)  Here’s a provisional list with a few comments of my own.  I’m not really going to look at word clouds (e.g. Wordle) as they’re a bit simple for my needs.  I’d be interested in further suggestions or thoughts on these tools!

easelly

This service has a number of templates available, all of which seem open to a considerable degree of customization (including uploading your own graphical elements).  A drag-and-drop canvas makes the process relatively painless, but it's worth noting that you need a pretty good idea of what you want to say at the outset, as there's no analysis function here as far as I can tell.  Outputs are exported as image files.

infogram

Once you're logged in, Infogram presents you with a range of poster template designs.  After choosing a template, you can edit things like titles, charts and text, but the basic elements seem to be fixed.  Four types of chart are available: bar, pie, line, and matrix.  You can generate these from spreadsheet data that you upload to the site, and finished charts can be embedded in various ways.

visual.ly

Visual.ly supports the creation of infographics from Twitter hashtags or Twitter accounts.  (I tried to generate one describing activity on the #oer hashtag, but it wouldn't render.)  It also seems to be a place where graphic designers share their work; I couldn't see any obvious way to create some of the items in the 'popular' gallery with the tools available to my account.  There are new tools in development, however, so perhaps I just don't have access to them yet.  Which is a shame, because there are some cool visualizations on here.


Google Fusion Tables

Again, one can upload data and have it turned into a visual form.  This service seems to support geomapping through integration with Google Maps.  It also seems to be set up for collaborative working (like other parts of the Google family) and allows you to merge datasets, which could be interesting.

Gephi

Gephi is open source software that you install on your own machine.  It looks like it lets you deal with quite complex, interlinked data sets by manipulating different representations to bring out different aspects.  Judging by the examples provided it's mainly configured for network analysis, and it seems to be able to harvest data from social networks, which could make for some interesting mashups.
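One way to get your own data into Gephi is to build the graph programmatically and export GEXF, the XML format Gephi opens natively. A sketch with made-up follower data, using networkx:

```python
import networkx as nx

# Hypothetical "who follows whom" edges harvested from a social network.
edges = [("alice", "bob"), ("bob", "carol"), ("carol", "alice"), ("dave", "alice")]
G = nx.DiGraph(edges)

# Attach a simple attribute; Gephi can size or colour nodes by it.
nx.set_node_attributes(G, dict(G.degree()), "degree")

nx.write_gexf(G, "network.gexf")  # then File > Open in Gephi
```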

manyeyes

Many Eyes is an IBM research tool which allows you to upload spreadsheet data.  Most people on the site seem to be using it to create word clouds or simple charts, and I found a few OER-related examples.  It looks like the data has to conform to fairly strict protocols before visualizations will make sense.

There’s also a long list of tools like these at computerworld.com, but most of them look like they’re a bit techie for me.