Here are my slides for today’s presentation at the ALT-C conference at The University of Warwick.
An interesting post by Young Hahn on map hacking has given me some food for thought with respect to the redesign of the OER Evidence Hub. This article led me to another, Take Control of Your Maps by Paul Smith.
If I were a better programmer I could probably put some of these ideas to work right away, but as I am not, I’ll have to be content with drawing out some general principles instead:
- The core web map UI paradigm is a continuous, pannable, zoomable surface
- Open mapping tools are better because they allow for more creative use of data through hacks
- Maps can be more than just complements to narrative: they can provide narrative structure too
- Pins highlight locations but also anonymise them: use illustrations instead, as in the Sherlock Holmes demo
- Google Maps is good at what it does, but restrictive in some design respects and any manipulation is likely to play havoc with the APIs
- If you can style the geospatial elements then the user experience can be customised to a much greater degree
- KML is an XML-based format for geographic data, so it can be processed with standard XML tools
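Since KML is just an XML dialect, even the standard library can pull placemark coordinates out of a document. A minimal sketch (the sample placemark and its coordinates are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Invented minimal KML document with a single placemark.
KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Baker Street</name>
    <Point><coordinates>-0.1586,51.5237,0</coordinates></Point>
  </Placemark>
</kml>"""

NS = {"kml": "http://www.opengis.net/kml/2.2"}

def placemarks(kml_text):
    """Yield (name, lon, lat) tuples from a KML string."""
    root = ET.fromstring(kml_text)
    for pm in root.iter("{http://www.opengis.net/kml/2.2}Placemark"):
        name = pm.find("kml:name", NS).text
        coords = pm.find(".//kml:coordinates", NS).text.strip()
        lon, lat = map(float, coords.split(",")[:2])
        yield name, lon, lat

print(list(placemarks(KML)))
```

Real KML files are messier than this, of course, which is where dedicated tooling earns its keep.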
It’s worth exploring the tools provided by OGR for translating and filtering data. And here are some mapping libraries to check out:
Here are my notes from today’s OU seminar which offers a survey of major online humanities datasets and some of the tools available for visualising them.
- Dr Elton Barker (Classical Studies Department, The Open University, and Alexander von Humboldt Foundation, Freie Universität Berlin)
- Ms Mia Ridge (PhD student, The Open University, and Chair of the Museum Computer Group UK)
Examples will be drawn from externally-funded projects such as Hestia, Google Ancient Places and Pelagios.
Amplification is important for data visualization: it should enhance our understanding of the information presented. John Snow’s map of the 1854 cholera outbreak showed that cases clustered around a single water pump in Broad Street. Similarly, Florence Nightingale produced a diagram showing causes of mortality in the Crimean War (1858) and Charles Minard (1869) produced a figurative map of French losses during the Russian campaign. In each of these cases, a lot of information is packed into an accessible visual representation.
Harry Beck’s original map of the London Underground (1931) moves on from simple geographical representation and uses a circuit-diagram-inspired approach with clean horizontal and vertical lines, stripping out extraneous information. It is an eminently usable representation of the Tube system.
Visualization is typically defined by the type of qualitative or quantitative data that is available. Visual representations can be static or interactive, but both should help people to find the most important information or insights. It’s worth thinking about how the different variables will be selected and shown.
Types of visualizations (ManyEyes)
- Bubble Chart
- Pie Chart
- Line/stack graph
- Word tree
- Phrase net
- Matrix chart
Another (new) form is sentiment analysis. Data visualisation can help to check data by highlighting outliers or unexpected information visually. In the humanities, there are a number of textual analysis tools available, including entity recognition and n-grams.
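As a toy illustration of the n-gram idea, counting word bigrams in a text needs only the standard library (the sample sentence is made up):

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of n-grams (tuples of n consecutive tokens)."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

# Invented sample text; in practice this would be a whole corpus.
text = "the cause of the outbreak was the pump"
bigrams = ngrams(text.split(), 2)
print(bigrams.most_common(3))
```

The same function gives trigrams and beyond by changing `n`, which is roughly what tools like Google’s Ngram Viewer do at corpus scale.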
There are also many flawed visualizations, both in terms of accuracy and depth. They can also be responsible for obscuring problems with datasets, their consistency and how they were collected. They may also over-simplify complex information, or force categories on data in order to render them capable of visualization. Imposing the binary logic of computers onto research data can also lead to nonsensical results. It’s also worth remembering the old adage that correlation does not imply causation.
Best practice in data visualization:
- How effectively does the visualization support cognition and/or communication?
- Spatial arrangements should make sense of variables
- Be aware of the audience
- Are you telling a story or letting people explore?
- 80% of data visualization work is cleaning datasets; one needs to think about how to organise the data.
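The cleaning point is easy to underestimate. Even a toy dataset of place records (the rows below are invented) needs normalising before anything can be plotted:

```python
def clean(records):
    """Normalise names and drop rows without usable coordinates."""
    out = []
    for rec in records:
        name = rec.get("name", "").strip().title()
        lat, lon = rec.get("lat"), rec.get("lon")
        if not name or lat is None or lon is None:
            continue  # unusable row: nothing to plot
        out.append({"name": name, "lat": float(lat), "lon": float(lon)})
    return out

# Invented messy input: stray whitespace, missing coordinates, missing name.
raw = [
    {"name": "  athens ", "lat": "37.98", "lon": "23.73"},
    {"name": "Sparta", "lat": None, "lon": None},
    {"name": "", "lat": "35.34", "lon": "25.14"},
]
print(clean(raw))
```

Deciding what to do with the dropped rows (fix, flag, or discard) is exactly the kind of organisational question that eats the 80%.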
This project explores the cultural geography of the ancient historian Herodotus, extracting placename data for visualization: places, settlements and natural features. This leads to some complications (e.g. a single dot for a sea) and computers generally don’t deal with uncertainty or ‘fuzzy’ information all that well. Network maps were used to show the connections between territories, showing, for example, that Greece is not the centre of the narrative, even though Herodotus was Greek.
Data shared between project partners enables the application of a range of API tools. This allows for the exploration of relations between places through data; and conversely between data through places (e.g. heat mapping).
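A network map of the kind Hestia uses boils down to counting which places are mentioned together. A rough sketch over invented data (the passages below are stand-ins for sections of Herodotus’ text):

```python
from collections import Counter
from itertools import combinations

# Invented toy data: places co-mentioned in passages of a text.
passages = [
    ["Athens", "Sparta"],
    ["Athens", "Egypt", "Sparta"],
    ["Egypt", "Scythia"],
]

# Weight each edge by the number of passages in which the pair co-occurs.
edges = Counter()
for places in passages:
    for a, b in combinations(sorted(set(places)), 2):
        edges[(a, b)] += 1

print(edges.most_common(2))
```

Feeding the resulting weighted edge list into a tool like Gephi is what reveals where the narrative’s real centre of gravity lies.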
There’s a nice list of data visualization tools available on netmagazine. I think it would be really interesting to experiment with a few of these in the context of the redevelopment of the OLnet Evidence Hub in the OER Hub project.
- Modest Maps could be used instead of Google Maps to provide geolocation for organisations, input from TrackOER, etc.
- Polymaps looks like it could do the same job but maybe while looking a bit nicer…
- OpenLayers may have the best functionality while being a bit trickier to use
- Google Chart API would allow us to create a range of dynamic visualizations
- D3 offers similar functionality for a wider range of outputs: Voronoi diagrams, tree maps, circular clusters and word clouds
- Crossfilter and Tangle allow you to create interactive visualizations and documents which can be manipulated in real time… I think these could be a really useful way to present information
- Processing also offers interactivity, though it’s not clear to me how easy it would be to integrate these kinds of elements
- Apparently R is good, but with a steep learning curve
- Gephi is a good choice for a graph-based visualiser and data explorer
There’s a Tumblr page at http://vizualize.tumblr.com/ which looks worth checking out in more detail.
Here are the slides from my presentation at the Third Visual Learning Conference.
Tess Pajaron sent me the following infographic which attempts to substantiate the claim that Apple is winning the battle for tech presence in the classroom and for use by educators. You can see the full page at http://newsroom.opencolleges.edu.au/features/the-teachers-apple/.
Partly stemming from my recent exploration of OER visualization, I’ve started to take more of an interest in ways that philosophical ideas have been expressed visually. Now, obviously many artworks are themselves the expression of philosophical or religious sentiment, but that’s not my focus here (valuable as they are). Rather, it’s the attempt to faithfully represent the structure of an argument, a text, or a genealogy in a way that is compelling and clear. I have a sense that when done well these could be very effective pedagogical aids, and I’m currently looking at ways in which software originally developed for those with special educational needs might be adapted for general use in teaching philosophy (especially over distance). You can have a look at my growing collection of philosophical visualizations on Pinterest. (Thanks to all those who responded to my request on Philos-L.)
I’ve been looking into different online visualization tools with a view to using the data collected by the OLnet project. (I’m also building up an inspiration gallery on Pinterest.) Here’s a provisional list with a few comments of my own. I’m not really going to look at word clouds (e.g. Wordle) as they’re a bit simple for my needs. I’d be interested in further suggestions or thoughts on these tools!
This service has a number of templates available, all of which seem to be open to a considerable degree of customization (including uploading your own graphical elements). A drag and drop canvas makes the process seem relatively painless, but it’s worth noting that you need to have a pretty good idea of what you want to say at the outset as there’s no analysis function on here as far as I can tell. Outputs are exported as image files.
Once you’re logged in, Infogram presents you with a range of poster template designs. After choosing a template, you can edit things like titles, charts and text but the basic elements seem to be permanent. Four types of chart are available: bar, pie, line, and matrix. You can generate these from spreadsheet information that you upload to the site. Once completed, these charts can be embedded in various ways.
Visual.ly supports the creation of infographics from Twitter hashtags or Twitter accounts. (I tried to generate one which described activity on the #oer hashtag but it wouldn’t seem to render.) It also seems to be a place where graphic designers share their work; I couldn’t see any obvious way to create some of the items in the ‘popular’ gallery using the tools available to me with my account. There are new tools in development, however, so perhaps I just don’t have access to these at the moment. Which is a shame, because there are some cool visualizations on here.
Again, one can upload data and have it turned into a visual form. This service seems to support geomapping through integration with Google Maps. It also seems to be set up to support collaborative working (like other parts of the Google family) and allows you to merge datasets, which could be interesting.
Gephi is open source software that you install on your machine. It looks like it lets you deal with quite complex data sets which are linked in various ways by manipulating different representations to bring out different aspects. It’s mainly configured for network analysis according to the examples provided, and it seems to be able to harvest data from social networks, which could make for some interesting mashups.
ManyEyes is an IBM research tool which allows you to upload spreadsheet information. Most people on the site seem to be using it to create word clouds or simple charts, and I found a few OER-related examples. It looks like the data has to conform to fairly strict protocols before visualizations will make sense.
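In my experience that strictness mostly means one observation per row. Pivoting a ‘wide’ spreadsheet into that long form is straightforward; a sketch with an invented two-column-per-year table:

```python
import csv
import io

# Invented wide-format spreadsheet: one column per year.
wide = "project,2010,2011\nHestia,3,5\nPelagios,2,4\n"

rows = list(csv.DictReader(io.StringIO(wide)))

# Reshape to long form: one (project, year, count) observation per row.
long_form = [
    {"project": r["project"], "year": year, "count": int(r[year])}
    for r in rows
    for year in ("2010", "2011")
]
print(long_form)
```

Most upload-and-visualize tools, not just ManyEyes, behave much better once data is in this shape.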
There’s also a long list of tools like these at computerworld.com, but most of them look like they’re a bit techie for me.