Can you hear the sirens?

Presenting ambulance response patterns during a coronavirus surge using multimedia

1. Gathering data

I wanted to use the most granular version of data available to analyse time-series trends in ambulance incidents, so I searched for records of every single ambulance call in London during the past year. Though the UK’s National Health Service has comprehensive information on ambulance quality indicators, it only releases weekly statistics, so I had to look for a different country’s data.

2. Visualising the data

In the first plot below, two lines show the progression of total daily ambulance dispatches from mid-February to the end of April for both 2019 and 2020. Even without the legend, it may be obvious which year each line represents.
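
For readers who want to see the plotting code, here's a simplified sketch of a grouped line chart like the one described. The counts below are synthetic placeholders rather than my real dispatch data, and the column names are my own shorthand.

library(data.table)
library(ggplot2)
library(ggthemes)

# Synthetic stand-in for the real aggregate: total dispatches per day over
# the same mid-February-to-April window, one series per year. The numbers
# are made up purely to show the plotting code.
set.seed(1)
days <- seq(as.Date("2020-02-15"), as.Date("2020-04-30"), by = "day")
daily_counts <- rbindlist(list(
  data.table(day = days, year = "2019",
             dispatches = rpois(length(days), lambda = 400)),
  data.table(day = days, year = "2020",
             dispatches = rpois(length(days), lambda = 400) +
               round(250 * plogis(as.numeric(days - as.Date("2020-03-20")) / 5)))
))

# Two lines, one per year, on a shared date axis
ggplot(daily_counts, aes(x = day, y = dispatches, colour = year)) +
  geom_line(linewidth = 1) +
  labs(x = NULL, y = "Total daily dispatches", colour = "Year",
       title = "Daily ambulance dispatches, mid-February to end of April") +
  theme_fivethirtyeight()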

3. Can I display this data differently?

Finally, having presented and interpreted Queens’ ambulance data, let’s return to my original question of interest to discuss more unconventional ways of informing an audience about New York City’s crisis.

Accessibility

The very nature of data visualisation can exclude people with visual impairments, from those who are legally blind to those who are colourblind. If done well, data sonification can bring a line or area chart to life through purely auditory means, allowing users with low vision to explore time-series visualisations just like anyone else. For example, here's a chilling sonification project by a Queen's University Belfast professor describing the early stages of the COVID-19 pandemic:

The replication of human experiences

Again, only if done well, sonified data can add humanity to large datasets. This being my first time with this style of presentation, I used only basic methods to tell a story about ambulance incidents. Please let me know whether I’ve been successful!

  • First, I set the frequency limits (Hz) of the sonification to 300 (min) and 1550 (max) to approximate how emergency sirens typically sound.
  • Then, I placed background white noise behind the main sonification, both to differentiate median response times below eight minutes (the aforementioned common standard for first responders) and to emulate the sounds of walking along a busy street. Luckily for this piece, the white noise happens to cut off exactly one day after New York's stay-at-home order, meaning that someone going outside for groceries may have heard exactly what I've tried to depict: a quiet neighbourhood punctuated only by blaring sirens.
  • Finally, I chose to indicate greater median response times with higher frequencies because of the urgency that these frequencies convey (see the code sketch after this list).
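
To make these choices concrete, here's a simplified sketch of the kind of sonify() call I mean. The response-time values below are placeholders rather than my actual data, and the call is only meant to illustrate the frequency range and noise settings described above, not to reproduce my script exactly.

library(sonify)

# Placeholder daily median response times in minutes (not the real data):
# quiet days under eight minutes, then a surge above it
median_response <- c(6.2, 6.5, 6.1, 6.8, 7.4, 8.9, 10.3, 11.8, 12.5, 11.1)

sonify(
  x = seq_along(median_response),   # one point per day, in order
  y = median_response,              # higher medians -> higher pitch
  duration = 10,                    # length of the audio clip in seconds
  flim = c(300, 1550),              # frequency range in Hz, as described above
  noise_interval = c(0, 8),         # white noise while the median is under 8 minutes
  noise_amp = 0.3                   # keep the noise in the background
)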

Appendix: Notes on code and methodology

  • Here are the R packages I used to clean my data and produce the outputs shown in this article:
library(data.table) # to separate a date and time object into two columns for date and time
library(ggplot2)    # to plot grouped line charts
library(gganimate)  # to animate grouped line charts
library(ggthemes)   # to make plots "look nice"
library(sonify)     # to sonify data
  • I used Canva to produce the YouTube video linked above, and Mac’s SketchBook app to produce the first ambulance sketch.
  • For ambulance response times, I chose the measure of median instead of mean because the former is less influenced by outliers (these were plentiful in the data — some ambulances took 20 minutes to arrive).
  • I excluded incidents with missing incident response time values, reducing my sample size slightly (both of these steps are sketched in code after this list).
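
To show what those last two steps looked like in practice, here's a stripped-down sketch using data.table. The table below is randomly generated and the column names are illustrative, not those of the original dataset.

library(data.table)

# Illustrative incident-level table: one row per incident with a dispatch
# timestamp and a response time in minutes, a few of which are missing
set.seed(1)
incidents <- data.table(
  incident_datetime = as.POSIXct("2020-03-01 00:00:00", tz = "UTC") +
    runif(1000, 0, 60 * 60 * 24 * 30),
  response_time_min = c(rexp(990, rate = 1 / 7), rep(NA_real_, 10))
)

# Split the timestamp into separate date and time columns
incidents[, `:=`(date = as.IDate(incident_datetime),
                 time = as.ITime(incident_datetime))]

# Drop incidents with missing response times, then take the daily median
# (the median is far less sensitive to the long right tail than the mean)
daily_medians <- incidents[!is.na(response_time_min),
                           .(median_response = median(response_time_min)),
                           by = date]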
