The FIFA Women’s World Cup is underway in France. There is an official website (link) to support the Tournament.
I have started to build a repository on GitHub (link) for the data generated by FIFA. I am using some very basic R code in RStudio to record the games played (link).
For my first look at the data, I have monitored: ball in play (in minutes); total game time (in minutes); and weather data. I am treating FIFA as the authoritative source of these data.
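A minimal sketch of the kind of per-game record this involves, written here in Python for illustration (the field names and values are hypothetical, not FIFA’s actual schema, and my own records live in R):

```python
import csv
import io

# Hypothetical per-game record: these column names and values are
# illustrative only, not FIFA's published data schema.
FIELDS = ["match", "ball_in_play_min", "total_game_min",
          "temperature_c", "humidity_pct"]

games = [
    {"match": "FRA v KOR", "ball_in_play_min": 55, "total_game_min": 96,
     "temperature_c": 22, "humidity_pct": 60},
    {"match": "GER v CHN", "ball_in_play_min": 51, "total_game_min": 94,
     "temperature_c": 18, "humidity_pct": 72},
]

# Write the records to CSV text (in practice this would go to a file
# that the GitHub repository or a Google Sheet could hold).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(games)
print(buf.getvalue())
```

A flat table like this keeps each game as one row, which makes the later summaries and visualisations straightforward.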
My three visualisations are:
Ball in Play by Country of Origin of Referee
Impact of Humidity on Game Time
Impact of Temperature on Game Time
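The third of these can be sketched as a simple scatter plot; the data points below are invented for illustration (the real figures come from FIFA’s match reports):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# Invented illustrative values, not actual tournament data.
temperature_c = [14, 18, 20, 22, 25, 27, 29]
game_time_min = [95, 94, 96, 97, 93, 98, 99]

fig, ax = plt.subplots()
ax.scatter(temperature_c, game_time_min)
ax.set_xlabel("Temperature (°C)")
ax.set_ylabel("Total game time (min)")
ax.set_title("Impact of Temperature on Game Time")
fig.savefig("temp_vs_game_time.png")
```

The same pattern, swapping the x-axis variable, covers the humidity and referee-origin views.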
I am hopeful that I will find lots of ways to explore the FIFA data. At the moment, I am particularly interested in game time played in minutes (a median of 53 minutes after seven games). My Google Sheet (link) shares data from the Tournament and follows on from a format I used in 2015 (link).
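That median is straightforward to compute; here is a sketch with hypothetical minutes for seven games (illustrative values only, chosen so the middle value matches the 53-minute figure quoted above, not the actual FIFA data):

```python
from statistics import median

# Hypothetical minutes played for seven games (illustrative only).
minutes = [48, 50, 52, 53, 55, 57, 60]

# With seven values, the median is the fourth value once sorted.
print(median(minutes))  # → 53
```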
I have spent some time discussing learning in recent weeks.
In one of my conversations, I spoke with my daughter, Beth, about her interest in appropriate technology.
I was really impressed with her sense of appropriate technology as small-scale technology. My subsequent reading led me to understand that appropriate technology “is simple enough that people can manage it directly and on a local level. Appropriate technology makes use of skills and technology that are available in a local community to supply basic human needs” (link). I think directness and localness are keys for me in thinking about transformational change.
This does involve profound listening. It requires a sensitivity to relationships of power. It also requires us to recognise, as Susanto Basu and David Weil (1996) pointed out, that technology diffuses slowly (link), a point discussed more recently by Deborah Healey (2018) (link).
I believe these conversations will lead me to more reflections about the relationships between appropriate technologies and the pursuit and recognition of microlearning (link).
A few weeks ago, a friend was asked to present about datafication in performance analysis. This set me off thinking about the processes I had heard about and seen.
I started off revisiting Viktor Mayer-Schönberger and Kenneth Cukier’s 2013 book Big Data: A Revolution That Will Transform How We Live, Work, and Think (link). In it they discussed at length Matthew Maury’s career in the U.S. Navy’s Depot of Charts and Instruments. In their words, “He saw patterns everywhere”. They added (2013:75) “He had a number of ‘computers’ – the job title of those who calculated data. He aggregated data. He looked for patterns and more efficient routes and sea-lanes.”
I liked their consideration of data in the light of Matthew’s journey all those years ago. They noted:
He was among the first to realise that “there is a special value in a huge corpus of data that is lacking in smaller amounts – a core tenet of big data”.
That it was done with pencil and paper is astounding, and highlights “the degree to which the use of data predates digitization”.
Data refers to “a description of something that allows it to be recorded, analyzed, and reorganized”.
To datafy a phenomenon is “to put it in a quantified format so that it can be tabulated and analyzed”.
We laid the building blocks for datafication many centuries before the dawn of the digital age.
I thought this account resonated powerfully with Simon Eaves’ accounts (2015, link; 2017a, link; 2017b, link) of Henry Chadwick (link) and baseball. Both are stories of datafication pioneers. Simon notes that, perhaps as early as 1858, Henry tried to record and analyse play as “a first step towards a sport performance analysis to assess relative merits”.
I do think reading these authors about Matthew and Henry together gives a real feel for what was occurring in the nineteenth century in the United States of America … at the dawn of what has been a remarkable process.
I hope to write more about this process and trace the background to datafication across the centuries.