Making Sense of Data

Two blog posts by Jason Lear and Darrell Cobner sent me off thinking yesterday.
Jason asked: Performance Analysis, is it drowning in raw useless data? Jason believes “significant issues exist with information management in sport, even to go as far as to suggest the world of elite sport is starting to go off course in so much as the management of performance data may not be appreciated in the context of establishing a target audience.”
In a thoughtful and thought-provoking post Jason observes:

those that develop a balanced information management system that identifies the value of specific performance data and filter such data to the correct targeted audience will be the ones that gain the most competitive advantage from performance analysis as it continues to evolve.

I liked Jason’s focus on ‘balance’ and ‘filter’, particularly in the context of grass roots sport.
Darrell responded with his post Is Performance Analysis drowning in raw useless data? In it he advocates a clear strategy for data management, balance and extraction of value.
Darrell and Jason’s posts sent me off to think about how such a strategy might be developed in advance of the pursuit of pervasive data. Fortunately, two serendipitous opportunities yesterday gave me a focus.
The first was David Frame and Dáithí Stone’s paper in Nature Climate Change. In it they assess the first consensus statement on climate change:

In 1990, climate scientists from around the world wrote the First Assessment Report of the Intergovernmental Panel on Climate Change. It contained a prediction of the global mean temperature trend over the 1990–2030 period that, halfway through that period, seems accurate. This is all the more remarkable in hindsight, considering that a number of important external forcings were not included. So how did this success arise? In the end, the greenhouse-gas-induced warming is largely overwhelming the other forcings, which are only of secondary importance on the 20-year timescale.

The second was a BBC program shown on Australian television, Fake or Fortune? In it Fiona Bruce, Philip Mould and Bendor Grosvenor examined the authenticity of three ‘discredited’ Turner paintings. I was very impressed by Philip Mould’s desire to clarify the provenance of the paintings, and I thought Bendor Grosvenor’s forensic insights were exemplary. The program also drew on the expertise of two curators to provide detailed analysis of the composition of the discredited paintings. Fiona Bruce was the presenter of the story behind the story.
The upshot? Jason, Darrell, David, Dáithí, Bendor, Philip and Fiona together offer us an excellent insight into how to collect and analyse data.
I am thinking that performance analysts have a great deal to learn about how to share an everyday story and how to break news of exceptional events. The assessment of the 1990 consensus statement predictions underscores how time spent at the start of a project, developing a strategy for data creation, curation and discovery, yields significant returns on that investment.
Photo Credit
Frame grab from Fake or Fortune?

2 COMMENTS

  1. Hi Keith, I like where you have taken this. The share factor is what we as an industry/sector have to embrace more willingly. We should not fear others knowing, but challenge ourselves to keep improving. I like to think of myself as someone willing to share, but I still find it challenging on occasions to share some resources or discuss some of the darker arts of my own practice. I perceive this to be my competitive advantage. Having said that, my strongest client relationships have been built on open, transparent sharing of practice.
