I have been thinking about patterns of performance in team games.
One of the catalysts for this has been my participation in a Rugby Coaches’ Conference last week on the Ile de Bendor. My focus there was on winning cultures and winning coaches.
In my preparation for the conference, I looked carefully at data prepared by James Grayson. I was fascinated by his account (2014) of Chelsea’s performance from 2000 to the present day. This led me to data James shared in 2012 about Manchester United.
An academic paper and a blog post have helped me think more about patterns and their visualisation.
In the paper, Bruno Travassos and his colleagues (2013) explore how advances in ecological dynamics can transform the analysis of performance. This exploration includes these points:
- Ecological dynamics conceives sport performance as a continuous process of co-adaptation between players in space and time to identify the most functional possibilities for action. (p. 85)
- in order to understand the performance of a team or player there is a need to investigate how players and teams manage the relations with teammates and opponents in space and time during emergence of patterns of play at different levels (p. 85).
Bruno and his colleagues propose:
an alternative approach of performance analysis in team sports should provide not only an interpretation of ongoing interactions between players and teams, under an ecological dynamics standpoint, but also a description of game performance using notational and time-motion methods. It should enable analyses of competitive performance, not only in terms of what each team and players do (i.e., in a discrete way), but also how and why each team and players interacts to achieve performance goals (i.e., in a functional perspective). (p. 86)
the link between specific patterns of play with outcomes of performance needs to be made clearer in order to produce more functional information for managers and coaches about how players and teams successfully manage interpersonal spatial-temporal relations with teammates and opponents. (p. 90)
Many of my 1:1 conversations at the Conference were about sharing functional information. My thoughts about this information have been focussed on a team’s ranking in the year preceding current activity. I see performance in colour.
Here are three examples of winning performances in rugby union and one from rugby league. These visualisations are my Genomes of performance. They are my macro records that help me look closely at the phenomenography of performance. A change in colour (particularly red and gold) encourages me to refine my focus on player, unit and team.
Toulon and Saracens won their respective Leagues in the regular 2014 season. Both had successful 2013 seasons. The profile for the Chiefs is for the regular Super 15 season in 2013 (after a successful 2012).
The Roosters’ profile is very interesting. A new head coach transformed the team into the Minor Premiership title winners. It was a gold season.
In my framework, as title holders in 2014 the Roosters will be a green (beating lower-ranked teams from 2013) and red (losing to lower-ranked teams from 2013) team. This is how they are placed after eleven rounds of the competition, before the start of the State of Origin Series:
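The colour coding can be sketched in a few lines. Only green and red are defined explicitly above, so this minimal sketch (mine, a guess at the rules rather than a definitive statement of them) classifies just those two cases and leaves everything else unclassified:

```python
# A sketch of the colour-coding idea: classify each result by the
# opponent's ranking from the previous season. Only green and red are
# named explicitly in the post; other cases are left unclassified here.

def result_colour(won, own_rank_last_year, opp_rank_last_year):
    """Rank 1 is the highest. Returns a colour for the two cases the
    post names, None otherwise."""
    opponent_lower_ranked = opp_rank_last_year > own_rank_last_year
    if won and opponent_lower_ranked:
        return "green"   # beat a team ranked below you last season
    if (not won) and opponent_lower_ranked:
        return "red"     # lost to a team ranked below you last season
    return None          # results against higher-ranked opponents: not specified

# Example: last season's champions (rank 1) against a rank-5 opponent
print(result_colour(True, 1, 5))   # green
print(result_colour(False, 1, 5))  # red
```

For a title holder every opponent is lower ranked, which is why a champion's season reads entirely in greens and reds.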
As ranking is my macro indicator, I was delighted to find Ed Feng’s post (2014a) on insights into predicting the outcome of football games.
Ed discusses the approach used by Jan Lasek, Zoltán Szlávik, and Sandjai Bhulai. Their paper was published in 2013, although a 2009 version is also available (both for the International Journal of Applied Pattern Recognition). Ed collaborated with Jan to predict the outcomes of 979 games. Ed used his Power Rank (2014b) to identify the probable outcomes of these games, based on all international football matches played from 14 July 2002 to 15 May 2014, and expressed the accuracy of the predictions as mean squared error.
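Mean squared error is a simple idea when applied to match forecasts: compare the predicted win probability with what actually happened. A minimal sketch (the probabilities and results below are invented for illustration, not Ed's data):

```python
# Scoring probabilistic match forecasts with mean squared error
# (the Brier score for binary outcomes). Lower is better.

def mean_squared_error(probs, outcomes):
    """probs: predicted probability that a given team wins;
    outcomes: 1 if that team won, 0 otherwise."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

predicted = [0.70, 0.55, 0.20, 0.90]   # hypothetical win probabilities
observed  = [1,    0,    0,    1]      # hypothetical results

print(round(mean_squared_error(predicted, observed), 4))  # 0.1106
```

A forecaster who always says 0.5 scores 0.25 on this measure, so anything below that reflects genuine predictive information.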
Jan and his colleagues discuss a number of measures in their paper including:
- FIFA’s international rankings.
- FIFA’s Women’s Rankings.
- Margin of victory.
- Wisdom of crowds.
- More games or recent games?
Ed notes in his wisdom of crowds discussions that:
The best method for predicting football matches was the Ensemble, which combined the predictions of the FIFA women’s rankings, EloRatings.net, The Power Rank and Least Squares. The improvement from aggregation was significant. The ensemble of 4 rankings had an error 4.3% lower than the average error of the 4 systems.
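The mechanics of the Ensemble are straightforward to sketch: average the win probabilities from the individual systems, match by match, before scoring them. The four system names below mirror those in the quote; the probabilities are invented for illustration:

```python
# A sketch of the ensemble idea: element-wise averaging of the win
# probabilities produced by several rating systems.

def ensemble(prob_lists):
    """Element-wise mean of several lists of win probabilities."""
    return [sum(ps) / len(ps) for ps in zip(*prob_lists)]

systems = {
    "FIFA women's rankings": [0.60, 0.30],
    "EloRatings.net":        [0.70, 0.20],
    "The Power Rank":        [0.65, 0.25],
    "Least Squares":         [0.55, 0.35],
}
combined = ensemble(list(systems.values()))
print([round(p, 3) for p in combined])  # [0.625, 0.275]
```

Averaging tends to cancel the idiosyncratic errors of the individual systems, which is the intuition behind the 4.3% improvement Ed reports.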
Bruno and Ed raise some fundamental issues about what kind of information might be shared with coaches and players … and how this might be shared.
The challenge in sharing winning probabilities is underscored in a paper by David Dormagen (2014). David presents a very clear account of a simulation model developed to predict the outcome of the 2014 FIFA World Cup.
David’s approach allows for the “integration of rating systems and rules where either no clear formula for a probability other than a win or loss exists or where the historical data is not enough to derive such a formula”. In addition “We are also able to combine the results from different rating methods with user-given weights without influencing other calculations, such as the calculation of the draw-probability, the adjustment of the win expectancy for home teams, or the calculation of the expected goals”.
After 100,000 iterations of his simulator, David identified the following outcome (the percentage is the proportion of iterations in which a team finished at a given rank in the tournament).
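The shape of such a simulator can be sketched briefly. This is not David's model: the teams, ratings, Elo-style win probabilities and the tiny four-team bracket are all my illustrative assumptions, and it ignores draws, home advantage and expected goals, which his simulator handles. It shows only the core Monte Carlo loop of playing the bracket many times and tallying how often each team wins:

```python
# A Monte Carlo sketch: play out a 4-team knockout bracket many times
# using assumed win probabilities and count how often each team wins.

import random
from collections import Counter

RATINGS = {"A": 1900, "B": 1800, "C": 1750, "D": 1700}  # hypothetical

def p_win(team, opponent):
    """Elo-style win probability for `team` against `opponent`."""
    diff = RATINGS[team] - RATINGS[opponent]
    return 1 / (1 + 10 ** (-diff / 400))

def play(team_a, team_b, rng):
    return team_a if rng.random() < p_win(team_a, team_b) else team_b

def simulate(iterations=100_000, seed=42):
    rng = random.Random(seed)
    champions = Counter()
    for _ in range(iterations):
        finalist_1 = play("A", "D", rng)   # semi-final 1
        finalist_2 = play("B", "C", rng)   # semi-final 2
        champions[play(finalist_1, finalist_2, rng)] += 1
    return {t: n / iterations for t, n in champions.items()}

for team, share in sorted(simulate().items(), key=lambda kv: -kv[1]):
    print(f"{team}: {share:.1%}")
```

The percentages David reports are exactly this kind of tally, taken over 100,000 runs of a far richer model.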
David’s simulation took me back to my analysis of the 64 Games in the 2010 World Cup. In these games:
- 145 goals were scored. 101 goals in the Group Stage. 44 goals in the Knockout Stages.
- 59 goals were scored in the first halves of games, 84 in the second halves and 2 in extra time.
- A goal was scored in 57 of the 64 games (the other seven were 0-0 draws). In 54 of these 57 games, the team that scored first did not lose.
- Nigeria, Cameroon and Brazil were the only teams to score first and lose.
- In 51 out of 64 games the higher FIFA ranked team (May 2010) won.
- The higher ranked teams that lost were: Greece (v Korea), Serbia (v Ghana), Cameroon (v Japan), Spain (v Switzerland), France (v Mexico), Germany (v Serbia), Cameroon (v Denmark), France (v South Africa), Serbia (v Australia), Italy (v Slovakia), Denmark (v Japan), USA (v Ghana), Brazil (v Netherlands).
- There were two penalty shoot-outs. In both, the higher-ranked team won, the team that took the first penalty won, and the team that scored the first penalty won.
- 24 referees officiated at the World Cup.
- No yellow or red cards were given in the Germany v Spain semi-final.
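The figures in the list above hang together arithmetically, which is worth checking when working from live analysis. A quick check (every number is taken from the list; nothing is invented):

```python
# Consistency checks on the 2010 World Cup figures listed above.

total_goals = 145
assert 101 + 44 == total_goals       # group stage + knockout stages
assert 59 + 84 + 2 == total_goals    # first halves + second halves + extra time

games = 64
goalless_draws = games - 57          # games with no first scorer
assert goalless_draws == 7
assert 54 + 3 == 57                  # first scorer did not lose + the three losses

higher_ranked_wins = 51
print(f"Higher-ranked team won {higher_ranked_wins / games:.1%} of games")
```

That 79.7% figure for the higher-ranked team is the headline number for anyone assessing the FIFA rankings as a predictor.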
David’s simulator has some very clear rules to guide the prospective simulation of the Tournament. My live analysis of the 2010 World Cup sought to make sense of emergent behaviour. As a result of my analysis I think I have some functional information to share with coaches to address issues raised by Bruno and his colleagues.
I am interested in how robust the experience of 2010 might be, independent of all the variables that might seem important, and despite the fragility of the FIFA ranking system.
I see this as an opportunity to blend observations and analysis of performance with the ecological dynamics discussed by Bruno and his colleagues. Data offer a sense of invariant (stable) behaviour and an early alert to variation in behaviour (for better or worse). An agile approach to performance analysis in the context of ecological dynamics makes for a fascinating way of optimising performance.
I do like the metaphor of performance as an ecological system as well as the intellectual rigour of a dynamical approach to performance. I have written a number of posts in Clyde Street about ecology.
Opisthokonta is the group of organisms that includes both animals and fungi, but not plants. It is one of those things that seems really strange for the layman, but makes perfect sense for the biologist.
My sense from working with coaches and athletes is that awareness of ecological variation in performance has profound outcomes. Openness to different performance ecosystems creates enormous opportunities for performance analysts to develop shared stories that become stronger as they refine the what and how of observation and analysis with real, rather than imagined, people.