Like most people, I receive pull newsletters in my inbox. I find these profoundly helpful. They often contain news of analytics that becomes the subject of tweets (link) or the stimulus for a blog post on Clyde Street. I have also been using Mastodon occasionally (link).
John Danaher has written about algocracy (link). His paper, prepared as a chapter for the Oxford Handbook on the Philosophy of Technology, considers how algorithmic governance can be “emancipatory and enslaving”. John points out that “advances in computing technology have created a technological infrastructure that permeates, shapes and mediates our everyday lives”. He explores “the unavoidable and seemingly ubiquitous use of computer-coded algorithms to understand and control the world in which we live”.
Laurence Goasduff looks at augmented analytics (link). He notes “augmented analytics uses machine learning and artificial intelligence techniques to automatically identify actionable insights”. Laurence’s post uses data from a Gartner Data and Analytics Summit poll on the potential transformative effect of augmented analytics (link). Gartner’s glossary shares this view of augmented analytics. It is:
“the use of enabling technologies such as machine learning and artificial intelligence to assist with data preparation, insight generation and insight explanation to augment how people explore and analyze data in analytics and business intelligence platforms. It also augments the expert and citizen data scientists by automating many aspects of data science, machine learning, and AI model development, management and deployment” (link).
The Seattle Stats Guy has shared on Medium his account of predictive modelling in R (link). The post links to a 2018 post (link) about the use of ARIMA and ETS models in R. There is also a link to another 2018 post about a vocabulary for predictive modelling (link). There are, the Seattle Stats Guy suggests, four important words, among them:
- Stationarity (a time series that has a consistent mean, variance, and covariance).
- Autocorrelation (the correlation an observation has between itself and another observation in the time series).
- Differencing (can help stabilize the mean and remove stochastic trends).
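Two of these vocabulary terms lend themselves to a small worked example. The sketch below is a minimal pure-Python illustration of autocorrelation and differencing on a toy trending series; the series and function names are my own illustrative assumptions, not code from the Seattle Stats Guy's posts (which use R).

```python
def difference(series, lag=1):
    """First-order differencing: subtract each value from the one lag steps later."""
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

def autocorrelation(series, lag=1):
    """Lag-k autocorrelation: correlation of a series with itself shifted by lag."""
    n = len(series)
    mean = sum(series) / n
    variance = sum((x - mean) ** 2 for x in series)
    covariance = sum((series[i] - mean) * (series[i + lag] - mean)
                     for i in range(n - lag))
    return covariance / variance

# A toy series with a steady upward trend, so its mean is not stationary.
trend = [2 * t + 1 for t in range(20)]

print(autocorrelation(trend))  # close to 1: the trend induces strong autocorrelation
print(difference(trend))       # differencing removes the trend, leaving constant 2s
```

Differencing the trending series yields a constant sequence with a stable mean, which is exactly why differencing is used to move a series towards stationarity before fitting models such as ARIMA.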
Davide Bacciu and his colleagues have shared their tutorial on Deep Learning for Graphs (link). The tutorial seeks to introduce “the basic building blocks that can be combined to design novel and effective neural models for graphs”. I am mindful that I need to look more carefully at graphs as I contemplate the relationship between informatics and analytics in sport. Davide and his colleagues’ tutorial is most helpful in this regard.
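One of the building blocks such tutorials typically start from is neighbourhood aggregation (message passing), where each node updates its features from those of its neighbours. The sketch below is my own minimal pure-Python illustration of a single such step; the graph, features, and function names are illustrative assumptions, not code from Bacciu and colleagues’ tutorial.

```python
def message_passing_step(adjacency, features):
    """Update each node's feature by averaging it with its neighbours' features.

    adjacency: list where adjacency[i] is the list of neighbours of node i.
    features:  list of one scalar feature per node.
    """
    updated = []
    for node, neighbours in enumerate(adjacency):
        neighbourhood = [features[node]] + [features[n] for n in neighbours]
        updated.append(sum(neighbourhood) / len(neighbourhood))
    return updated

# A small undirected path graph: node 0 -- node 1 -- node 2.
adjacency = [[1], [0, 2], [1]]
features = [1.0, 0.0, 1.0]

print(message_passing_step(adjacency, features))
```

Stacking several such steps lets information flow across the graph, which is the intuition behind deeper graph neural networks; real models replace the plain average with learned weight matrices and nonlinearities.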
The volume of push material available is remarkable. Trying to follow links in newsletters is fascinating but requires a great deal of discipline. The examples shared here are meant to look at how we practise analytics and how we position ourselves ethically in the algorithm debate. Andrew Manley and Shaun Williams, amongst others, have looked at data and organisational surveillance (link). As John Danaher indicates, we need to think carefully about our use of machine learning and artificial intelligence. We need to consider our relationship to augmented intelligence as we adapt to the volumes of data available to us.