My son, Sam, has just written a post about systems and networks (link). I found the post really interesting in a paternal sense and an epistemological sense.
The paternal part of me is delighted to read a blog post by Sam and to learn about his observations and reflections as a member of the #INF537 (link) cohort in the online Master of Education (Knowledge Networks and Digital Innovation) at Charles Sturt University.
The epistemological delight lies in my commitment to the self-organising networks hinted at in Sam’s post. I have written a lot about networks (link) and have been thinking about these issues a great deal since the distributed, open course CCK08 (link) and becoming an accidental connectivist (link).
I am keen to persuade Sam, privately and publicly, to explore self-organising networks (link) and to read more about Stephen Downes’ (link) and Alan Levine’s (link) work. I appreciate Sam’s particular working environment constraints (systemic) but am determined to explore the action possibilities he can address as a community driver, facilitating network flourishing within those constraints (link).
I sense that with energy anything is possible even in constrained contexts.
Earlier this week, Avinash Kaushik wrote about Responses to Negative Data (link). Shortly after his post was published, I found a link to a Turing Institute blog post, written by Franz Kiraly, What is a data scientific report? (link).
Both posts have helped me to think about the why, what and how of sharing observations, analyses and insights.
Franz, the author of the Turing blog post, suggests that a stylised data report is characterised by:
Topic. Addresses a domain question or domain challenge in an application domain specific to a data set.
Aim. Data-driven answers to some domain question.
Audience. Decision-makers or domain experts interested in ‘evidence’ to inform decision-making.
Franz suggests five principles that inform good reporting:
Correctness and veracity
Clarity in writing
Reproducibility and transparency
Method and process
Application and context
Whilst I take issue with some points in Avinash’s and Franz’s posts, I do think they both raise fundamental questions for us as we contemplate sharing our data-informed stories. I am particularly interested in how the curiosity and openness Avinash describes meet Franz’s five principles.
As I was concluding this post, up popped a link to Samuel Flender’s post How to be less wrong (link). This will be an excellent companion to the two posts discussed here. It also gives me an opportunity to extend my interest in Bayesian perspectives.
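The Bayesian perspective behind posts like Flender’s can be sketched in a few lines. This is an illustrative example only; the numbers are invented and the function is my own, not drawn from any of the posts discussed here:

```python
# Illustrative Bayesian update: revising belief in a hypothesis H
# after observing evidence E. All numbers are made up for illustration.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) from P(H), P(E | H) and P(E | not H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Start fairly sceptical of a claim...
prior = 0.2
# ...then observe evidence three times as likely if the claim is true.
posterior = bayes_update(prior, p_e_given_h=0.6, p_e_given_not_h=0.2)
print(round(posterior, 2))  # belief rises from 0.2 to 0.43
```

The point, as in "How to be less wrong", is that beliefs move in proportion to how diagnostic the evidence is, not in jumps to certainty.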
Researchers have some important decisions to make about the ways they share their discoveries.
Back in 2017, I was struck by Przemysław Biecek and Marcin Kosiński’s discussion of the use of the R package archivist (link). They discussed the opportunities we have to enable auditable and replicable analysis. Two years earlier, Data Carpentry facilitated a Reproducible Research in R workshop (link).
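The core idea behind archivist is that each analysis artefact is stored under a content hash, so a reader can verify and retrieve exactly the object that was reported. A minimal Python analogue of that idea (my own sketch, not archivist’s API) might look like this:

```python
# Sketch of auditable analysis artefacts: store each result under a
# content hash so the same result always yields the same identifier.
# Illustrative only; archivist itself is an R package with its own API.
import hashlib
import json

def artefact_hash(obj):
    """Content hash of a JSON-serialisable analysis result."""
    payload = json.dumps(obj, sort_keys=True).encode("utf-8")
    return hashlib.md5(payload).hexdigest()

result = {"model": "linear", "r_squared": 0.87}
print(artefact_hash(result))  # identical results hash identically
```

Because the serialisation sorts keys, the hash depends only on the content of the result, which is what makes an audit trail possible.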
This week, two finds have sent me off thinking about the explicit sharing of research journeys and discoveries.
The first find was Stencila, an open-source project that aims to make reproducible research more accessible (link). I noted that “Stencila provides a set of open-source software components enabling researchers to enable reproducible research … using interactive source code”.
I found Stencila through a link to Giuliano Maciocci, Michael Aufreiter and Nokome Bentley’s (2019) paper Introducing eLife’s first computationally reproducible article (link). This exemplifies the potential of a Reproducible Document Stack approach to open sharing. Researchers can use their existing word processing and spreadsheet tools and can embed R and Python code blocks that can generate live interactive plots using the Plotly.js library. Stencila uses the Mini formula language (link).
A second find, thanks to an alert from Stephen Downes, was Alice Meadows, Laurel Haak and Josh Brown’s (2019) discussion of persistent identifiers (link). They note that persistent identifiers “for people (researchers), places (their organizations) and things (their research outputs and other contributions) are foundational elements in the overall research information infrastructure”.
Supporting research includes supporting the research information infrastructure: the tools and services that researchers use which enable them to spend more time doing research and less time managing it – as well as the virtual building blocks on which those tools and services depend, such as metadata, standards and, the topic of this article, persistent identifiers (PIDs).
Meadows, Alice, Laurel L. Haak, and Josh Brown. 2019. “Persistent Identifiers: The Building Blocks of the Research Information Infrastructure”. Insights 32 (1): 9. DOI: http://doi.org/10.1629/uksg.457
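What makes an identifier like a DOI persistent is the stable resolver in front of it: the citation above can always be reached through doi.org even if the publisher’s site changes. A trivial illustration (the helper function is my own, not part of any DOI tooling):

```python
# A DOI is resolved through a stable resolver service, decoupling the
# identifier from wherever the content currently lives.
DOI_RESOLVER = "https://doi.org/"

def doi_url(doi):
    """Build the resolvable URL for a DOI such as 10.1629/uksg.457."""
    return DOI_RESOLVER + doi

print(doi_url("10.1629/uksg.457"))
# -> https://doi.org/10.1629/uksg.457
```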
I have mentioned before that one of my founding ideas for the International Journal of Performance Analysis in Sport was to enable papers in any language (each with an English abstract or summary) that shared video and data resources openly. At that time, the platforms available did not permit open sharing.
This week has brought back those memories of a global community sharing research journeys. It must be profoundly exciting entering the research community now or transforming existing practices as we become much more transparent about these journeys.