News

Maintenance and repair as innovation? Software systems and the case of climate models


The following text was written by Matthias Heymann, Professor of Science Studies at Aarhus University, and reflects his presentation at our Closing Conference, “Repairing Technology – Fixing Society?”, held on 13-14 October 2022 in Luxembourg.


Interest in maintenance and repair is burgeoning. Many accounts conceive of maintenance as the long-neglected opposite of innovation, dedicated to keeping old technologies working rather than encouraging novelty and progress. Andrew Russell and Lee Vinsel, for example, attempt to overcome the obsession with innovation and direct attention to the normalcy “after innovation,” to maintenance and repair, which accomplish “neither difference, nor novelty, nor revolution, but rather the distinctive forms of work that go into keeping things the same” (2018, 7). They deliberately construct what they call “a false dichotomy,” presenting innovation and maintenance as opposing technological practices. With this contribution, in contrast, I wish to explore links and intersections between innovation and maintenance/repair.

An obvious case is software development and engineering, as Nathan Ensmenger has pointed out. According to him, “maintenance represents the single most time consuming and expensive phase” in software development (2016, 2). In addition, Ensmenger suggests that software maintenance has accounted for between 50 and 70 percent of total expenditure on software development. With my contribution, I will investigate maintenance and repair for a very idiosyncratic type of software system: climate models. What does innovation in climate modeling mean, and how is it linked to maintenance and repair? Given the high cost of maintenance, how does it play out in a field like climate modeling? Maintenance efforts and costs remain largely hidden behind the tidy façade of the important and highly complex practice of climate modeling, simulation and projection.

I aim to develop the following arguments: First, climate model development involves iterative cycles of maintenance and repair that are crucial for innovation. More pointedly, stages of innovation produce changes and disarrangements in model systems, which initially reduce rather than improve model performance and require time-consuming and costly efforts of maintenance and repair. Second, as with other technologies, the common and abundant work of maintenance and repair is largely hidden or even kept secret for fear of compromising scientific authority. Maintenance and repair in climate model development is commonly referred to as model tuning in the modeling community, a seemingly suspicious practice that rests less on objective procedures and protocols than on the experience, tacit knowledge and decisions of the responsible scientists. Only very recently have climate modelers started to aim for more transparency about these practices.

Climate models and their purposes, uses and contexts differ significantly from commercial software systems. One major difference is that, unlike commercial software, weather and climate models cannot be subjected to software “verification.” “Verification” refers to testing a software system to guarantee that it correctly executes the prescribed tasks (even though verification has its limits as well, especially for very complex software systems). This testing procedure reveals potential bugs and issues with the software, which determine subsequent repair and maintenance work until appropriate execution is “verified.” Such “verification” presupposes that software processes are mechanistically determined.
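To make the contrast concrete, here is a minimal illustration of my own (not drawn from the text; the routine and its name are hypothetical): for conventional software, the prescribed task can be specified exactly, so a short test can “verify” that the code executes it correctly.

    # Hypothetical sketch: "verification" of a conventional routine whose
    # prescribed task is fully specified in advance.
    def monthly_mean(values):
        """Return the arithmetic mean of a list of daily values."""
        return sum(values) / len(values)

    # A verification-style test: the expected output is known exactly,
    # so a failure points directly to a bug that needs repair.
    assert monthly_mean([1.0, 2.0, 3.0]) == 2.0

No such exact specification of the correct output exists for the simulated climate itself, which is why climate models are instead judged by validation, as described next.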

Weather and climate models are (partly) validated if they reproduce the empirical state of the target system more or less accurately. If validation tests produce insufficient results, “repair activities” take place, as with commercial software. Scientists usually call these repair activities tuning or calibration. However, it is not strictly possible to demarcate this type of “repair” from model development. Interestingly, these “repair” efforts become particularly cumbersome after submodel improvements (e.g. based on better physical understanding) are introduced into a model. Running the expanded (improved) model usually worsens model performance at first, which is followed by a very work- and time-intensive phase of model “repair” to push performance up again, a process that often takes six months.
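What such tuning or calibration can look like in principle may be conveyed by a toy sketch of my own (not a description of any actual climate model; every parameter and value here is hypothetical): a zero-dimensional energy-balance model with one poorly constrained parameter that is adjusted iteratively until the simulated global mean temperature matches the observed value.

    # Toy sketch, for illustration only: "tuning" one poorly constrained
    # parameter of a zero-dimensional energy-balance model so that the
    # simulated global mean surface temperature matches observations.
    OBSERVED_T = 288.0   # observed global mean surface temperature (K)
    S0 = 1361.0          # solar constant (W/m^2)
    SIGMA = 5.67e-8      # Stefan-Boltzmann constant (W/m^2/K^4)

    def simulate(albedo, greenhouse_factor):
        # Equilibrium temperature for a given albedo and a crude greenhouse term.
        absorbed = S0 / 4.0 * (1.0 - albedo)
        return (absorbed / (SIGMA * (1.0 - greenhouse_factor / 2.0))) ** 0.25

    def tune(albedo, lo=0.0, hi=1.0, tol=0.01):
        # Bisection on the free parameter until the model matches OBSERVED_T:
        # a crude stand-in for the iterative adjustment described above.
        for _ in range(100):
            mid = (lo + hi) / 2.0
            if simulate(albedo, mid) < OBSERVED_T:
                lo = mid   # too cold: strengthen the greenhouse term
            else:
                hi = mid   # too warm: weaken it
            if abs(simulate(albedo, mid) - OBSERVED_T) < tol:
                return mid
        return (lo + hi) / 2.0

    print(tune(albedo=0.3))   # the tuned value of the free parameter

Real tuning involves many interacting parameters, many performance metrics and a great deal of expert judgment; the sketch merely conveys the iterative match-to-observations logic, not the practice itself.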

Scientists have never liked to talk about their local development and repair practices and experiences, as they fear these practices might undermine the models’ scientific status. Terms such as “tuning” or “tweaking” have usually been kept out of scientific papers and public presentations. Alternative terms such as “development” and “innovation” simply sound more like serious science than like the messy, complex process of model engineering. The huge amount of public and political attention climate modeling receives has taught climate scientists to use language cautiously. Only recently have a small number of climate scientists openly raised the issue, organized conferences on model tuning and published a few articles on the subject.

I will make use of these recent publications, as well as conversations and interviews with climate modelers, to present some details about the practice of “repairing” climate models. My contribution will look at historical examples of the development and simulation of weather and climate models and discuss practices that intertwine development and repair and even recognize the need for repair as a resource. Because repair requirements for climate models are less clearly defined and far more open than in commercial applications, they create space for experimental tinkering and adjustment to improve performance in iterative cycles.

Finally, I wish to discuss why there appears to be a particularly close link between innovation and maintenance/repair in the development and use of software systems. While software systems have very specific characteristics compared to other technologies, I will argue that they are not fundamentally different. The major difference appears to be their complexity. The more complex technologies become, the more demanding the role of maintenance/repair and the closer the links between maintenance/repair and innovation. The unusual malleability and complexity of software products likely accelerates cycles of innovation and maintenance/repair compared to other technologies. This argument suggests that with technological change, maintenance and repair have generally not been in decline; rather, the opposite is true: with rising complexity in technology, repair becomes ever more pertinent.

 

Andrew L. Russell and Lee Vinsel, “After Innovation, Turn to Maintenance,” Technology and Culture 59:1 (2018), pp. 1-25.

Nathan Ensmenger, “When Good Software Goes Bad: The Surprising Durability of an Ephemeral Technology,” presentation at The Maintainers conference, April 9, 2016.
