11 Oct 2018

The story of SDYCORE LAB part 1

As I’ve written about before, I have had a very strong interest in writing my own simple GCM/dynamical core. After several frustrating initial attempts, partly described here, I switched strategy. Rather than attempting to reimplement someone else’s model from their description, I would start from the primitive equations and derive my own model, aiming for as much simplicity as possible in the system of equations.

By working in an isobaric (pressure) vertical coordinate, the equations become very simple, but implementing a non-flat surface becomes computationally hard, so I chose simplicity over realism here for the first try. The governing equations were very neatly described in a 1974 paper by Akira Kasahara in Monthly Weather Review, called “Various Vertical Coordinate Systems Used for Numerical Weather Prediction”.

The set of equations consists of four prognostic equations (for the two horizontal wind components, temperature and surface pressure), which are stepped forward in time, and three diagnostic equations (for the vertical pressure velocity, the pressure velocity at the surface and the geopotential height). In addition we need upper and lower boundary conditions. I’ll get into the full derivation of the system of equations in a later post, where I’ll also go into the details of the numerical scheme.
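
For orientation, here is a minimal sketch of the system in pressure coordinates as it usually appears in textbooks (heating and friction terms omitted; this is the standard form, not necessarily the exact discretized form I solve, which I’ll spell out in the derivation post):

$$
\begin{aligned}
\frac{d\mathbf{v}}{dt} &= -\nabla_p \Phi - f\,\mathbf{k}\times\mathbf{v} && \text{(horizontal momentum, prognostic)}\\
\frac{dT}{dt} &= \frac{\kappa T \omega}{p} && \text{(thermodynamic, prognostic)}\\
\frac{\partial p_s}{\partial t} &= -\nabla\cdot\int_0^{p_s} \mathbf{v}\, dp && \text{(surface pressure tendency, prognostic)}\\
\omega(p) &= -\int_0^{p} \nabla_p\cdot\mathbf{v}\, dp' && \text{(continuity, diagnostic)}\\
\frac{\partial \Phi}{\partial p} &= -\frac{RT}{p} && \text{(hydrostatic balance, diagnostic)}
\end{aligned}
$$

Here $\mathbf{v}$ is the horizontal wind, $\omega = dp/dt$ the vertical pressure velocity, $\Phi$ the geopotential and $\kappa = R/c_p$; the upper boundary condition is $\omega = 0$ at $p = 0$, and the pressure velocity at the surface follows from the material derivative of $p_s$.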

While I was at the EGU this year, I started working on deriving the discrete equations with pen and paper, and implementing them in Python. The target case was the standard dry dynamical core benchmark described in the 1994 article by Held and Suarez. The entire thing was finally quite transparent to me, and I was able to implement the basic equations in a few days. Then began the quest for numerical stability (which will be the title of a post coming soon).

This stage took me about a month, and the book “Numerical Techniques for Global Atmospheric Models” was an invaluable resource, especially Chapters 3, 4 and 13. I tried different ways of obtaining linear and non-linear stability and of suppressing computational modes, eventually achieving stability over longer simulations.
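
As an illustration of the kind of technique involved (not necessarily the exact combination I ended up with): the leapfrog scheme carries a computational mode that is commonly damped with a Robert-Asselin time filter. A minimal sketch for a single variable:

```python
import numpy as np

def leapfrog_ra_step(phi_old, phi_now, tendency, dt, nu=0.05):
    """One leapfrog step with a Robert-Asselin time filter.

    phi_old, phi_now : fields at time levels n-1 and n
    tendency         : F(phi) evaluated at time level n
    nu               : filter coefficient damping the computational mode
    Returns the filtered field at level n and the new field at level n+1.
    """
    phi_new = phi_old + 2.0 * dt * tendency
    # Robert-Asselin filter: nudge the centre time level towards the average
    # of its neighbours in time, which damps the 2*dt computational mode.
    phi_now_filtered = phi_now + nu * (phi_old - 2.0 * phi_now + phi_new)
    return phi_now_filtered, phi_new
```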

This is where the project stands at the moment. Along the way the numerical scheme has become a little more complicated than it started out as. But the 4th order dynamical core in the SDYCORE LAB repository on GitHub is basically functional and reproduces the basic structure of the Held-Suarez flow pattern. There is something weird going on in the vertical velocity towards the poles, possibly related to an error in the pole condition, which I will have to look into, and the code is not as readable as I would like. I’m going to do a major restructuring of the code to enhance both performance and reliability, and I’ll try to find the time for that relatively soon.


10 Oct 2018

The Simple Dynamical Core Lab is now in beta

The Simple Dynamical Core Lab (SDYCORE LAB) is a personal project to explore some numerical algorithms involved in primitive equations models of atmospheric dynamics in a simplified setting.

The primitive equations are a set of nonlinear partial differential equations governing the large scale flow of fluids on a rotating planet. All modern weather prediction and general circulation models have at their core a solver for the primitive equations, with or without extra approximations. This part is called the dynamical core and is the most performance-critical part of the model.

As a result, all the dynamical cores I could find are written for maximum performance, targeting modern supercomputers and clusters. In addition, the field of dynamical core implementation is more than 50 years old, meaning that the methods applied to the problem in modern cores are quite advanced, making them difficult to understand. Even the cores used in simplified models like PUMA are somewhat difficult to understand, since they use the spectral transform method and are fully parallelized.

To begin with, the main goal is to strip away parallelism, choose simple numerical methods and write the programs in an easy-to-understand way. Later, I might consider bringing parallelism back in some versions of the cores I plan to develop. Serial performance has been a consideration, though, and I have used a combination of Numba JIT and Fortran to achieve this.
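
For readers unfamiliar with Numba: it compiles plain NumPy-style Python loops just in time, so the code stays readable while running at compiled speed. A toy example of the pattern (the function and stencil here are illustrative, not lifted from the actual core):

```python
import numpy as np
from numba import njit

@njit(cache=True)
def ddx_centered(f, dx):
    """2nd-order centered x-derivative of a 2D field, periodic in x."""
    ny, nx = f.shape
    out = np.empty_like(f)
    for j in range(ny):
        for i in range(nx):
            ip1 = i + 1 if i < nx - 1 else 0      # periodic wrap-around
            im1 = i - 1 if i > 0 else nx - 1
            out[j, i] = (f[j, ip1] - f[j, im1]) / (2.0 * dx)
    return out
```

The first call triggers compilation; subsequent calls reuse the compiled machine code.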

The first project of SDYCORE LAB is the simplest primitive equations solver I could think of: using finite difference methods to solve the primitive equations with pressure as the vertical coordinate, over a flat surface, with no grid staggering in the horizontal. This core, called the isobaric dycore and found in this repository, is functioning and in an advanced state of development. I will add a version using centered 2nd order finite differences in space and time, written in Python, and a version using centered 4th order differences in space and 2nd order in time. The latter version is written in Python and Fortran, but will be rewritten to eliminate the Fortran part. Both cores use explicit leapfrog time-stepping.
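
For reference, these are the standard centered difference approximations and the leapfrog step the two versions are built on, written for a generic variable $\phi$:

$$
\left(\frac{\partial \phi}{\partial x}\right)_i \approx \frac{\phi_{i+1} - \phi_{i-1}}{2\Delta x}
\quad\text{(2nd order)},\qquad
\left(\frac{\partial \phi}{\partial x}\right)_i \approx \frac{-\phi_{i+2} + 8\phi_{i+1} - 8\phi_{i-1} + \phi_{i-2}}{12\Delta x}
\quad\text{(4th order)},
$$

$$
\phi^{n+1} = \phi^{n-1} + 2\Delta t\, F(\phi^{n}) \quad\text{(leapfrog)},
$$

where $F(\phi^{n})$ denotes the spatial tendency terms evaluated at the middle time level.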

The next step is to rewrite the code of the first dynamical core to be more readable and to handle boundary conditions in a smoother fashion.

Plans for the future are to explore the effects of grid staggering, different formulations of the primitive equations, different vertical coordinate systems and different solution algorithms. I have already decided that I want to implement a finite difference solver on a staggered grid, a spectral solver, and a finite difference solver in the vorticity-divergence formulation. Plans for the far future might involve implementing a finite volume solver or some other solver.

Important references, resources and inspirations for this project have been (in order of discovery):


01 Feb 2018

A really simple GCM part 1: A physicist meets modern Earth System Models.

Last spring, when I returned to work after my paternity leave, I felt frustrated that the GCM/ESM (CESM1) I’m using for my research is so damned complicated. It’s very hard to get any kind of feeling for how the massive model actually works. Of course I could spend forever reading the code and technical documentation, but the full complexity makes it difficult to grasp, and I would not have enough time to spend on actually doing my own research.

Don’t get me wrong, I like my model. It’s just that I’m a little bit uncomfortable with using models I don’t fully understand. I guess this is an aspect of a subtle cultural difference between meteorology and some other physical sciences [1]. Since I have about 6 years of training as a physicist, I bring that cultural baggage with me into my climate science/meteorology work, and it’s making me uncomfortable with the “black-boxiness” of modern Earth System Models. In the physics fields I trained in, you’re generally expected to build your own simulation code, and if you inherit some code you’re usually inheriting it from the person who built it and you’re expected to fully understand everything before using it. Needless to say, that’s not how it works in modern meteorology.


15 May 2017

Writing about climate risk in a tweetstorm

I’ve not commented on the controversy caused by the new New York Times columnist Bret Stephens writing about the risks that follow from our (lack of) understanding of climate change. Last week, however, I wrote two tweetstorms about how I think the uncertainties in climate science should inform our understanding of climate risk.


12 May 2017

The limitations of climate models

In February I wrote the following piece for Aftenposten Viten: Klimamodellenes begrensninger.

The background was that the FrP politician Carl I. Hagen, with support from the organization Klimarealistene, had a public outburst about climate science and global warming. Many others also took part in the debate, some with considerably more pointed contributions than mine, but I saw an opportunity to try to get some actual science communication through in the middle of it all.

It is difficult to talk about climate models and their relationship to the observed temperature evolution. It is extremely important to keep a cool head and make sure to compare like with like, or, alternatively, to know when one cannot expect agreement between models and observations. That is what I wanted to focus on in the linked text.

Not long ago, a new article appeared in Nature (Medhaug et al. 2017; Reconciling controversies about the ‘global warming hiatus’), in which they review the available data on the “warming hiatus” between 1998 and 2012 in great detail. It includes a section where they try to reconcile the modelled and observed evolution in CMIP5 and HadCRUT4, respectively. They manage this by adjusting for precisely the forcings and natural variability that I write about in the Aftenposten piece (Figure 1).

Figure 5 from Medhaug et al. (2017): Ensemble-mean global-mean surface air temperatures (GMSTs) taken from 84 simulations by 36 CMIP5 models (historical plus Representative Concentration Pathway (RCP) 8.5 after 2005) is shown (light blue line) with the 90% confidence interval (shading). The CMIP5 ensemble mean when adjusted with updated forcings (see text; intermediate blue line), and with updated forcings and corrected for Pacific variability using the analogue method described in Methods (dark blue line), are also shown. The observed surface air temperature in HadCRUT4 is shown as the light orange line, together with the 90% confidence interval (shading). The uncertainty is based on a 100-member ensemble of the HadCRUT4 dataset. The estimated true GMST based on HadCRUT4, but accounting for incomplete coverage and adjusted for blending of surface air temperature with sea surface temperature (SST) from the CMIP5 models, is shown as the dark orange line. The values are given relative to the mean of 1961–1990.

More important than getting models and observations to agree perfectly is holding on to the point that a lack of agreement is completely natural, and not an error.


25 Jan 2017

From climate forecast to carbon budgets – Unfounded probabilities at the heart of climate science.

The stated goal of world climate policy is to avoid dangerous levels of global warming. This point is often, somewhat arbitrarily, defined as 2°C above preindustrial temperatures; in the context of the Paris agreement, the ambition is 1.5°C. No matter what the limit is, we need some way of knowing whether we are going to exceed it or not. The answer in recent years has been what is called cumulative carbon emission budgets or, often, just carbon budgets.

Cumulative carbon emission budgets are one of the most important and policy relevant results that come out of attempts to quantify future climate change. A carbon budget is an attempt to give information about how much greenhouse gases human society can emit without exceeding some target temperature (e.g. 2°C) to some accepted likelihood (e.g. 33%). This is a multi-faceted problem and there is no one correct way of making a budget, nor is there a single accepted target temperature or exceedance likelihood.

The free parameters when making a budget are: which forcing agents to include in addition to CO2, which type of budget to make, and which target warming to aim for, at which time, with which probability. All of these choices significantly impact the size of the allowed budget.

To inform policy based on emission budgets, the general approach is to derive emission scenarios that are deemed societally possible and compatible with the budget. Here, another complication emerges: If the budget is exceeded to a moderate extent, the nature of the climate system’s response to CO2 makes it possible to remove excess CO2 after it was emitted and still hit the target temperatures. This makes the range of specific emission scenarios that are compatible with a given budget very large, and the choice of how to limit the scenarios considered plausible is important when giving policy advice based on emission budgets.

The freedom inherent in the process of formulating emission budgets, combined with their seeming simplicity and ease of comparison to reserves of fossil fuels, has made many groups see the potential of emission budgets as strategic communication tools, and there is potential for misuse with the aim of cynically subverting policy action. This means that when seeing emission budgets used in public discourse, a certain amount of skepticism is in order.

This is a serious problem in itself, but a more fundamental problem with the emission budget concept seems to be more-or-less unexplored: Do cumulative carbon emission budgets have a sound scientific foundation?

In this essay, I will explore an argument with the goal of showing that a specific type of carbon budget derived in the IPCC 5th assessment report (AR5) (IPCC, 2013) is scientifically unfounded, and that this argument also applies to other types of budgets. I will then briefly explore what this means for the scientific content of the carbon budgets and whether different ways of making cumulative budgets can salvage the concept.


16 Jun 2016

My encounter with Kent Andersen on Twitter

I have had a long Twitter exchange with the author and FrP politician Kent Andersen over the last few days. The topic was, of course, climate science and climate change. With the help of Storify, the exchange can be seen below. I will eventually add comments where I feel it is necessary.

I think I will mostly let the discussion stand on its own.


09 May 2016

A thought experiment concerning back radiation

A common illustration of the greenhouse effect is the idealized greenhouse model. This model captures some important features of the earth as a system in radiative balance. This figure from the Wikipedia page shows the basic idea.

Idealized greenhouse model illustration from Wikipedia

To solve the system in equilibrium, we can use the Stefan-Boltzmann law on each of the layers, modified by a non-unity emissivity for the atmosphere. Incoming solar and outgoing thermal radiation have very little overlap in their spectra, so to a good approximation we can treat them as non-overlapping.

In this equilibrium system there are two opposite fluxes of longwave radiation energy: from the earth upward, and from the atmosphere downward (the back radiation). This makes the net flux of energy away from the surface lower than it would have been at a given temperature in the absence of the atmosphere, making the surface of the earth warmer to compensate.
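
For reference, the equilibrium of the idealized model is usually written like this, with $\varepsilon$ the atmospheric emissivity, $\alpha$ the planetary albedo and $S_0$ the solar constant (this is the textbook form, nothing specific to this post):

$$
\sigma T_s^4 = \frac{(1-\alpha)S_0}{4} + \varepsilon\sigma T_a^4,
\qquad
2\,\varepsilon\sigma T_a^4 = \varepsilon\sigma T_s^4
\quad\Longrightarrow\quad
\sigma T_s^4 = \frac{(1-\alpha)S_0/4}{1-\varepsilon/2}.
$$

The first equation is the surface energy budget and the second the budget of the atmospheric layer; the term $\varepsilon\sigma T_a^4$ is the back radiation, and setting $\varepsilon = 0$ recovers the colder surface temperature of an atmosphere-free planet.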

Some context

Why is this important? In a lot of places on the internet, there are all sorts of discussions concerning the nature and existence of this back radiation, usually in the context of the second law of thermodynamics supposedly making the flux of energy from the atmosphere to the surface impossible, or forbidding it from heating the surface. See this series of posts at The Science of Doom for a thorough discussion.

Another discussion is about the physical existence of the back radiation as a separate flux of energy, or whether the presence of atmospheric emission of electromagnetic energy simply modulates the instantaneous radiation field, leading to the net flux of upward energy. I only recently came across this discussion in the comments at Terje Wahl’s blog (in Norwegian). This started me thinking, and I spent a good part of the previous weekend devising a thought experiment to try to argue for the physical existence of this energy flux.

Conventional physics, as I was taught it in non-climate-related thermodynamics, considers the back radiation to be real, but I like thinking about these kinds of things.

In the following I’ll use the conventional definitions of shortwave and longwave radiation, referring to the incoming (solar/UV) and thermally emitted radiation respectively.

The thought experiment

At the origin we place a perfectly absorbing and emitting infinite plate (i). Homogeneous monochromatic ultraviolet radiation is coming from an energy source at infinite distance from our setup, radiating towards the plate. At some finite distance, far from (i), is another plate (ii), perfectly transmitting for the incoming UV radiation and with constant absorptivity for all other wavelengths. Both plates are thin, perfectly conducting and have negligible heat capacity. Right underneath plate (i) is a layer with emissivity 0, so that no energy is lost in that direction. Between the plates, and outside the setup, is perfect vacuum.

The distance between the two plates is large. So large that the transit time for radiation between them is much larger than the time it takes each plate to equilibrate with the incoming energy.

Initially, the system is in equilibrium, with the incoming radiation balanced by thermal radiation from (i) and (ii). At some time, the absorptivity of (ii) is changed to become closer to 1. As a consequence, the plate will quickly equilibrate to a new temperature and a new thermally radiating state. This change in the state of (ii) will not be detected by (i) for some time, since the information travels at the speed of light. At a later time the information reaches (i), which then equilibrates with the new state of (ii). This then leads to a series of temporally distinct equilibrations in a feedback, until a new internal state of the system is realized in equilibrium with the incident radiation. This new internal state will contain a larger amount of energy.

An illustrative simulation of the time evolution of this system is shown in the figures below. The simulation code can be found here.
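
The linked code is the real thing; purely to fix ideas, here is a minimal sketch of the kind of delayed two-plate exchange involved (the flux value, absorptivities and delay length are illustrative):

```python
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
S = 240.0          # incoming UV flux [W m^-2], illustrative value
DELAY = 50         # radiation transit time between the plates, in time steps
NSTEPS = 600

def simulate(a_before=0.6, a_after=0.9, switch_step=200):
    """Delayed radiative exchange between plate (i) and plate (ii).

    Both plates have negligible heat capacity, so at each step they re-emit
    what they absorb; the only dynamics come from the transit delay.
    """
    up = np.zeros(NSTEPS)    # flux emitted upward by plate (i)
    down = np.zeros(NSTEPS)  # flux emitted downward by plate (ii)
    out = np.zeros(NSTEPS)   # flux leaving the system above plate (ii)
    T_i, T_ii = np.zeros(NSTEPS), np.zeros(NSTEPS)

    # start from the equilibrium of the initial absorptivity
    up_eq = S / (1.0 - a_before / 2.0)
    down_eq = a_before * up_eq / 2.0

    for t in range(NSTEPS):
        a = a_before if t < switch_step else a_after
        # radiation arriving now was emitted DELAY steps ago
        up_arriving = up[t - DELAY] if t >= DELAY else up_eq
        down_arriving = down[t - DELAY] if t >= DELAY else down_eq
        # plate (i): absorbs everything, loses nothing downward, re-emits upward
        up[t] = S + down_arriving
        # plate (ii): absorbs a fraction a of the upwelling flux,
        # emits equally from its two faces, transmits the rest
        emit_per_side = a * up_arriving / 2.0
        down[t] = emit_per_side
        out[t] = (1.0 - a) * up_arriving + emit_per_side
        T_i[t] = (up[t] / SIGMA) ** 0.25
        T_ii[t] = (emit_per_side / (a * SIGMA)) ** 0.25
    return T_i, T_ii, out
```

The stepwise approach to the new equilibrium in Figure 1 falls out of the delay: each plate only learns about the other’s new state one transit time later.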

Figure 1: Temperature evolution of the system described above. The steplike evolution towards equilibrium is caused by the long transit time for radiation between the plates.

Figure 2: Outgoing radiation outside plate (ii). In equilibrium, this is equal to the incoming energy.

What does this thought experiment tell us about the physical nature of the net transfer of energy from the surface to the greenhouse layer? The net transfer of energy happens between two plates that are isolated from knowing about the state change of the other for a long time; the information about this change travels at the speed of light, and an instantaneous modulation of the net radiation field by this information is not a plausible explanation. In my opinion, the only reasonable way to describe the energy flow in this system is in terms of two energy fluxes leading to a net flux of energy away from the surface.

When the transfer of radiation in the system can be considered instantaneous, the two views on the nature of the energy flux are equivalent. But the assumption of instantaneous transfer is only an approximation, so the same argument applies in principle in the normal idealized greenhouse model, as well as in the real atmosphere.


29 Apr 2016

Comparing the CMIP5 model temperatures to HadCRUT4

Often I see the claim that the models used in the Coupled Model Intercomparison Project phase 5 (CMIP5) overestimate global temperatures compared to what is observed. I wanted to check this myself, so I downloaded the CMIP5 global temperatures from 42 models and the HadCRUT4 global mean temperatures from that beacon of sound climate science, www.wattsupwiththat.com, and got to work. The HadCRUT4 data I downloaded are missing the last year; maybe I’ll be bothered to go back and fix that later. I used pandas to open the Excel data and did some pre-processing. First I took a five-year rolling mean of both data sets, then I subtracted the 20th century mean value of the HadCRUT4 data from both data sets. The next step was to calculate the multimodel mean and standard deviation of the 42 CMIP5 time series.
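
A sketch of that pre-processing, assuming annual mean series already loaded into a Series hadcrut and a DataFrame cmip with one column per model (the names are illustrative, this is not the actual script):

```python
import pandas as pd

def preprocess(hadcrut: pd.Series, cmip: pd.DataFrame):
    """Five-year rolling means, anomalies relative to the 20th century
    HadCRUT4 mean, and the CMIP5 multimodel mean and spread."""
    hadcrut_smooth = hadcrut.rolling(window=5, center=True).mean()
    cmip_smooth = cmip.rolling(window=5, center=True).mean()

    # anomaly baseline: 20th century mean of the observations
    baseline = hadcrut_smooth.loc[1900:1999].mean()
    hadcrut_anom = hadcrut_smooth - baseline
    cmip_anom = cmip_smooth - baseline

    # multimodel statistics across the 42 model columns
    cmip_mean = cmip_anom.mean(axis=1)
    cmip_std = cmip_anom.std(axis=1)
    return hadcrut_anom, cmip_mean, cmip_std
```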

Figure 1 shows the HadCRUT4 temperatures together with the mean and standard deviation of the CMIP5 data, from the beginning of the data until 2005, relative to the 20th century HadCRUT4 mean value. For the entire period, HadCRUT4 is well within one standard deviation of the CMIP5 multimodel mean, which I would interpret to mean that the CMIP5 ensemble of models, on average, is able to simulate the temperature evolution reasonably well.


19 Apr 2016

Going to the EGU part 1

I’m attending the European Geosciences Union General Assembly conference this week. The event is huge, hosted in Vienna with something like 15,000 geoscientists attending, and I have a poster to present this afternoon. After I have presented it, I’ll upload the poster to figshare and link it here.


14 Mar 2016

Slaying dragons using Python and Pandas

I have been discussing climate on the internet over the last week. This can be relatively tiring, since arguments that have been debunked and discredited many times over keep getting recycled and reused. The problem is that someone not following the topic might not know that a certain argument has been proven false many times before.

I will go into one particular example that I come across almost every time I involve myself in discussions about climate: the trend in global warming since 1998. The argument goes as follows: since 1998 there has been almost no/very little/negative global warming, followed by a reference to a satellite dataset with a trendline drawn from 1998 until today. Often the data set shown is cropped to start in 1998 as well, to undermine the perception of the long-term trend. A typical example is this one from Climate Depot. My argument is that the trend from 1998 until today is not robust, a point that has probably been made many times, but that I wanted to make myself.

Remote Sensing Systems TLT trend analysis

I downloaded the Remote Sensing Systems (RSS) global mean lower troposphere temperature anomaly dataset (I wanted to use the UAH set, but I couldn’t get the site open that evening). Using Python and Pandas, I did a rudimentary trend analysis to see how robust the trend from 1998 until the end of the data set really is. The trend lines with different starting years are shown in Figure 1.
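
A sketch of the kind of trend calculation involved, assuming the RSS TLT anomalies sit in a pandas Series tlt with a monthly DatetimeIndex (this mirrors the idea, not the exact code in the repository linked at the end of the post):

```python
import numpy as np
import pandas as pd

def trend_per_decade(tlt: pd.Series, start_year: int) -> float:
    """Least-squares linear trend, in K/decade, from start_year to the end."""
    segment = tlt.loc[str(start_year):]
    years = segment.index.year + (segment.index.month - 1) / 12.0
    slope, _ = np.polyfit(years, segment.values, 1)   # K per year
    return 10.0 * slope

# trend lines with different starting years, as in Figure 1
# trends = {y: trend_per_decade(tlt, y) for y in range(1979, 2015)}
```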

A few things are immediately clear: the trend over the entire data set is 0.13 K/decade, and the year 1998 was a clear outlier (it was a strong El Niño year). Trends calculated based on 1998 or the years immediately prior to it are significantly smaller than the long term trend. Having an outlier at the beginning of the data set really skews the entire trend line (having one at the end impacts the trend a lot as well, but the reasonable choice is still to calculate the trend until the end of the data set), and choosing an outlier as the starting point of the trend analysis is a powerful way of getting the result you want.

Figure 1: The RSS global mean lower troposphere temperature (TLT) with trendlines calculated based on different starting times.

Figure 2 shows the trend in K/decade starting from every month in the data set, on a logarithmic y-axis. The point of this figure is to show that as we get to 1998 as the starting point, the trend falls off precipitously, by almost two orders of magnitude, while rebounding almost immediately afterwards. Of course, as we get towards the end of the data set the trend becomes increasingly meaningless, but the dip around 1998 powerfully shows the disingenuousness of choosing this year as the starting point of any trend analysis.

Figure 2: Decadal temperature trend based on each month in the RSS dataset. The trend calculated from 1998 is two orders of magnitude lower than for the surrounding periods.

In summary

The trend in satellite-measured temperatures since 1998 is a favorite argument of climate change deniers. I think it is seriously misleading to use 1998 as the starting point, since using this year does not give robust trends. Finally, has global warming slowed in the new millennium? Trends calculated based on times after 1998 are a bit lower than the long term trend, but the increase continued and there was no return to the previous normal temperatures in later years. Thus 1998 looks like a step change in global mean lower troposphere temperatures. What will be interesting to see is what happens over the next 10 years, since 2015-2016 will be another ENSO-driven outlier from the new, higher baseline. Will we see a return to the smooth increase of the period before 1998, or are we now in a regime where global warming comes in the form of step changes every few ENSO cycles? It will be interesting to see.

I made a GitHub repository for this analysis, and it contains the data set I used, the code and the outputs, so take a look over there if you want.

The black metal band Solefald has a track called Sagateller on the album Black for Death, which inspired this blog post. One line of lyrics reads “Ignorance, the dragon never dies”.


12 Nov 2015

Gridded NetCDF plotting tool 1

I’m working with a full scale general circulation model with atmospheric chemistry, and I get output from it in the NetCDF format, which is in wide use in large parts of the geosciences community. The format is excellent as far as I’m concerned, but there is always the problem of turning those raw output files into figures and numbers that can be presented. It seems everyone makes their own custom code to process the output, and so I set out to learn to interact with the file format myself.

Most of my programming experience is in Matlab and C++, but when I started this project I decided to switch to Python, mainly due to the open source nature of the language and its large community. So I started out thinking I would make a relatively general purpose plotting code for use with gridded NetCDF data. This is very much a work in progress: I started out with a very naive structure, and it ended up being seriously clunky to use. I made all the figures in the poster in my previous post using the code, but I’m not very happy with it.

I would like to rewrite it completely using the xray library for Python, which allows indexing directly on named axes, thus simplifying the bookkeeping of axes, which was a major headache as I tried to expand the functionality of my first code. The problem is finding time to do it in between actually using the code to do research, and all of the other stuff we all have to do. Again, this ties into the fact that most researchers make custom ad hoc code that they don’t maintain to a satisfactory degree. I actually want to make something that is eventually good enough that it could be a help for other students and researchers in my group, not just some ad hoc scripts.
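
To show what I mean by indexing on named axes, here is a small sketch of the xray pattern (the file name and variable names are made up for the example):

```python
import xray  # the library now known as xarray

# open a gridded NetCDF file lazily
ds = xray.open_dataset("model_output.nc")

# select by coordinate name and value instead of positional indices
t_surface = ds["T"].sel(lev=1000.0, method="nearest")
zonal_mean = t_surface.mean(dim="lon")

# the time axis can be sliced with labels as well
zonal_mean.sel(time=slice("2000-01-01", "2000-12-31"))
```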

Anyway, the code is available from my GitHub profile.


12 Nov 2015

Chemistry Climate Model Initiative Workshop 2015

I presented a poster this October at the CCMI workshop in Frascati, outside Rome, Italy. This was my first conference/workshop as part of my PhD studies, and I presented some preliminary results from my research project. I’ve made it available on my Figshare account.

The CCMI is an activity coordinated by SPARC where the goal is to follow up the evaluation of chemistry climate models, like WACCM or CAM-Chem, that can be used to study atmospheric chemistry by simulating, in a consistent way, both the circulation and the chemistry of the atmosphere.

My research project, which I’ve been working on for some months now, consists of using the Whole Atmosphere Community Climate Model (WACCM) to study the impact of volcanic eruptions on ozone chemistry in the stratosphere. I will try to use this blog to present my work as it evolves.