
Tutorial 7: Introduction to Earth System Models#

Week 1, Day 5, Climate Modeling

Content creators: Brodie Pearson, Jenna Pearson, Julius Busecke, and Tom Nicholas

Content reviewers: Yunlong Xu, Will Gregory, Peter Ohue, Derick Temfack, Zahra Khodakaramimaghsoud, Peizhen Yang, Younkap Nina Duplex, Ohad Zivan, Chi Zhang

Content editors: Abigail Bodner, Ohad Zivan, Chi Zhang

Production editors: Wesley Banfield, Jenna Pearson, Chi Zhang, Ohad Zivan

Our 2023 Sponsors: NASA TOPS, Google DeepMind, and CMIP

Tutorial Objectives#

In this tutorial students will learn how to load, visualize, and manipulate data from an Earth System Model (ESM) to explore the spatial variations in each component of the surface heat flux.

By the end of this tutorial students will be able to:

  • Load data from the Community Earth System Model (CESM), which was used in the most recent Coupled Model Intercomparison Project (CMIP6)

  • Analyze the zonal-mean surface energy budget of a realistic climate model (i.e., the budget at each latitude)

  • Link variations in the different surface heat fluxes to physical differences in air-surface conditions across regions.

Setup#

# installations ( uncomment and run this cell ONLY when using google colab or kaggle )

# note the conda install takes quite a while, but conda is REQUIRED to properly download the 
# dependencies (that are not just python packages)

# !pip install condacolab &> /dev/null           
# import condacolab
# condacolab.install()

# crucial to install all packages in one line, otherwise code will fail.
# !mamba install xarray-datatree intake-esm gcsfs xmip aiohttp cartopy nc-time-axis cf_xarray xarrayutils &> /dev/null
# imports

# google colab users, if you get an error please run this cell again

import intake
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr

from xmip.preprocessing import combined_preprocessing
from xarrayutils.plotting import shaded_line_plot
from xmip.utils import google_cmip_col

from datatree import DataTree
from xmip.postprocessing import _parse_metric

import cartopy.crs as ccrs
# @title Figure Settings
import ipywidgets as widgets       # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/ClimateMatchAcademy/course-content/main/cma.mplstyle")
# @title Video 1: Introduction to Earth System Models

from ipywidgets import widgets
from IPython.display import YouTubeVideo
from IPython.display import IFrame
from IPython.display import display


class PlayVideo(IFrame):
  def __init__(self, id, source, page=1, width=400, height=300, **kwargs):
    self.id = id
    if source == 'Bilibili':
      src = f'https://player.bilibili.com/player.html?bvid={id}&page={page}'
    elif source == 'Osf':
      src = f'https://mfr.ca-1.osf.io/render?url=https://osf.io/download/{id}/?direct%26mode=render'
    super(PlayVideo, self).__init__(src, width, height, **kwargs)


def display_videos(video_ids, W=400, H=300, fs=1):
  tab_contents = []
  for i, video_id in enumerate(video_ids):
    out = widgets.Output()
    with out:
      if video_ids[i][0] == 'Youtube':
        video = YouTubeVideo(id=video_ids[i][1], width=W,
                             height=H, fs=fs, rel=0)
        print(f'Video available at https://youtube.com/watch?v={video.id}')
      else:
        video = PlayVideo(id=video_ids[i][1], source=video_ids[i][0], width=W,
                          height=H, fs=fs, autoplay=False)
        if video_ids[i][0] == 'Bilibili':
          print(f'Video available at https://www.bilibili.com/video/{video.id}')
        elif video_ids[i][0] == 'Osf':
          print(f'Video available at https://osf.io/{video.id}')
      display(video)
    tab_contents.append(out)
  return tab_contents


video_ids = [('Youtube', 'e3O9DmhE46Y'), ('Bilibili', 'BV1qh4y1G7EC')]
tab_contents = display_videos(video_ids, W=730, H=410)
tabs = widgets.Tab()
tabs.children = tab_contents
for i in range(len(tab_contents)):
  tabs.set_title(i, video_ids[i][0])
display(tabs)

Section 1: The Community Earth System Model (CESM)#

Throughout today’s tutorials, we have been working with increasingly complex climate models. In this final tutorial we will look at data from the most complex type of climate model, an Earth System Model (ESM). These ESMs include the physical processes typical of General Circulation Models (GCMs), but also include chemical and biological changes within the climate system (e.g. changes in vegetation, biomes, atmospheric \(CO_2\)).

The Community Earth System Model (CESM) is the specific ESM that we will analyze here, in preparation for next week when you will look at many ESM data sets simultaneously. We will be analyzing a historical simulation of CESM, which covers the period 1850 to 2015 using the historically observed forcing of the climate system.

Section 1.1: Finding & Opening CMIP6 Data with Xarray#

Massive projects like CMIP6 can contain millions of datasets. For most practical applications we only need a subset of the data, which we can select by specifying exactly which data sets we need. The naming conventions of CMIP6 data sets are standardized across all models and experiments, which allows us to load multiple related data sets with efficient code.

In order to load a CMIP6 dataset the following information must be specified:

  1. variable_id: The variable(s) of interest

    • e.g., in CMIP6, sea surface temperature (SST) is called tos; in this tutorial we will load surface energy flux variables such as hfls (surface upward latent heat flux)

  2. source_id: The CMIP6 model(s) that we want data from

  3. table_id: The model component and output frequency of the variable(s)

    • We use Amon - data from the atmospheric model at monthly resolution

  4. grid_label: The grid that we want the data to be on

    • We use gn which is data on the model’s native grid, some models also provide gr (regridded data) and other grid options

  5. experiment_id: The CMIP6 experiments that we want to analyze

    • We will load two experiments: historical and hist-nat. We'll discuss the broader set of CMIP6 experiments (e.g., ssp126 and ssp585) in the next few tutorials

  6. member_id: This distinguishes between simulations when the same model is run repeatedly for an experiment

    • We use r1i1p1f1 for now, but will explore this in a later tutorial

Each of these terms is called a facet in CMIP vocabulary. To learn more about CMIP and the possible facets please see our CMIP Resource Bank and the CMIP website.

Once you have defined the facets of interest, you need a way to search for and retrieve the datasets that match them.

There are many ways to do this, but here we will show a workflow using an intake-esm catalog object based on a CSV file that is maintained by the Pangeo community. Additional methods to access CMIP data are discussed in our CMIP Resource Bank.

col = intake.open_esm_datastore(
    "https://storage.googleapis.com/cmip6/pangeo-cmip6.json"
)  # open an intake catalog containing the Pangeo CMIP cloud data
col

We just loaded the full collection of Pangeo cloud datasets into an intake catalog. There are many models to explore! You can run col.df['source_id'].unique() in a new cell to get a list of all available models, as sketched below.
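For instance, the catalog exposes its underlying pandas DataFrame as col.df, so listing the available models is a one-liner:

# list every model (source_id) available in the catalog
col.df["source_id"].unique()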

Now we will create a subset according to the provided facets using the .search() method, and finally open the zarr stores in the cloud into xarray datasets.

The data returned are xarray datasets which contain dask arrays. These are ‘lazy’, meaning the actual data will only be loaded when a computation is performed. What is loaded here is only the metadata, which enables us to inspect the data (e.g. the dimensionality/variable units) without loading in GBs or TBs of data!

A subtle but important step in the opening stage is the use of a preprocessing function! By passing preprocess=combined_preprocessing we apply crowdsourced fixes from the xMIP package to each dataset. This ensures consistent naming of dimensions (and other convenient things - see here for more).

# from the full `col` object, create a subset using facet search
cat = col.search(
    source_id="CESM2",
    variable_id=[
        "hfls",
        "hfss",
        "rlds",
        "rlus",
        "rsds",
        "rsus",
        "tas",
        "rsdt",
        "rsut",
        "rlut",
    ],
    member_id="r1i1p1f1",
    table_id="Amon",
    grid_label="gn",
    experiment_id=["historical", "hist-nat"],
    require_all_on=[
        "source_id"
    ],  # make sure that we only get models which have all of the above experiments
)

# convert the sub-catalog into a datatree object, by opening each dataset into an xarray.Dataset (without loading the data)
kwargs = dict(
    preprocess=combined_preprocessing,  # apply xMIP fixes to each dataset
    xarray_open_kwargs=dict(
        use_cftime=True
    ),  # ensure all datasets use the same time index
    storage_options={
        "token": "anon"
    },  # anonymous/public authentication to google cloud storage
)

# group the datasets into datatree nodes by model name and experiment
cat.esmcat.aggregation_control.groupby_attrs = ["source_id", "experiment_id"]
dt = cat.to_datatree(**kwargs)
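As noted above, the datasets in dt are lazy: only metadata has been transferred so far. Here is a minimal sketch of how requesting actual values triggers the cloud download (using tas, the near-surface air temperature variable we requested; the variable names tas_snapshot and tas_snapshot_loaded are ours):

# nothing is downloaded until values are requested
tas_snapshot = dt["CESM2"]["historical"].ds.tas.isel(time=0)  # still a lazy dask array
tas_snapshot_loaded = tas_snapshot.compute()  # this line transfers the actual data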

Section 1.2: Checking the CMIP6 Datasets#

We now have a “datatree” containing the data we searched for. A datatree is a high-level container of xarray data, useful for organizing many related datasets together. You can think of a single DataTree object as being like a (nested) dictionary of xarray.Dataset objects. Each dataset in the tree is known as a “node” or “group”, and we can also have empty nodes. You can explore the nodes of the tree and its contents interactively in a similar way to how you can explore the contents of an xarray.Dataset. Click on the arrows to expand the information about the datatree below:

dt

Each group in the tree is stored under a corresponding name, and we can select nodes via their name. The real usefulness of a datatree comes from having many groups at different depths, analogous to how one might store files in nested directories (e.g. day1/experiment1/data.txt, day1/experiment2/data.txt etc.).

In our case the particular datatree object has different CMIP models and different experiments stored at distinct levels of the tree. This is useful because we can select just one experiment for one model, or all experiments for one model, or all experiments for all models!

We can also apply xarray operations (e.g. taking the average using the .mean() method) over all the data in a tree at once, just by calling that same method on the DataTree object. We can even map custom functions over all nodes in the tree using dt.map_over_subtree(my_function).

All the operations below can be accomplished without using datatrees, but they save us many lines of code since we don't have to use for loops over all the different datasets. For more information about datatree see the documentation here.
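As a concrete sketch of mapping a custom function over the tree (the helper name take_time_mean is ours; the guard for nodes without a time dimension is a precaution for empty nodes):

# apply the same reduction to every dataset in the tree at once
def take_time_mean(ds):
    if "time" not in ds.dims:  # skip nodes (e.g., empty ones) without a time dimension
        return ds
    return ds.mean(dim="time")

dt_time_mean = dt.map_over_subtree(take_time_mean)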

Now, let’s pull out relevant CESM2 datasets from the datatree; the historical simulation (human & natural forcing) and the hist-nat simulation (natural forcing only).

# the historical including anthropogenic forcing
hist_dt = dt["CESM2"]["historical"]
hist_dt
# the historical run without anthropogenic forcing
hist_nat = dt["CESM2"]["hist-nat"]
hist_nat

Section 2: Surface Energy Balance#

Up until this point we have been analyzing budgets at the top of the atmosphere. Now we will move to Earth’s surface, where we will run into both familiar and unfamiliar terminology.

The first two terms we will analyze define the radiative component of the surface energy budget: shortwave and longwave radiation. For each of these terms, there is an upwelling and downwelling component. This is because, for example, some of the downwelling shortwave radiation is reflected back upwards due to the surface albedo. Similarly, some upwelling longwave radiation from Earth is absorbed by the atmosphere and re-emitted back towards the surface. The net radiative flux is given as

\[\begin{align} R_{net} = [R_{\uparrow S} - R_{\downarrow S}] + [R_{\uparrow L} - R_{\downarrow L}] \end{align}\]

where the arrows indicate upwelling (\(\uparrow\)) and downwelling (\(\downarrow\)) fluxes, and the subscripts \(S\) and \(L\) denote shortwave and longwave radiation. Note that, in keeping with the rest of the variables we will look at, the net radiation is defined so that a positive value refers to an upward flux of energy from the ocean or land to the atmosphere. That is, positive values indicate heat transport upwards and away from the surface.
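As a quick check of this sign convention with hypothetical round numbers: if a surface receives \(R_{\downarrow S} = 200 \ W m^{-2}\) and reflects \(R_{\uparrow S} = 20 \ W m^{-2}\), while emitting \(R_{\uparrow L} = 400 \ W m^{-2}\) and receiving \(R_{\downarrow L} = 350 \ W m^{-2}\), then \(R_{net} = (20 - 200) + (400 - 350) = -130 \ W m^{-2}\): a net downward radiative flux that warms the surface.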

We will also be looking at latent and sensible heat. Sensible heat is the heat transferred due to a temperature difference between touching materials, for example between the air and the land or ocean surface. If the surface air is warmer than the land/ocean, heat is transferred from the air to the land/ocean (a downward, or negative, sensible heat flux); the transfer is in the opposite direction if the air is colder than the land/ocean.

Latent heat is due to evaporation and condensation of water, as these phase changes absorb and release heat respectively. Here ‘latent’ means hidden, in that the energy is stored in molecules and there is no temperature change sensed through a thermometer even though a heat transfer takes place.

While these are not the only terms that comprise the surface energy budget (others include geothermal heating, the latent heat of fusion for melting ice or snow, and biological processes), they are typically the dominant terms that set the global patterns.
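Combining the radiative, latent, and sensible terms under the same positive-upward sign convention gives the net surface heat flux that we compute in the code below:

\[\begin{align} Q_{net} = R_{net} + Q_{latent} + Q_{sensible} \end{align}\]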

Let’s take a look at the long_name of some variables we just downloaded.

(
    hist_dt.hfls.long_name,
    hist_dt.hfss.long_name,
    hist_dt.rlds.long_name,
    hist_dt.rlus.long_name,
    hist_dt.rsds.long_name,
    hist_dt.rsus.long_name,
)
# predefine heat fluxes for each experiment, taking the time mean over the full period

# model output variables (.squeeze() removes singleton dimensions)
hist_am_latent_heat_flux = hist_dt.ds.hfls.mean(dim="time").squeeze()
hist_am_sensible_heat_flux = hist_dt.ds.hfss.mean(dim="time").squeeze()
hist_am_downwelling_longwave_flux = hist_dt.ds.rlds.mean(dim="time").squeeze()
hist_am_upwelling_longwave_flux = hist_dt.ds.rlus.mean(dim="time").squeeze()
hist_am_downwelling_shortwave_flux = hist_dt.ds.rsds.mean(dim="time").squeeze()
hist_am_upwelling_shortwave_flux = hist_dt.ds.rsus.mean(dim="time").squeeze()

# derived variables
hist_am_net_shortwave_heat_flux = (
    hist_am_upwelling_shortwave_flux - hist_am_downwelling_shortwave_flux
)
hist_am_net_longwave_heat_flux = (
    hist_am_upwelling_longwave_flux - hist_am_downwelling_longwave_flux
)
hist_am_net_heat_flux = (
    hist_am_net_shortwave_heat_flux
    + hist_am_net_longwave_heat_flux
    + hist_am_latent_heat_flux
    + hist_am_sensible_heat_flux
)

Now we will plot the time mean of each flux component over the historical period (1850-2015).

%matplotlib inline

fig, ([ax_latent, ax_shortwave], [ax_sensible, ax_longwave]) = plt.subplots(
    ncols=2, nrows=2, figsize=[12, 6], subplot_kw={"projection": ccrs.Robinson()}
)

# plot the time-mean surface latent heat flux over the historical period
hist_am_latent_heat_flux.plot(
    ax=ax_latent,
    x="lon",
    y="lat",
    transform=ccrs.PlateCarree(),
    vmin=-300,
    vmax=300,
    cmap="coolwarm",
    robust=True,
    cbar_kwargs={"label": "$W/m^2$"},
)
ax_latent.coastlines()
ax_latent.set_title("Latent Heat Flux")

# Repeat for sensible heat flux
hist_am_sensible_heat_flux.plot(
    ax=ax_sensible,
    x="lon",
    y="lat",
    transform=ccrs.PlateCarree(),
    vmin=-150,
    vmax=150,
    cmap="coolwarm",
    robust=True,
    cbar_kwargs={"label": "$W/m^2$"},
)
ax_sensible.coastlines()
ax_sensible.set_title("Sensible Heat Flux")

# Repeat for net shortwave radiative heat flux
hist_am_net_shortwave_heat_flux.plot(
    ax=ax_shortwave,
    x="lon",
    y="lat",
    transform=ccrs.PlateCarree(),
    vmin=-300,
    vmax=300,
    cmap="coolwarm",
    robust=True,
    cbar_kwargs={"label": "$W/m^2$"},
)
ax_shortwave.coastlines()
ax_shortwave.set_title("Net Upward Shortwave Flux")

# Repeat for net longwave radiative heat flux
hist_am_net_longwave_heat_flux.plot(
    ax=ax_longwave,
    x="lon",
    y="lat",
    transform=ccrs.PlateCarree(),
    vmin=-150,
    vmax=150,
    cmap="coolwarm",
    robust=True,
    cbar_kwargs={"label": "$W/m^2$"},
)
ax_longwave.coastlines()
ax_longwave.set_title("Net Upward Longwave Flux")

Questions 2: Climate Connection#

  1. What do you think causes the large spatial variations of the sensible heat flux between strong positive, weak, and strong negative fluxes? Consider different surface types in your answer.

Section 3: Surface Energy Budget by Latitude#

We can also calculate a zonal average which allows us to compare the contributions of each of these fluxes to the net heat flux by latitude (similar to the plot in the last lecture that considered the RCE model prediction as a function of latitude).

To calculate a spatial average of a gridded data set, we often have to weight the data based on the size of the area it is describing. Fortunately, CESM data is on a regular latitude-longitude grid, which means that grid cells at a specific latitude have the same area as all the other grid cells at that latitude. This makes a zonal average easy, because at each latitude we can simply calculate the mean of all data at that latitude.

Note: Our averaging would have required area-weighting if we were calculating a global mean (as you did in previous Tutorials) or if you had irregularly gridded data (which we will encounter on W2D1)!
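For reference, a minimal sketch of such an area-weighted global mean using cosine-of-latitude weights (valid for a regular latitude-longitude grid like this one; the x/y dimension names come from the xMIP preprocessing):

# area-weighted global mean via cos(latitude) weights
weights = np.cos(np.deg2rad(hist_am_latent_heat_flux.lat))
global_mean_lhf = hist_am_latent_heat_flux.weighted(weights).mean(dim=["x", "y"])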

# find the zonal means (.squeeze() removes singleton dimensions)
hist_azm_latent_heat_flux = hist_am_latent_heat_flux.mean(dim="x").squeeze()
hist_azm_sensible_heat_flux = hist_am_sensible_heat_flux.mean(dim="x").squeeze()
hist_azm_net_shortwave_flux = hist_am_net_shortwave_heat_flux.mean(dim="x").squeeze()
hist_azm_net_longwave_flux = hist_am_net_longwave_heat_flux.mean(dim="x").squeeze()
hist_azm_net_heat_flux = hist_am_net_heat_flux.mean(dim="x").squeeze()
lat = hist_am_latent_heat_flux.lat[0, :]  # extract a 1D latitude array from the 2D lat coordinate

fig, ax = plt.subplots(figsize=(8, 5))

ax.plot(lat, hist_azm_latent_heat_flux, label="Latent Heat")
ax.plot(lat, hist_azm_sensible_heat_flux, label="Sensible Heat")
ax.plot(lat, hist_azm_net_shortwave_flux, label="Shortwave")
ax.plot(lat, hist_azm_net_longwave_flux, label="Longwave")
ax.plot(lat, hist_azm_net_heat_flux, lw=3, color="k", label="Net")
ax.plot(lat, 0 * lat, color="black")  # zero line for reference

ax.set_title("Components of Annual Surface Energy Budget (+ up)")
ax.set_xlabel("Latitude (degrees)")
ax.set_ylabel("Energy Flux ($W/m^2$)")
ax.legend()

Questions 3: Climate Connection#

  1. Describe the dominant balance in the tropics (30S to 30N).

  2. What is the dominant balance in the polar regions (poleward of 60N and 60S)?

  3. What do you think causes the dip in latent heat near the equator?

  4. Is there symmetry in the longwave radiation in the high southern and northern latitudes? What about for sensible heat?

Bonus Question: Climate Connection#

  1. Hypothetically, imagine this plot showed that the net heat flux was:

  • Negative 100 \(W m^{-2}\) between 45\(^\circ\)N and 45\(^\circ\)S (i.e., over the 90 degrees of latitude centered on the Equator), and

  • Positive 100 \(W m^{-2}\) between 45\(^\circ\)N and 90\(^\circ\)N and between 45\(^\circ\)S and 90\(^\circ\)S

Would you expect Earth to warm, cool, or remain the same temperature? Why?

Summary#

In this tutorial, you learned how to identify and access specific CMIP6 datasets, which is vital for handling the vast output generated by climate models. You analyzed data from a CESM simulation, focusing on how shortwave and longwave radiation, together with sensible and latent heat fluxes, contribute to the surface energy budget. You also explored the zonal-mean surface energy budget, seeing how the different surface heat flux components vary with latitude due to physical differences in air-surface conditions across regions.

Resources#

This tutorial uses data from the simulations conducted as part of the CMIP6 multi-model ensemble.

For examples on how to access and analyze data, please visit the Pangeo Cloud CMIP6 Gallery

For more information on what CMIP is and how to access the data, please see this page.