{ "cells": [ { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ClimateMatchAcademy/course-content/blob/main/tutorials/W1D5_ClimateModeling/student/W1D5_Tutorial7.ipynb)   \"Open" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Tutorial 7: Introduction to Earth System Models\n", "\n", "\n", "**Week 1, Day 5, Climate Modeling**\n", "\n", "**Content creators:** Brodie Pearson, Jenna Pearson, Julius Busecke, and Tom Nicholas\n", "\n", "**Content reviewers:** Yunlong Xu, Will Gregory, Peter Ohue, Derick Temfack, Zahra Khodakaramimaghsoud, Peizhen Yang, Younkap Nina Duplex, Ohad Zivan, Chi Zhang\n", "\n", "**Content editors:** Abigail Bodner, Ohad Zivan, Chi Zhang\n", "\n", "**Production editors:** Wesley Banfield, Jenna Pearson, Chi Zhang, Ohad Zivan\n", "\n", "**Our 2023 Sponsors:** NASA TOPS, Google DeepMind, and CMIP\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Tutorial Objectives\n", "\n", "In this tutorial students will learn how to load, visualize, and manipulate data from an Earth System Model (ESM) to explore the spatial variations in each component of the surface heat flux.\n", "\n", "By the end of this tutorial students will be able to:\n", "* Load data from the Community Earth System Model (CESM), which was used in the most recent [Coupled Model Intercomparison Project (CMIP6)](https://wcrp-cmip.org/cmip-phase-6-cmip6/)\n", "* Analyze the zonal-mean surface energy budget of a realistic climate model (i.e., the budget at each latitude)\n", "* Link variations in different surface heat flux to physical differences in the air-surface conditions across regions.\n" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Setup" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 148771, "status": "ok", "timestamp": 1682000677034, "user": { "displayName": "Wesley Banfield", "userId": "15738596203150760123" }, "user_tz": -120 }, "tags": [ "colab" ] }, "outputs": [], "source": [ "# installations ( uncomment and run this cell ONLY when using google colab or kaggle )\n", "\n", "# note the conda install takes quite a while, but conda is REQUIRED to properly download the \n", "# dependencies (that are not just python packages)\n", "\n", "# !pip install condacolab &> /dev/null \n", "# import condacolab\n", "# condacolab.install()\n", "\n", "# crucial to install all packages in one line, otherwise code will fail.\n", "# !mamba install xarray-datatree intake-esm gcsfs xmip aiohttp cartopy nc-time-axis cf_xarray xarrayutils &> /dev/null" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 521, "status": "error", "timestamp": 1682000677547, "user": { "displayName": "Wesley Banfield", "userId": "15738596203150760123" }, "user_tz": -120 } }, "outputs": [], "source": [ "# imports\n", "\n", "# google colab users, if you get an error please run this cell again\n", "\n", "import intake\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import xarray as xr\n", "\n", "from xmip.preprocessing import combined_preprocessing\n", "from xarrayutils.plotting import shaded_line_plot\n", "from xmip.utils import google_cmip_col\n", "\n", "from datatree import DataTree\n", "from xmip.postprocessing import _parse_metric\n", "\n", "import cartopy.crs 
as ccrs" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {} }, "outputs": [], "source": [ "# @title Figure Settings\n", "import ipywidgets as widgets # interactive display\n", "%config InlineBackend.figure_format = 'retina'\n", "plt.style.use(\"https://raw.githubusercontent.com/ClimateMatchAcademy/course-content/main/cma.mplstyle\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [] }, "outputs": [], "source": [ "# @title Video 1: Introduction to Earth System Models\n", "\n", "from ipywidgets import widgets\n", "from IPython.display import YouTubeVideo\n", "from IPython.display import IFrame\n", "from IPython.display import display\n", "\n", "\n", "class PlayVideo(IFrame):\n", " def __init__(self, id, source, page=1, width=400, height=300, **kwargs):\n", " self.id = id\n", " if source == 'Bilibili':\n", " src = f'https://player.bilibili.com/player.html?bvid={id}&page={page}'\n", " elif source == 'Osf':\n", " src = f'https://mfr.ca-1.osf.io/render?url=https://osf.io/download/{id}/?direct%26mode=render'\n", " super(PlayVideo, self).__init__(src, width, height, **kwargs)\n", "\n", "\n", "def display_videos(video_ids, W=400, H=300, fs=1):\n", " tab_contents = []\n", " for i, video_id in enumerate(video_ids):\n", " out = widgets.Output()\n", " with out:\n", " if video_ids[i][0] == 'Youtube':\n", " video = YouTubeVideo(id=video_ids[i][1], width=W,\n", " height=H, fs=fs, rel=0)\n", " print(f'Video available at https://youtube.com/watch?v={video.id}')\n", " else:\n", " video = PlayVideo(id=video_ids[i][1], source=video_ids[i][0], width=W,\n", " height=H, fs=fs, autoplay=False)\n", " if video_ids[i][0] == 'Bilibili':\n", " print(f'Video available at https://www.bilibili.com/video/{video.id}')\n", " elif video_ids[i][0] == 'Osf':\n", " print(f'Video available at https://osf.io/{video.id}')\n", " display(video)\n", " tab_contents.append(out)\n", " return tab_contents\n", "\n", "\n", "video_ids = [('Youtube', 'e3O9DmhE46Y'), ('Bilibili', 'BV1qh4y1G7EC')]\n", "tab_contents = display_videos(video_ids, W=730, H=410)\n", "tabs = widgets.Tab()\n", "tabs.children = tab_contents\n", "for i in range(len(tab_contents)):\n", " tabs.set_title(i, video_ids[i][0])\n", "display(tabs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "pycharm": { "name": "#%%\n" }, "tags": [ "remove-input" ] }, "outputs": [], "source": [ "# @markdown\n", "from ipywidgets import widgets\n", "from IPython.display import IFrame\n", "\n", "link_id = \"dn8hs\"\n", "\n", "download_link = f\"https://osf.io/download/{link_id}/\"\n", "render_link = f\"https://mfr.ca-1.osf.io/render?url=https://osf.io/{link_id}/?direct%26mode=render%26action=download%26mode=render\"\n", "# @markdown\n", "out = widgets.Output()\n", "with out:\n", " print(f\"If you want to download the slides: {download_link}\")\n", " display(IFrame(src=f\"{render_link}\", width=730, height=410))\n", "display(out)" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Section 1: The Community Earth System Model (CESM)" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "Throughout today's tutorials, we have been working with increasingly complex climate models. In this final tutorial we will look at data from the most complex type of climate model, an Earth System Model (ESM). 
These ESMs include the physical processes typical of General Circulation Models (GCMs), but also represent chemical and biological processes within the climate system (e.g. changes in vegetation, biomes, atmospheric $CO_2$).\n", "\n", "The [Community Earth System Model](https://www.cesm.ucar.edu/models/cesm2) (**CESM**) is the specific ESM that we will analyze here, in preparation for next week where you will look at many ESM data sets simultaneously. We will be analyzing a **historical simulation** of CESM, which covers the period 1850 to 2015 using the historically observed forcing of the climate system.\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Section 1.1: Finding & Opening CMIP6 Data with Xarray\n", "\n", "Massive projects like [CMIP6](https://wcrp-cmip.org/cmip-phase-6-cmip6/) can contain millions of datasets. For most practical applications we only need a subset of the data, which we can select by specifying exactly which data sets we need. The naming conventions of CMIP6 data sets are standardized across all models and experiments, which allows us to load multiple related data sets with efficient code.\n", "\n", "In order to load a CMIP6 dataset the following information must be specified:\n", "1. ***variable_id***: The variable(s) of interest\n", " * for example, in CMIP6 SST is called *tos*; in this tutorial we will load several surface energy flux variables (e.g. *hfls*, *hfss*)\n", "2. ***source_id***: The CMIP6 model(s) that we want data from\n", "3. ***table_id***: The origin system and output frequency desired of the variable(s)\n", " * We use *Amon* - data from the atmosphere model at monthly resolution\n", "4. ***grid_label***: The grid that we want the data to be on\n", " * We use *gn*, which is data on the model's *native* grid; some models also provide *gr* (*regridded* data) and other grid options\n", "5. ***experiment_id***: The CMIP6 experiments that we want to analyze\n", " * We will load two experiments: *historical* and *hist-nat*. We'll discuss the wider range of CMIP6 experiments in the next few tutorials\n", "6. ***member_id***: This distinguishes simulations if the same model is run repeatedly for an experiment\n", " * We use *r1i1p1f1* for now, but will explore this in a later tutorial\n", "\n", "Each of these terms is called a *facet* in CMIP vocabulary. To learn more about CMIP and the possible facets please see our [CMIP Resource Bank](https://github.com/ClimateMatchAcademy/course-content/blob/main/tutorials/CMIP/CMIP_resource_bank.md) and the [CMIP website](https://wcrp-cmip.org/).\n", "\n", "Once you have defined the facets of interest, you need a way to search for and retrieve the datasets that match them.\n", "\n", "There are many ways to do this, but here we will show a workflow using an [intake-esm](https://intake-esm.readthedocs.io/en/stable/) catalog object based on a CSV file that is maintained by the Pangeo community. 
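\n", "\n", "As a rough illustration of how these facets are used (a sketch only; all of the facet values below appear in the actual search we run later in this tutorial, and `col` is the intake-esm catalog opened below):\n", "\n", "```python\n", "# sketch: a facet query is just a set of keyword arguments for the catalog's .search() method\n", "query = dict(\n", "    source_id=\"CESM2\",  # the model\n", "    variable_id=\"hfls\",  # surface upward latent heat flux\n", "    table_id=\"Amon\",  # monthly atmospheric output\n", "    grid_label=\"gn\",  # the model's native grid\n", "    experiment_id=\"historical\",\n", "    member_id=\"r1i1p1f1\",\n", ")\n", "# cat = col.search(**query)  # returns a smaller catalog; cat.df lists the matching datasets\n", "```\n", "\n", "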
Additional methods to access CMIP data are discussed in our [CMIP Resource Bank](https://github.com/ClimateMatchAcademy/course-content/blob/main/tutorials/CMIP/CMIP_resource_bank.md).\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "col = intake.open_esm_datastore(\n", " \"https://storage.googleapis.com/cmip6/pangeo-cmip6.json\"\n", ") # open an intake catalog containing the Pangeo CMIP cloud data\n", "col" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "We just loaded the full collection of Pangeo cloud datasets into an intake catalog. In this tutorial we will only use one model (the *source_id* CESM2), but there are many more to test out! You could run `col.df['source_id'].unique()` in a new cell to get a list of all available models.\n", "\n", "Now we will create a subset according to the provided facets using the `.search()` method, and finally open the [Zarr stores](https://zarr.readthedocs.io/en/stable/) in the cloud into xarray datasets.\n", "\n", "The data returned are xarray datasets which contain [dask arrays](https://docs.dask.org/en/stable/array.html). These are 'lazy', meaning the actual data will only be loaded when a computation is performed. What is loaded here is only the metadata, which enables us to inspect the data (e.g. the dimensionality/variable units) without loading in GBs or TBs of data!\n", "\n", "A subtle but important step in the opening stage is the use of a preprocessing function! By passing `preprocess=combined_preprocessing` we apply crowdsourced fixes from the [xMIP](https://github.com/jbusecke/xmip) package to each dataset. This ensures consistent naming of dimensions (and other convenient things - see [here](https://cmip6-preprocessing.readthedocs.io/en/latest/tutorial.html) for more)."
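, "\n", "If you want to convince yourself that nothing heavy is downloaded at this stage, a quick check along these lines can help (a sketch; it assumes `ds` is one of the xarray datasets opened from the catalog below, which contains the near-surface air temperature `tas`):\n", "\n", "```python\n", "# sketch: inspecting a lazily-loaded dataset without reading the data\n", "print(ds.tas)  # prints dimensions, coordinates and dask chunk sizes, not values\n", "print(ds.tas.nbytes / 1e9, \"GB\")  # the size the array *would* occupy, computed from metadata alone\n", "# tas_first = ds.tas.isel(time=0).compute()  # only an explicit compute/plot step reads data from the cloud\n", "```"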
] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 8712, "status": "ok", "timestamp": 1681772839862, "user": { "displayName": "Brodie Pearson", "userId": "05269028596972519847" }, "user_tz": 420 } }, "outputs": [], "source": [ "# from the full `col` object, create a subset using facet search\n", "cat = col.search(\n", " source_id=\"CESM2\",\n", " variable_id=[\n", " \"hfls\",\n", " \"hfss\",\n", " \"rlds\",\n", " \"rlus\",\n", " \"rsds\",\n", " \"rsus\",\n", " \"tas\",\n", " \"rsdt\",\n", " \"rsut\",\n", " \"rlut\",\n", " ],\n", " member_id=\"r1i1p1f1\",\n", " table_id=\"Amon\",\n", " grid_label=\"gn\",\n", " experiment_id=[\"historical\", \"hist-nat\"],\n", " require_all_on=[\n", " \"source_id\"\n", " ], # make sure that we only get models which have all of the above experiments\n", ")\n", "\n", "# convert the sub-catalog into a datatree object, by opening each dataset into an xarray.Dataset (without loading the data)\n", "kwargs = dict(\n", " preprocess=combined_preprocessing, # apply xMIP fixes to each dataset\n", " xarray_open_kwargs=dict(\n", " use_cftime=True\n", " ), # ensure all datasets use the same time index\n", " storage_options={\n", " \"token\": \"anon\"\n", " }, # anonymous/public authentication to google cloud storage\n", ")\n", "\n", "cat.esmcat.aggregation_control.groupby_attrs = [\"source_id\", \"experiment_id\"]\n", "dt = cat.to_datatree(**kwargs)" ] }, { "cell_type": "markdown", "metadata": { "execution": {}, "jp-MarkdownHeadingCollapsed": true }, "source": [ "## Section 1.2: Checking the CMIP6 Datasets\n", "\n", "We now have a \"[datatree](https://xarray-datatree.readthedocs.io/en/latest/quick-overview.html)\" containing the data we searched for. A datatree is a high-level container of xarray data, useful for organizing many related datasets together. You can think of a single `DataTree` object as being like a (nested) dictionary of `xarray.Dataset` objects. Each dataset in the tree is known as a \"node\" or \"group\", and we can also have empty nodes. You can explore the nodes of the tree and its contents interactively in a similar way to how you can explore the contents of an `xarray.Dataset`. Click on the arrows to expand the information about the datatree below:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 1263, "status": "ok", "timestamp": 1681700860145, "user": { "displayName": "Brodie Pearson", "userId": "05269028596972519847" }, "user_tz": 420 } }, "outputs": [], "source": [ "dt" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "Each group in the tree is stored under a corresponding `name`, and we can select nodes via their name. The real usefulness of a datatree comes from having many groups at different depths, analogous to how one might store files in nested directories (e.g. `day1/experiment1/data.txt`, `day1/experiment2/data.txt` etc.). \n", "\n", "In our case the particular datatree object has different CMIP models and different experiments stored at distinct levels of the tree. This is useful because we can select just one experiment for one model, or all experiments for one model, or all experiments for all models!\n", "\n", "We can also apply xarray operations (e.g. taking the average using the `.mean()` method) over all the data in a tree at once, just by calling that same method on the `DataTree` object. 
We can even map custom functions over all nodes in the tree using [`dt.map_over_subtree(my_function)`](https://xarray-datatree.readthedocs.io/en/latest/generated/datatree.map_over_subtree.html#datatree-map-over-subtree).\n", "\n", "All the operations below can be accomplished without using datatrees, but using them saves us many lines of code as we don't have to use `for` loops over all the different datasets. For more information about datatree see the [documentation here](https://xarray-datatree.readthedocs.io/en/latest/index.html).\n", "\n", "Now, let's pull out the relevant CESM2 datasets from the datatree: the *historical* simulation (human & natural forcing) and the *hist-nat* simulation (natural forcing only)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 9, "status": "ok", "timestamp": 1681701678408, "user": { "displayName": "Brodie Pearson", "userId": "05269028596972519847" }, "user_tz": 420 } }, "outputs": [], "source": [ "# the historical simulation, including anthropogenic forcing\n", "hist_dt = dt[\"CESM2\"][\"historical\"]\n", "hist_dt" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 209, "status": "ok", "timestamp": 1681701679742, "user": { "displayName": "Brodie Pearson", "userId": "05269028596972519847" }, "user_tz": 420 } }, "outputs": [], "source": [ "# the historical run without anthropogenic forcing\n", "hist_nat = dt[\"CESM2\"][\"hist-nat\"]\n", "hist_nat" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Section 2: Surface Energy Balance" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "\n", "Up until this point we have been analyzing budgets at the top of the atmosphere. Now we will move to Earth's surface, where we will run into both familiar and unfamiliar terminology.\n", "\n", "The first two terms we will analyze define the radiative component of the surface energy budget: shortwave and longwave radiation. For each of these terms, there is an upwelling and downwelling component. This is because, for example, some of the downwelling shortwave radiation is reflected back upwards due to the surface albedo. Similarly, some upwelling longwave radiation from Earth is absorbed by the atmosphere and re-emitted back towards the surface. The net radiative flux is given as\n", "\n", "\\begin{align}\n", "R_{net} = [R_{\\uparrow S} - R_{\\downarrow S}] + [R_{\\uparrow L} - R_{\\downarrow L}]\n", "\\end{align}\n", "\n", "where the arrows indicate up(down)welling and the $S$ and $L$ denote shortwave and longwave. Note that, in keeping with the rest of the variables we will look at, the net radiation is defined so that a positive value refers to an upward flux of energy from the ocean or land to the atmosphere. That is, positive indicates heat transport upwards and away from the surface.\n", "\n", "We will also be looking at latent and sensible heat. **[Sensible heat](https://glossary.ametsoc.org/wiki/Sensible_heat)** is the heat transferred due to a temperature difference between touching materials, for example between the air and the land or ocean surface. 
In this case, if the surface air is warmer than the land/ocean then heat is transferred from the air to the land/ocean (which is a downward, or negative, sensible heat flux), and in the opposite direction if the air is colder than the land/ocean.\n", "\n", "**[Latent heat](https://glossary.ametsoc.org/wiki/Latent_heat)** is due to evaporation and condensation of water, as these phase changes absorb and release heat respectively. Here 'latent' means hidden: the energy is stored in the water molecules themselves, so no temperature change is sensed by a thermometer even though a heat transfer takes place.\n", "\n", "While these are not the only terms that comprise the surface energy budget (e.g. geothermal heating, latent heat of fusion for melting ice or snow, and biological processes also contribute), they are typically the dominant terms that set the global patterns.\n", "\n", "Let's take a look at the `long_name` of some variables we just downloaded.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 353, "status": "ok", "timestamp": 1681701687079, "user": { "displayName": "Brodie Pearson", "userId": "05269028596972519847" }, "user_tz": 420 } }, "outputs": [], "source": [ "(\n", " hist_dt.hfls.long_name,\n", " hist_dt.hfss.long_name,\n", " hist_dt.rlds.long_name,\n", " hist_dt.rlus.long_name,\n", " hist_dt.rsds.long_name,\n", " hist_dt.rsus.long_name,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {} }, "outputs": [], "source": [ "# predefine heat fluxes for the historical experiment and take annual means\n", "\n", "# model output variables (.squeeze() removes singleton dimensions)\n", "hist_am_latent_heat_flux = hist_dt.ds.hfls.mean(dim=\"time\").squeeze()\n", "hist_am_sensible_heat_flux = hist_dt.ds.hfss.mean(dim=\"time\").squeeze()\n", "hist_am_downwelling_longwave_flux = hist_dt.ds.rlds.mean(dim=\"time\").squeeze()\n", "hist_am_upwelling_longwave_flux = hist_dt.ds.rlus.mean(dim=\"time\").squeeze()\n", "hist_am_downwelling_shortwave_flux = hist_dt.ds.rsds.mean(dim=\"time\").squeeze()\n", "hist_am_upwelling_shortwave_flux = hist_dt.ds.rsus.mean(dim=\"time\").squeeze()\n", "\n", "# derived variables\n", "hist_am_net_shortwave_heat_flux = (\n", " hist_am_upwelling_shortwave_flux - hist_am_downwelling_shortwave_flux\n", ")\n", "hist_am_net_longwave_heat_flux = (\n", " hist_am_upwelling_longwave_flux - hist_am_downwelling_longwave_flux\n", ")\n", "hist_am_net_heat_flux = (\n", " hist_am_net_shortwave_heat_flux\n", " + hist_am_net_longwave_heat_flux\n", " + hist_am_latent_heat_flux\n", " + hist_am_sensible_heat_flux\n", ")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "Now we will plot the annual mean of each of these fluxes over the historical period (1850-2015)."
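, "\n", "As a quick check of the sign convention used in the previous cell, consider some purely illustrative numbers (not CESM output): suppose at one grid point $R_{\\uparrow S}=50$, $R_{\\downarrow S}=200$, $R_{\\uparrow L}=450$ and $R_{\\downarrow L}=400$, with a latent heat flux of $80$ and a sensible heat flux of $20$ (all in $W/m^2$, positive upward). The net surface flux is then $(50-200)+(450-400)+80+20=0$ $W/m^2$: the surface gains $100$ $W/m^2$ radiatively and loses the same amount through the turbulent (latent plus sensible) heat fluxes."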
] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 99914, "status": "ok", "timestamp": 1681701792787, "user": { "displayName": "Brodie Pearson", "userId": "05269028596972519847" }, "user_tz": 420 } }, "outputs": [], "source": [ "%matplotlib inline\n", "\n", "fig, ([ax_latent, ax_shortwave], [ax_sensible, ax_longwave]) = plt.subplots(\n", " ncols=2, nrows=2, figsize=[12, 6], subplot_kw={\"projection\": ccrs.Robinson()}\n", ")\n", "\n", "# plot surface latent heat flux the first month of the historical period\n", "hist_am_latent_heat_flux.plot(\n", " ax=ax_latent,\n", " x=\"lon\",\n", " y=\"lat\",\n", " transform=ccrs.PlateCarree(),\n", " vmin=-300,\n", " vmax=300,\n", " cmap=\"coolwarm\",\n", " robust=True,\n", " cbar_kwargs={\"label\": \"$W/m^2$\"},\n", ")\n", "ax_latent.coastlines()\n", "ax_latent.set_title(\"Latent Heat Flux\")\n", "\n", "# Repeat for sensible heat flux\n", "hist_am_sensible_heat_flux.plot(\n", " ax=ax_sensible,\n", " x=\"lon\",\n", " y=\"lat\",\n", " transform=ccrs.PlateCarree(),\n", " vmin=-150,\n", " vmax=150,\n", " cmap=\"coolwarm\",\n", " robust=True,\n", " cbar_kwargs={\"label\": \"$W/m^2$\"},\n", ")\n", "ax_sensible.coastlines()\n", "ax_sensible.set_title(\"Sensible Heat Flux\")\n", "\n", "# Repeat for net shortwave radiative heat flux\n", "hist_am_net_shortwave_heat_flux.plot(\n", " ax=ax_shortwave,\n", " x=\"lon\",\n", " y=\"lat\",\n", " transform=ccrs.PlateCarree(),\n", " vmin=-300,\n", " vmax=300,\n", " cmap=\"coolwarm\",\n", " robust=True,\n", " cbar_kwargs={\"label\": \"$W/m^2$\"},\n", ")\n", "ax_shortwave.coastlines()\n", "ax_shortwave.set_title(\"Net Upward Shortwave Flux\")\n", "\n", "# Repeat for net longwave radiative heat flux\n", "hist_am_net_longwave_heat_flux.plot(\n", " ax=ax_longwave,\n", " x=\"lon\",\n", " y=\"lat\",\n", " transform=ccrs.PlateCarree(),\n", " vmin=-150,\n", " vmax=150,\n", " cmap=\"coolwarm\",\n", " robust=True,\n", " cbar_kwargs={\"label\": \"$W/m^2$\"},\n", ")\n", "ax_longwave.coastlines()\n", "ax_longwave.set_title(\"Net Upward Longwave Flux\")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Questions 2: Climate Connection" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "1. What do you think causes the large spatial variations of the sensible heat flux between strong positive, weak, and strong negative fluxes? Consider different surface types in your answer." ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Section 3: Surface Energy Budget by Latitude" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "We can also calculate a *zonal average* which allows us to compare the contributions of each of these fluxes to the net heat flux by latitude (similar to the plot in the last lecture that considered the RCE model prediction as a function of latitude).\n", "\n", "To calculate a spatial average of a gridded data set, we often have to weight the data based on the size of the area it is describing. Fortunately, CESM data is on a regular latitude-longitude grid, which means that grid cells at a specific latitude have the same area as all the other grid cells at that latitude. This makes a zonal average easy, because at each latitude we can simply calculate the mean of all data at that latitude. 
\n", "\n", "*Note: Our averaging would have required area-weighting if we were calculating a global mean (as you did in previous Tutorials) or if you had irregularly gridded data (which we will encounter on W2D1)!*" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# find the zonal means (.squeeze() removes singleton dimensions)\n", "hist_azm_latent_heat_flux = hist_am_latent_heat_flux.mean(dim=\"x\").squeeze()\n", "hist_azm_sensible_heat_flux = hist_am_sensible_heat_flux.mean(dim=\"x\").squeeze()\n", "hist_azm_net_shortwave_flux = hist_am_net_shortwave_heat_flux.mean(dim=\"x\").squeeze()\n", "hist_azm_net_longwave_flux = hist_am_net_longwave_heat_flux.mean(dim=\"x\").squeeze()\n", "hist_azm_net_heat_flux = hist_am_net_heat_flux.mean(dim=\"x\").squeeze()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 34008, "status": "ok", "timestamp": 1681701864340, "user": { "displayName": "Brodie Pearson", "userId": "05269028596972519847" }, "user_tz": 420 } }, "outputs": [], "source": [ "lat = hist_am_latent_heat_flux.lat[0, :]\n", "\n", "fig, ax = plt.subplots(figsize=(8, 5))\n", "\n", "ax.plot(lat, hist_azm_latent_heat_flux, label=\"Latent Heat\")\n", "ax.plot(lat, hist_azm_sensible_heat_flux, label=\"Sensible Heat\")\n", "ax.plot(lat, hist_azm_net_shortwave_flux, label=\"Shortwave\")\n", "ax.plot(lat, hist_azm_net_longwave_flux, label=\"Longwave\")\n", "ax.plot(lat, hist_azm_net_heat_flux, lw=3, color=\"k\", label=\"Net\")\n", "ax.plot(lat, 0 * lat, color=\"black\")\n", "\n", "ax.set_title(\"Components of Annual Surface Energy Budget (+ up)\")\n", "ax.legend()" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Questions 3: Climate Connection" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "1. Describe the dominant balance in the tropics (30S to 30N).\n", "2. What about for the polar regions (above 60N and below 60S).\n", "3. What do you think causes the dip in latent heat near the equator?\n", "4. Is there symmetry in the longwave radiation in the high southern and northern latitudes? What about for sensible heat?" ] }, { "cell_type": "markdown", "metadata": { "execution": {}, "executionInfo": { "elapsed": 167, "status": "ok", "timestamp": 1680721111796, "user": { "displayName": "Brodie Pearson", "userId": "05269028596972519847" }, "user_tz": 420 } }, "source": [ "# Bonus Question: Climate Connection\n", "\n", "5. Hypothetically, imagine this plot showed that the net heat flux was:\n", " - Negative 100 $W m^{-2}$ between 45$^oN$ to 45$^oS$ (i.e., 90 degrees of latitude centered on the Equator) and,\n", " - Positive 100 $W m^{-2}$ between 45$^oN$ to 90$^oN$ and between 45$^oS$ to 90$^oS$\n", "\n", " Would you expect Earth to warm, cool, or remain the same temperature? Why? \n", " " ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Summary\n", "In this tutorial, you learned to identify and access specific [CMIP6](https://wcrp-cmip.org/) datasets, which is vital for handling the vast datasets generated by climate models. You analyzed data from a CESM simulation, focusing on how shortwave/longwave radiation, sensible and latent heat, contribute to the surface energy budget. This tutorial also explored the *zonal-mean* surface energy budget, elucidating how different surface heat flux components vary with latitude due to physical differences in air-surface conditions across regions." 
] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Resources\n", "\n", "This tutorial uses data from the simulations conducted as part of the [CMIP6](https://wcrp-cmip.org/) multi-model ensemble. \n", "\n", "For examples on how to access and analyze data, please visit the [Pangeo Cloud CMIP6 Gallery](https://gallery.pangeo.io/repos/pangeo-gallery/cmip6/index.html) \n", "\n", "For more information on what CMIP is and how to access the data, please see this [page](https://github.com/ClimateMatchAcademy/course-content/blob/main/tutorials/CMIP/CMIP_resource_bank.md)." ] } ], "metadata": { "colab": { "collapsed_sections": [], "include_colab_link": true, "name": "W1D5_Tutorial7", "provenance": [], "toc_visible": true }, "kernel": { "display_name": "Python 3", "language": "python", "name": "python3" }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.8" } }, "nbformat": 4, "nbformat_minor": 4 }