{ "cells": [ { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ClimateMatchAcademy/course-content/blob/main/tutorials/W2D1_FutureClimate-IPCCIPhysicalBasis/student/W2D1_Tutorial4.ipynb)   \"Open" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Tutorial 4: Quantifying Uncertainty in Projections\n", "\n", "**Week 2, Day 1, Future Climate: The Physical Basis**\n", "\n", "**Content creators:** Brodie Pearson, Julius Busecke, Tom Nicholas\n", "\n", "**Content reviewers:** Younkap Nina Duplex, Zahra Khodakaramimaghsoud, Sloane Garelick, Peter Ohue, Jenna Pearson, Derick Temfack, Peizhen Yang, Cheng Zhang, Chi Zhang, Ohad Zivan\n", "\n", "**Content editors:** Jenna Pearson, Ohad Zivan, Chi Zhang\n", "\n", "**Production editors:** Wesley Banfield, Jenna Pearson, Chi Zhang, Ohad Zivan\n", "\n", "**Our 2023 Sponsors:** NASA TOPS, Google DeepMind, and CMIP" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Tutorial Objectives\n", "\n", "In the previous tutorial, we constructed a *multi-model ensemble* using data from a diverse set of five CMIP6 models. We showed that the projections differ between models due to their distinct physics, numerics and discretizations. In this tutorial, we will calculate the uncertainty associated with future climate projections by utilizing this variability across CMIP6 models. We will establish a *likely* range of projections as defined by the IPCC. \n", "\n", "By the end of this tutorial, you will be able to \n", "- apply IPCC confidence levels to climate model data\n", "- quantify the uncertainty associated with CMIP6/ScenarioMIP projections.\n" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Setup\n", "\n", " \n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 21927, "status": "ok", "timestamp": 1683930403085, "user": { "displayName": "Brodie Pearson", "userId": "05269028596972519847" }, "user_tz": 420 }, "tags": [ "colab" ] }, "outputs": [], "source": [ "# installations ( uncomment and run this cell ONLY when using google colab or kaggle )\n", "\n", "# !pip install condacolab &> /dev/null\n", "# import condacolab\n", "# condacolab.install()\n", "\n", "# # Install all packages in one call (+ use mamba instead of conda), this must in one line or code will fail\n", "# !mamba install xarray-datatree intake-esm gcsfs xmip aiohttp nc-time-axis cf_xarray xarrayutils &> /dev/null" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 3609, "status": "ok", "timestamp": 1683930517522, "user": { "displayName": "Brodie Pearson", "userId": "05269028596972519847" }, "user_tz": 420 }, "tags": [] }, "outputs": [], "source": [ "# imports\n", "import time\n", "\n", "tic = time.time()\n", "\n", "import intake\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import xarray as xr\n", "\n", "from xmip.preprocessing import combined_preprocessing\n", "from xarrayutils.plotting import shaded_line_plot\n", "\n", "from datatree import DataTree\n", "from xmip.postprocessing import _parse_metric" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "executionInfo": { "elapsed": 738, "status": "ok", "timestamp": 1683930525181, "user": { "displayName": "Brodie Pearson", 
"userId": "05269028596972519847" }, "user_tz": 420 }, "tags": [] }, "outputs": [], "source": [ "# @title Figure settings\n", "import ipywidgets as widgets # interactive display\n", "\n", "plt.style.use(\n", " \"https://raw.githubusercontent.com/ClimateMatchAcademy/course-content/main/cma.mplstyle\"\n", ")\n", "\n", "%matplotlib inline" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "executionInfo": { "elapsed": 2, "status": "ok", "timestamp": 1683930525501, "user": { "displayName": "Brodie Pearson", "userId": "05269028596972519847" }, "user_tz": 420 }, "tags": [] }, "outputs": [], "source": [ "# @title Helper functions\n", "\n", "# If any helper functions you want to hide for clarity (that has been seen before\n", "# or is simple/uniformative), add here\n", "# If helper code depends on libraries that aren't used elsewhere,\n", "# import those libaries here, rather than in the main import cell\n", "\n", "\n", "def global_mean(ds: xr.Dataset) -> xr.Dataset:\n", " \"\"\"Global average, weighted by the cell area\"\"\"\n", " return ds.weighted(ds.areacello.fillna(0)).mean([\"x\", \"y\"], keep_attrs=True)\n", "\n", "\n", "# Calculate anomaly to reference period\n", "def datatree_anomaly(dt):\n", " dt_out = DataTree()\n", " for model, subtree in dt.items():\n", " # for the coding exercise, ellipses will go after sel on the following line\n", " ref = dt[model][\"historical\"].ds.sel(time=slice(\"1950\", \"1980\")).mean()\n", " dt_out[model] = subtree - ref\n", " return dt_out\n", "\n", "\n", "def plot_historical_ssp126_combined(dt):\n", " for model in dt.keys():\n", " datasets = []\n", " for experiment in [\"historical\", \"ssp126\"]:\n", " datasets.append(dt[model][experiment].ds.tos)\n", "\n", " da_combined = xr.concat(datasets, dim=\"time\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [] }, "outputs": [], "source": [ "# @title Video 1: Quantifying Uncertainty in Projections\n", "\n", "from ipywidgets import widgets\n", "from IPython.display import YouTubeVideo\n", "from IPython.display import IFrame\n", "from IPython.display import display\n", "\n", "\n", "class PlayVideo(IFrame):\n", " def __init__(self, id, source, page=1, width=400, height=300, **kwargs):\n", " self.id = id\n", " if source == 'Bilibili':\n", " src = f'https://player.bilibili.com/player.html?bvid={id}&page={page}'\n", " elif source == 'Osf':\n", " src = f'https://mfr.ca-1.osf.io/render?url=https://osf.io/download/{id}/?direct%26mode=render'\n", " super(PlayVideo, self).__init__(src, width, height, **kwargs)\n", "\n", "\n", "def display_videos(video_ids, W=400, H=300, fs=1):\n", " tab_contents = []\n", " for i, video_id in enumerate(video_ids):\n", " out = widgets.Output()\n", " with out:\n", " if video_ids[i][0] == 'Youtube':\n", " video = YouTubeVideo(id=video_ids[i][1], width=W,\n", " height=H, fs=fs, rel=0)\n", " print(f'Video available at https://youtube.com/watch?v={video.id}')\n", " else:\n", " video = PlayVideo(id=video_ids[i][1], source=video_ids[i][0], width=W,\n", " height=H, fs=fs, autoplay=False)\n", " if video_ids[i][0] == 'Bilibili':\n", " print(f'Video available at https://www.bilibili.com/video/{video.id}')\n", " elif video_ids[i][0] == 'Osf':\n", " print(f'Video available at https://osf.io/{video.id}')\n", " display(video)\n", " tab_contents.append(out)\n", " return tab_contents\n", "\n", "\n", "video_ids = [('Youtube', 'YCUsMjDinrA'), ('Bilibili', 'BV1oj411o7bb')]\n", "tab_contents = 
 "tab_contents = display_videos(video_ids, W=730, H=410)\n",
 "tabs = widgets.Tab()\n",
 "tabs.children = tab_contents\n",
 "for i in range(len(tab_contents)):\n",
 "    tabs.set_title(i, video_ids[i][0])\n",
 "display(tabs)"
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "pycharm": { "name": "#%%\n" }, "tags": [ "remove-input" ] }, "outputs": [], "source": [
 "# @markdown\n",
 "from ipywidgets import widgets\n",
 "from IPython.display import IFrame\n",
 "\n",
 "link_id = \"u5zrp\"\n",
 "\n",
 "download_link = f\"https://osf.io/download/{link_id}/\"\n",
 "render_link = f\"https://mfr.ca-1.osf.io/render?url=https://osf.io/{link_id}/?direct%26mode=render%26action=download%26mode=render\"\n",
 "# @markdown\n",
 "out = widgets.Output()\n",
 "with out:\n",
 "    print(f\"If you want to download the slides: {download_link}\")\n",
 "    display(IFrame(src=f\"{render_link}\", width=730, height=410))\n",
 "display(out)"
 ] },
 { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [
 "# Section 1: Loading CMIP6 Data from Various Models & Experiments"
 ] },
 { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [
 "First, let's load the datasets that we used in the previous tutorial, which spanned five models. We will use three CMIP6 experiments, adding the high-emissions (*SSP5-8.5*) future scenario to the *historical* and *SSP1-2.6* experiments used in the last tutorial.\n",
 "\n"
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 5674, "status": "ok", "timestamp": 1683930535902, "user": { "displayName": "Brodie Pearson", "userId": "05269028596972519847" }, "user_tz": 420 }, "tags": [] }, "outputs": [], "source": [
 "col = intake.open_esm_datastore(\n",
 "    \"https://storage.googleapis.com/cmip6/pangeo-cmip6.json\"\n",
 ")  # open an intake catalog containing the Pangeo CMIP cloud data\n",
 "\n",
 "# pick our five models and three experiments\n",
 "# there are many more to test out! Try executing `col.df['source_id'].unique()` to get a list of all available models\n",
 "source_ids = [\"IPSL-CM6A-LR\", \"GFDL-ESM4\", \"ACCESS-CM2\", \"MPI-ESM1-2-LR\", \"TaiESM1\"]\n",
 "experiment_ids = [\"historical\", \"ssp126\", \"ssp585\"]"
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 50598, "status": "ok", "timestamp": 1683930589572, "user": { "displayName": "Brodie Pearson", "userId": "05269028596972519847" }, "user_tz": 420 }, "tags": [] }, "outputs": [], "source": [
 "# from the full `col` object, create a subset using facet search\n",
 "cat = col.search(\n",
 "    source_id=source_ids,\n",
 "    variable_id=\"tos\",\n",
 "    member_id=\"r1i1p1f1\",\n",
 "    table_id=\"Omon\",\n",
 "    grid_label=\"gn\",\n",
 "    experiment_id=experiment_ids,\n",
 "    require_all_on=[\n",
 "        \"source_id\"\n",
 "    ],  # make sure that we only get models which have all of the above experiments\n",
 ")\n",
 "\n",
 "# convert the sub-catalog into a datatree object, by opening each dataset into an xarray.Dataset (without loading the data)\n",
 "kwargs = dict(\n",
 "    preprocess=combined_preprocessing,  # apply xMIP fixes to each dataset\n",
 "    xarray_open_kwargs=dict(\n",
 "        use_cftime=True\n",
 "    ),  # ensure all datasets use the same time index\n",
 "    storage_options={\n",
 "        \"token\": \"anon\"\n",
 "    },  # anonymous/public authentication to google cloud storage\n",
 ")\n",
 "\n",
 "cat.esmcat.aggregation_control.groupby_attrs = [\"source_id\", \"experiment_id\"]\n",
 "dt = cat.to_datatree(**kwargs)\n",
 "\n",
 "cat_area = col.search(\n",
 "    source_id=source_ids,\n",
 "    variable_id=\"areacello\",  # for the coding exercise, ellipses will go after the equals on this line\n",
 "    member_id=\"r1i1p1f1\",\n",
 "    table_id=\"Ofx\",  # for the coding exercise, ellipses will go after the equals on this line\n",
 "    grid_label=\"gn\",\n",
 "    experiment_id=[\n",
 "        \"historical\"\n",
 "    ],  # for the coding exercise, ellipses will go after the equals on this line\n",
 "    require_all_on=[\"source_id\"],\n",
 ")\n",
 "\n",
 "cat_area.esmcat.aggregation_control.groupby_attrs = [\"source_id\", \"experiment_id\"]\n",
 "dt_area = cat_area.to_datatree(**kwargs)\n",
 "\n",
 "dt_with_area = DataTree()\n",
 "\n",
 "for model, subtree in dt.items():\n",
 "    metric = dt_area[model][\"historical\"].ds[\"areacello\"]\n",
 "    dt_with_area[model] = subtree.map_over_subtree(_parse_metric, metric)\n",
 "\n",
 "# average every dataset in the tree globally\n",
 "dt_gm = dt_with_area.map_over_subtree(global_mean)\n",
 "\n",
 "# quick check that the global mean SST is available for all three experiments of one model\n",
 "for experiment in [\"historical\", \"ssp126\", \"ssp585\"]:\n",
 "    da = dt_gm[\"TaiESM1\"][experiment].ds.tos\n",
 "\n",
 "dt_gm_anomaly = datatree_anomaly(dt_gm)"
 ] },
 { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [
 "# Section 2: Quantifying Uncertainty in a CMIP6 Multi-model Ensemble\n",
 "\n",
 "Let's create a multi-model ensemble containing data from multiple CMIP6 models, which we can use to quantify our confidence in future projected sea surface temperature change under low- and high-emissions scenarios.\n",
 "\n",
 "**Your goal in this tutorial is to create a *likely* range of future projected conditions. The IPCC uncertainty language defines the *likely* range as the middle 66% of model results (i.e., ignoring the upper 17% and lower 17% of results).**"
 ] },
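 { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [
 "Before the coding exercise, here is a minimal sketch of how a *likely* range can be computed with `xarray`. It uses a small synthetic ensemble (the names `demo_da`, `n_models`, and `n_years` are made up for this illustration and are not part of the CMIP6 data loaded above): the multi-model mean is an average over the model dimension, and the *likely* bounds are the 17th and 83rd percentiles across that same dimension."
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": { "execution": {} }, "outputs": [], "source": [
 "# a minimal sketch of the likely-range calculation on synthetic data\n",
 "# (demo_da, n_models, n_years are illustrative names, not part of the tutorial datasets)\n",
 "import numpy as np\n",
 "import xarray as xr\n",
 "\n",
 "n_models, n_years = 5, 20\n",
 "rng = np.random.default_rng(0)\n",
 "\n",
 "# fake ensemble: a common warming trend plus model-dependent noise\n",
 "demo_da = xr.DataArray(\n",
 "    0.02 * np.arange(n_years) + rng.normal(0, 0.1, size=(n_models, n_years)),\n",
 "    dims=[\"source_id\", \"time\"],\n",
 "    coords={\n",
 "        \"source_id\": [f\"model_{i}\" for i in range(n_models)],\n",
 "        \"time\": np.arange(2000, 2000 + n_years),\n",
 "    },\n",
 ")\n",
 "\n",
 "# multi-model mean at each time\n",
 "demo_mean = demo_da.mean(dim=\"source_id\")\n",
 "\n",
 "# IPCC likely range: the middle 66% of model results, i.e. the 17th and 83rd\n",
 "# percentiles across the model dimension\n",
 "demo_lower = demo_da.quantile(0.17, dim=\"source_id\")\n",
 "demo_upper = demo_da.quantile(0.83, dim=\"source_id\")\n",
 "\n",
 "print(demo_mean.values.round(2))\n",
 "print(demo_lower.values.round(2))\n",
 "print(demo_upper.values.round(2))"
 ] },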
 { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [
 "### Coding Exercise 2.1\n",
 "\n",
 "Complete the following code to display multi-model ensemble data with IPCC uncertainty bands:\n",
 "\n",
 "1. The multi-model mean temperature\n",
 "2. Shading to display the *likely* range of temperatures for the CMIP6 historical and projected data (include both *SSP1-2.6* and *SSP5-8.5*). *da_upper* and *da_lower* are the boundaries of this shaded region\n",
 "\n"
 ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
 "fig, ax = plt.subplots()\n",
 "for experiment, color in zip([\"historical\", \"ssp126\", \"ssp585\"], [\"C0\", \"C1\", \"C2\"]):\n",
 "    datasets = []\n",
 "    for model in dt_gm_anomaly.keys():\n",
 "        annual_sst = (\n",
 "            dt_gm_anomaly[model][experiment]\n",
 "            .ds.tos.coarsen(time=12)\n",
 "            .mean()\n",
 "            .assign_coords(source_id=model)\n",
 "        )\n",
 "        datasets.append(\n",
 "            annual_sst.sel(time=slice(None, \"2100\")).load()\n",
 "        )  # the French model has a long-running member for ssp126\n",
 "    da = xr.concat(datasets, dim=\"source_id\", join=\"override\").squeeze()\n",
 "    # Calculate the multi-model mean at each time within each experiment\n",
 "    da.mean(...).plot(color=color, label=experiment, ax=ax)\n",
 "    x = da.time.data\n",
 "    # Calculate the lower bound of the likely range\n",
 "    da_lower = da.squeeze().quantile(...)\n",
 "    # Calculate the upper bound of the likely range\n",
 "    da_upper = da.squeeze().quantile(...)\n",
 "    ax.fill_between(x, da_lower, da_upper, alpha=0.5, color=color)\n",
 "ax.set_title(\n",
 "    \"Global Mean SST Anomaly from a five-member CMIP6 ensemble (base period: 1950 to 1980)\"\n",
 ")\n",
 "ax.set_ylabel(\"Global Mean SST Anomaly [$^\\\\circ$C]\")\n",
 "ax.set_xlabel(\"Year\")\n",
 "ax.legend()"
 ] },
 { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [
 "### Questions 2.1: Climate Connection\n",
 "\n",
 "1. What does this figure tell you about how the multi-model uncertainty compares to projected physical changes in the global mean SST?\n",
 "2. Is this the same for both scenarios?\n",
 "3. For a 5-model ensemble like this, how do the *likely* ranges specifically relate to the 5 individual model temperatures at a given time?"
 ] },
 { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [
 "# Summary\n",
 "\n",
 "In this tutorial, we have quantified the uncertainty of future climate projections by analyzing variability across a multi-model CMIP6 ensemble. We learned to apply the IPCC's confidence levels to establish a *likely* range of projections, which spans the middle 66% of model results."
 ] },
 { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [
 "# Resources\n",
 "\n",
 "This tutorial uses data from the simulations conducted as part of the [CMIP6](https://wcrp-cmip.org/) multi-model ensemble.\n",
 "\n",
 "For examples of how to access and analyze the data, please visit the [Pangeo Cloud CMIP6 Gallery](https://gallery.pangeo.io/repos/pangeo-gallery/cmip6/index.html).\n",
 "\n",
 "For more information on what CMIP is and how to access the data, please see this [page](https://github.com/ClimateMatchAcademy/course-content/blob/main/tutorials/CMIP/CMIP_resource_bank.md)."
] } ], "metadata": { "colab": { "collapsed_sections": [], "include_colab_link": true, "machine_shape": "hm", "name": "W2D1_Tutorial_4", "provenance": [ { "file_id": "1WfT8oN22xywtecNriLptqi1SuGUSoIlc", "timestamp": 1680298239014 } ], "toc_visible": true }, "gpuClass": "standard", "kernel": { "display_name": "Python 3", "language": "python", "name": "python3" }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.8" } }, "nbformat": 4, "nbformat_minor": 4 }