Data Access¶
- This notebook illustrates how to make diagnostic plots using the CONUS 404 dataset hosted on NCAR’s Geoscience Data Exchange (GDEX).
- https://gdex.ucar.edu/datasets/d559000/
- This data is open access and can be accessed via 3 protocols:
- POSIX (if you have access to NCAR’s HPC systems: Casper or Derecho)
- HTTPS
- OSDF using intake-ESM catalogs.
- Learn about intake-ESM catalogs: https://intake-esm.readthedocs.io/en/stable/
# Imports
import intake
import numpy as np
import pandas as pd
import xarray as xr
import seaborn as sns
import matplotlib.pyplot as plt
import os
# import fsspec.implementations.http as fshttp
# from pelicanfs.core import PelicanFileSystem, PelicanMap, OSDFFileSystem
import dask
from dask_jobqueue import PBSCluster
from dask.distributed import Client
# Catalog URLs
cat_url = 'https://osdf-data.gdex.ucar.edu/ncar/gdex/d559000/catalogs/d559000_catalog.json'
# cat_url = 'https://osdf-data.gdex.ucar.edu/ncar/gdex/d559000/catalogs/d559000_catalog-http.json' # HTTPS access
# cat_url = 'https://osdf-data.gdex.ucar.edu/ncar/gdex/d559000/catalogs/d559000_catalog-osdf.json' # OSDF access
print(cat_url)
https://osdf-data.gdex.ucar.edu/ncar/gdex/d559000/catalogs/d559000_catalog.json
# Set up your scratch folder path
username = os.environ["USER"]
glade_scratch = "/glade/derecho/scratch/" + username
print(glade_scratch)
/glade/derecho/scratch/harshah
Create a PBS cluster¶
# Create a PBS cluster object
cluster = PBSCluster(
    job_name = 'dask-wk25-hpc',
    cores = 1,
    memory = '8GiB',
    processes = 1,
    local_directory = glade_scratch + '/dask/spill/',
    log_directory = glade_scratch + '/dask/logs/',
    resource_spec = 'select=1:ncpus=1:mem=8GB',
    queue = 'casper',
    walltime = '5:00:00',
    # interface = 'ib0'
    interface = 'ext'
)
# Scale the cluster and display cluster dashboard URL
n_workers = 5
client = Client(cluster)
cluster.scale(n_workers)
client.wait_for_workers(n_workers = n_workers)
cluster
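- If the cluster widget doesn’t render, you can print the dashboard address directly; client.dashboard_link is part of dask.distributed:
# Print the dashboard URL as plain text
print(client.dashboard_link)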
Load CONUS 404 data from GDEX using an intake catalog¶
col = intake.open_esm_datastore(cat_url)
col
- col.df turns the catalog object into a pandas dataframe!
- (Actually, it accesses the dataframe attribute of the catalog)
col.df
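- Because col.df is a regular dataframe, the usual pandas operations apply; a quick sketch for sizing up the catalog:
# How many entries and how many distinct variables does the catalog hold?
print(len(col.df))
print(col.df['variable'].nunique())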
Select data and plot¶
What if you don’t know the variable names?¶
- Use pandas logic to print out the variable (short name) and long_name columns
col.df[['variable','long_name']]
- We notice that long_name is not available for some variables like ‘V’
- In such cases, please look at the wrfout_datadictionary file on this page: https://gdex.ucar.edu/datasets/d559000/documentation/#
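- To list every variable that lacks a long_name, plain pandas filtering works; a sketch, assuming missing entries appear as NaN:
# Variables whose long_name is missing from the catalog
missing = col.df[col.df['long_name'].isna()]
print(missing['variable'].unique())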
Temperature¶
- Plot temperature for a random date
cat_temp = col.search(variable='T2')
cat_temp.df.head()
- The data is organized in (virtual) zarr stores, with one water year’s worth of data per file
- Select a water year. This is done by selecting the start time to be Oct 1 of the year it begins in, or the end time to be Sep 30 of the following year (a sketch of the end_time variant follows this list)
- This also means that if you want to request data for other days, say Jan 1 of year YYYY, you first have to load the water year containing that day (the file starting Oct 1 of YYYY-1) and then select that particular day. This example is discussed below.
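- For instance, the water year loaded below via start_time could equivalently be requested by its end date; a sketch, assuming the catalog stores end_time in the same YYYY-MM-DD form as start_time:
# Equivalent selection of the Oct 2020 - Sep 2021 water year by its end date
cat_temp_subset_alt = cat_temp.search(end_time = "2021-09-30")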
date = "2020-10-01"
# year = "2021"
cat_temp_subset = cat_temp.search(start_time = date)
cat_temp_subset
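- Since each file holds one water year, the subset should contain a single entry; a quick sanity check:
# Expect exactly one catalog entry for the chosen water year
print(len(cat_temp_subset.df))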
Load data into xarray¶
# Load catalog entries for subset into a dictionary of xarray datasets, and open the first one.
dsets = cat_temp_subset.to_dataset_dict(zarr_kwargs={"consolidated": True})
print(f"\nDataset dictionary keys:\n {dsets.keys()}")Loading...
# Load the first dataset and display a summary.
dataset_key = list(dsets.keys())[0]
# store_name = dataset_key + ".zarr"
print(dsets.keys())
ds = dsets[dataset_key]
ds = ds.T2
ds
%%time
desired_time = "2021-01-01T00"
ds.sel(Time=desired_time, method='nearest').plot(cmap='inferno')
# Shut down the Dask cluster when finished
cluster.close()