Regarding the drift and deep isopycnal mixing values... Are you planning to use isopycnal mixing in these simulations? Submeso can be turned on, as it is separate from the iso/GM part.
We're planning on configuring the physics hi-res experiment as the existing JRA hi-res run has been done, except for updating the JRA forcing. This enables us to (mostly) cleanly initialize the physics from the existing run. So I don't think we are considering turning on submeso in the hi-res.
The lo-res companion experiment serves two purposes: spinning up near-surface BGC tracers for initializing them in the hi-res experiment, and providing a comparison to the hi-res experiment. For the lo-res, we are not tied to an existing lo-res configuration. We are considering a decreased deep Redi minimum so that tracer drift is more comparable to the hi-res. There is also experience with this configuration. While we could turn off submeso in the lo-res to be more compatible with the hi-res, I'm less sure of our experience with that configuration under JRA forcing.
Thank you for this explanation. Although I do not disagree fundamentally with the desire to "match" what was done before, I see this also as an impediment to trying new things in the high-res simulations. We seem to be following what was done over 5 years ago. Certain missing physics is being propagated, and I can see ourselves following similar approaches in the future for comparisons to existing simulations. I am not sure when the right time is to revisit some of these choices.
@Gokhan Danabasoglu, are you advocating for turning submeso on in the hi-res configuration?
I think we should have submeso and isopycnal mixing for the tracers in the hi-res configuration. Maybe not in this first BGC run, but in the future.
I'm starting to put together a tag for this, based on `cesm2_2_alpha04d`. It looks like CIME has the JRA configuration updates that @Keith Lindsay brought in to CESM 2.1 (in other words, the CIME team has merged `maint-5.6` into `master`), but the POP tag is still missing those updates. I'll talk to Alper this afternoon, but I think the best plan forward is to merge the `cesm2_1_x_rel` branch onto POP's `master` branch, make a new tag, and then start a branch for this run off of that new tag. Does that seem like a reasonable path? Another option is to make a new branch now and cherry-pick Keith's JRA changes onto it, but I think that will complicate the process of merging this branch back onto `master`.
`cesm2_2_beta04` has been tagged, and I'm using that as a starting point for the `highres_JRA_BGC` branch of POP. So far I've created the following compset:

```xml
<compset> <!-- latest JRA forcing, ecosys, high-res -->
  <alias>GIAFECO_JRA_HR</alias>
  <lname>2000_DATM%JRA-1p4-2018_SLND_CICE_POP2%ECO_DROF%JRA-1p4-2018_SGLC_SWAV</lname>
</compset>
```
and modified `build-namelist` so it doesn't look for `dust_flux_input%{filename,file_fmt,file_varname}` when `dust_flux_source == 'driver'` (with a similar modification for `iron_flux_input` when `iron_flux_source == 'driver-derived'`). It looks like I need a file for `fesedflux` to continue:

```
ERROR: Command /glade/work/mlevy/codes/CESM/cesm2_2_beta04+GECO_JRA_HR/components/pop/bld/build-namelist failed rc=255
out=build-namelist - ERROR: No default value found for fesedflux_input%filename
user defined attributes:
err=Died at /glade/work/mlevy/codes/CESM/cesm2_2_beta04+GECO_JRA_HR/components/pop/bld/build-namelist line 2370.
```
@Matt Long have you created one, or should I start by setting the field to 0?
For now I'm just trying to get a `SMS_Ld1.TL319_t13.GIAFECO_JRA_HR.cheyenne_intel.pop-performance_eval` test built and running. Once I've cleared that hurdle, I'll move on to a longer run to get a better feel for the total model cost with output disabled... (note to self: I still need to update the `OCN_BGC_CONFIG` variable to turn on the coccolithophores)
And it looks like the plan for tracer initial conditions is "BGC 1° spin-up" -- if someone (@Keith Lindsay ?) can point me to that run, I can map fields to the high-res grid (unless that's been done already? apologies if I missed a post about it in this channel)
I haven't generated one yet, but am close. If this is holding you up, set the field to zero.
This dataset contains a preliminary `fesedflux` file for the `tx0.1v3` grid:
/glade/work/mclong/cesm_inputdata/fesedflux_total_reduce_oxic_POP_tx0.1v3.c200420.nc
This is preliminary and will certainly change.
The algorithm is generating much lower Fe flux at hi-res than the same code does at lo-res. I need to look into this. However, this file should permit functional evaluation of the model.
I have made this file into 64bit-offset format, using this function:
```python
from subprocess import PIPE, Popen

def ncks_fl_fmt64bit(file):
    """
    Convert file to netCDF-3 64-bit offset format by calling:
      ncks -O --fl_fmt=64bit file file

    Parameter
    ---------
    file : str
      The file to convert.
    """
    ncks_cmd = ' '.join(['ncks', '-O', '--fl_fmt=64bit', file, file])
    cmd = ' && '.join(['module load nco', ncks_cmd])

    p = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)
    stdout, stderr = p.communicate()
    if p.returncode != 0:
        print(stdout.decode('UTF-8'))
        print(stderr.decode('UTF-8'))
        # a bare `raise` here would itself error (no active exception), so raise explicitly
        raise RuntimeError(f'ncks failed with return code {p.returncode}')
```
In previous attempts to write netCDF3 from xarray, I found it was not possible. Perhaps this has been fixed, but I haven't checked. I believe that the model cannot read netCDF4 on some machines?
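For what it's worth, current xarray does appear to support writing netCDF-3 directly via the `format` argument of `to_netcdf`; here is a minimal sketch (the toy dataset and file name are made up, and I haven't tested this against the model's reader):

```python
import numpy as np
import xarray as xr

# Toy dataset standing in for a real forcing file
ds = xr.Dataset({"fesedflux": (("nlat", "nlon"), np.zeros((4, 4)))})

# "NETCDF3_64BIT" requests the 64-bit-offset variant of netCDF-3,
# the same flavor that `ncks --fl_fmt=64bit` produces
ds.to_netcdf("fesedflux_64bit_offset.nc", format="NETCDF3_64BIT")

# 64-bit-offset files start with the magic bytes CDF\x02
with open("fesedflux_64bit_offset.nc", "rb") as f:
    assert f.read(4) == b"CDF\x02"
```

If that works, it would avoid the `ncks` round-trip entirely.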
@Michael Levy , we don't currently have IC for hi-res BGC, particularly with explicit cocco. I'll put something together today for your testing.
Thanks Keith! No need to include cocco, first run will be without it and I think I still have a script or notebook from when Kristen and I added cocco to the x1 initial condition file
@Matt Long

> In previous attempts to write netCDF3 from xarray, I found it was not possible. Perhaps this has been fixed, but I haven't checked. I believe that the model cannot read netCDF4 on some machines?
This is correct. In fact, I ran into issues helping put together the current beta tag (which we are using as a starting point) because the current intel compiler on cheyenne causes PIO errrors when reading netCDF4 files. I think I was using

```
$ ncks -5 file_in.nc file_out.nc
```

to convert to `64bit_data` (rather than `64bit_offset`), but if I run into issues I'll let you know. I don't expect to.
("errrors" is a portmanteau of "errors" and the sound I made when I realized some files could be read by CESM 2.1 but not 2.2)
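As an aside, since `64bit_data` and `64bit_offset` are easy to mix up, the netCDF flavor of a file can be checked from its first few bytes alone. A stdlib-only sketch (the function name is mine; the magic values come from the netCDF file-format spec):

```python
def netcdf_kind(path):
    """Classify a netCDF file by its magic bytes."""
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"CDF\x01":
        return "classic"        # netCDF-3 classic
    if magic == b"CDF\x02":
        return "64bit_offset"   # what `ncks --fl_fmt=64bit` writes
    if magic == b"CDF\x05":
        return "64bit_data"     # CDF5, what `ncks -5` writes
    if magic == b"\x89HDF":
        return "netcdf4"        # HDF5-based, the kind PIO chokes on here
    return "unknown"
```

`ncdump -k` reports the same information when the netCDF tools are loaded.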
A few more updates:

I created `/glade/p/cesmdata/cseg/inputdata/ocn/pop/tx0.1v3/tmp` with a `README` file explaining its purpose:

> This is a temporary directory to house intermediate files while we set up BGC forcings and initial conditions for a high-res JRA cycle. Once we are happy with the results, files will be copied to ic/ or forcing/ as appropriate and this directory will be removed. Nothing in this directory should be imported to the inputdata repository!

I copied the preliminary fesedflux file from `/glade/work/mclong/cesm_inputdata/` and will add additional files as they are created. This seemed like a better solution than hacking `build-namelist` to avoid adding the `/glade/p/cesmdata/cseg/inputdata/` prefix to files.
The next `build-namelist` error is:

```
ERROR: Command /glade/work/mlevy/codes/CESM/cesm2_2_beta04+GECO_JRA_HR/components/pop/bld/build-namelist failed rc=255
out=build-namelist - ERROR: No default value found for nox_flux_monthly_input%filename
```
The overview mentions

> Atm NOx, NHy: cycle 1850

so I assume that still needs to be done.
@Keith Lindsay may already have Ndep fields interpolated.
Yes, Ndep is available at
/glade/p/cgd/oce/people/klindsay/oldcgdoce/BGC_2/ndep/ndep_ocn_1850_w_nhx_emis_tx0.1v3_c191115.nc
Thanks Keith! I copied it to the `tmp/` directory I created in inputdata, and that brings us to

```
ERROR: Command /glade/work/mlevy/codes/CESM/cesm2_2_beta04+GECO_JRA_HR/components/pop/bld/build-namelist failed rc=255
out=build-namelist - ERROR: No default value found for riv_flux_shr_stream_file
```

and

> River nutrients: set to 1900 (same mapping files as runoff)

I don't suppose you have that file floating around? (It looks like you created the x1 version `riv_nut.gnews_gnm.JRA025m_to_gx1v7_nn_open_ocean_nnsm_e1000r300_marginal_sea_190214.20190602.nc` and also the map from JRA -> t13, `/glade/p/cesmdata/cseg/inputdata/cpl/gridmaps/rJRA025/map_JRA025m_to_tx0.1v3_nnsm_e333r100_190226.nc`.)
I do not have the BGC riv flux forcings for tx0.1v3.
I think that @Matt Long was going to do that, and I recall him saying that his riv flux regridding scripts needed some generalizing work.
I didn't create the gx1v7 riv flux file; I copied it to inputdata from /glade/work/mclong/cesm_inputdata/.
I also didn't create the JRA025m -> tx0.1v3 nnsm mapping file. I copied it to inputdata from
/glade/work/altuntas/cesm.input/mapping/JRA025m_tx0.1v3/
The riverine nutrient dataset uses the same mapping files as freshwater, so I will need to ensure that the nutrient data are on the relevant runoff grid. It's on my list.
It looks like the nutrient file on JRA grid is /glade/work/mclong/cesm_inputdata/work/river_nutrients.GNEWS_GNM.JRA55.20190602.nc
and the mapping file for JRA -> 0.1 deg is /glade/p/cesmdata/cseg/inputdata/cpl/gridmaps/rJRA025/map_JRA025m_to_tx0.1v3_nnsm_e333r100_190226.nc
I expect to have a hi-res BGC IC late tonight. It will be obs for macro-nutrients, O2, DIC, and ALK. Everything else is from the CESM2 piControl restart file at 0501-01-01. This is what was done for OMIP IC. I've got a notebook that generates this running now, though it is kinda slow. It's taking ~30 minutes per tracer. The lateral fill after lateral regridding and smoothing of the fill takes nearly 99% of this time.
Here is an obs based IC:
/glade/work/klindsay/cesm_inputdata/ecosys_jan_IC_omip_POP_tx0.1v3_c200423.nc
It does not include explicit coccolithophore tracers.
So it looks like the three files still missing are

```
riv_flux_shr_stream_file
restore_data_filenames_derived
restore_inv_tau_input%filename
```

I know @Matt Long is working on the river flux file, but I don't think I've heard anything about tracer restoring.
Also, I found the first bug from when I cleaned up the buggy merge of `maint-5.6` into `master` in CIME... I didn't get the JRA025v2 -> tx0.1v3 runoff map file name right at https://github.com/ESMCI/cime/blob/master/config/cesm/config_grids_common.xml#L257 (there should be an `_nnsm_` between `tx0.1v3` and `e333r100`), so I'll set up a CIME branch for this experiment tag as well.
I propose that BGC restoring could be turned off in your initial test runs.
It looks like this could be accomplished by having the following in `user_nl_marbl`:

```
tracer_restore_vars(1) = ""
tracer_restore_vars(2) = ""
tracer_restore_vars(3) = ""
tracer_restore_vars(4) = ""
tracer_restore_vars(5) = ""
```

(I'm not sure if these lines can be replaced with `tracer_restore_vars = ""`.)
I think you could then put `restore_data_filenames_derived='none'` into user_nl_pop.
I suggest modifying build-namelist to only call `add_default` for `restore_inv_tau_input%*` nml vars if `restore_inv_tau_opt = 'file_time_invariant'`, similar to what we do for `ciso_atm_d14c_filename` only getting written if `ciso_atm_opt = 'file'`. Then you wouldn't need to specify `restore_inv_tau_input%filename`, as the default for `restore_inv_tau_opt` is `const`. (I'm being sloppy with exact build-namelist syntax here.)
I think you can run without BGC river fluxes by adding `riv_flux_shr_stream_file='unknown'` to user_nl_pop. This would be a workaround until that forcing is available.
Good thoughts, Keith! What I've done is alter `build-namelist` to look at the length of `restorable_tracer_names(1)` and only add `restore_data_filenames_derived` to the namelist if the length is non-zero. In that same block of logic, the value of `restore_inv_tau_opt` determines whether `restore_inv_tau_const` or `restore_inv_tau_input%[*]` gets added as well (with nothing added if no tracers are being restored).

This is definitely a good short-term fix for getting an initial test up and running, but I need to think about it a little more before determining whether it should stay in the code long-term or get pulled out. The big question mark in my mind is how POP's `restorable_tracer_names` interacts with MARBL's `tracer_restore_vars`. I think they are independent, but POP will throw an error in initialization if MARBL lists a tracer in `tracer_restore_vars` that is not in POP's `restorable_tracer_names`... in which case I think it's okay to only include `restore_data_filenames_derived`, `restore_inv_tau_opt`, and either `restore_inv_tau_const` or `restore_inv_tau_input%[*]` in pop_in if `restorable_tracer_names` contains at least 1 tracer.
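To summarize the decision logic above (the real code is Perl inside build-namelist; this Python sketch is illustrative only, and the function name is mine):

```python
def restoring_nml_vars(restorable_tracer_names, restore_inv_tau_opt="const"):
    """Return which restoring-related variables end up in pop_in,
    per the build-namelist logic described above."""
    # No restorable tracers -> no restoring namelist entries at all
    if not restorable_tracer_names or len(restorable_tracer_names[0]) == 0:
        return []
    nml = ["restore_data_filenames_derived", "restore_inv_tau_opt"]
    if restore_inv_tau_opt == "file_time_invariant":
        nml.append("restore_inv_tau_input%filename")  # plus the other %* vars
    else:  # the default, 'const'
        nml.append("restore_inv_tau_const")
    return nml
```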
Also, it's worth noting that the MARBL default for `tracer_restore_vars` is also empty unless we're running on the `x1` or `x3`; so at least for now no mods are required in `user_nl_marbl`.
I think things are ready to go when cheyenne is returned to service. The plan is to create a new case (`G1850ECOIAF_JRA_HR` compset, `TL319_t13` resolution) with the following mods:
Some variables that will be used in multiple places:

```
ref_dir=/glade/work/mlevy/hi-res_BGC_JRA/restarts_from_alper
ref_case=g.e20.G.TL319_t13.control.001_contd
ref_date=0245-01-01
```
XML changes:

```
./xmlchange NTASKS_OCN=7507
./xmlchange STOP_N=3,STOP_OPTION=nmonths
./xmlchange JOB_WALLCLOCK_TIME=10:00:00,PROJECT=CESM0010
./xmlchange OCN_CHL_TYPE=prognostic
./xmlchange OCN_BGC_CONFIG=latest+cocco
./xmlchange RUN_TYPE=hybrid,RUN_REFCASE=${ref_case},RUN_REFDATE=${ref_date}
./xmlchange OCN_TRACER_MODULES=ecosys
./xmlchange -a CICE_CONFIG_OPTS="-trage 0"
```
Namelist changes:

```
cat >> user_nl_pop << EOF
! tavg namelist changes
ltavg_ignore_extra_streams = .true.
tavg_freq = 1 1 1 5
tavg_freq_opt = 'nmonth' 'nday' 'nyear' 'nday'
tavg_file_freq = 1 1 1 5
tavg_file_freq_opt = 'nmonth' 'nmonth' 'nyear' 'nday'
tavg_stream_filestrings = 'nmonth1' 'nday1' 'nyear1' 'nday5'
! Pick correct override file
! ----
! 1958 - 1995 [245-282]:
tavg_contents_override_file = '/glade/work/mlevy/hi-res_BGC_JRA/tx0.1v3_tavg_contents_no5day'
n_tavg_streams = 3
! 1996 - 2018 [283-305]:
! tavg_contents_override_file = '/glade/work/mlevy/hi-res_BGC_JRA/tx0.1v3_tavg_contents'
! n_tavg_streams = 4
! Needed to get all MARBL diags defined correctly
lecosys_tavg_alt_co2 = .true.
! First run needs initial conditions for ecosys
init_ecosys_option='ccsm_startup'
init_ecosys_init_file = '/glade/p/cesmdata/cseg/inputdata/ocn/pop/tx0.1v3/ic/ecosys_jan_IC_omip_POP_tx0.1v3_c200617.nc'
EOF

cat >> user_nl_cice << EOF
ndtd=2
r_snw=1.00
f_blkmask = .true.
EOF
```
SourceMods:

```
cp /glade/work/mlevy/hi-res_BGC_JRA/SourceMods/marbl_interior_tendency_mod.F90 SourceMods/src.pop/
```
The source mod is just a small change to decrease the volume of warning messages written to `cesm.log`:

```
$ diff /glade/work/mlevy/codes/CESM/cesm2_2_alpha06a+pop_rel_candidates/components/pop/externals/MARBL/src/marbl_interior_tendency_mod.F90 /glade/work/mlevy/hi-res_BGC_JRA/SourceMods/marbl_interior_tendency_mod.F90
2502c2502
<         if (domain%delta_z(k) * DOP_loss_P_bal .gt. Jint_Ptot_thres) then
---
>         if (domain%delta_z(k) * DOP_loss_P_bal .gt. 1e4_r8*Jint_Ptot_thres) then
```
I'll just need to copy `${ref_dir}/${ref_date}-00000/*` to the run directory.
I'm not entirely clear on what is required for the `ALT_CO2` tracers, but conversation in the ALT_CO2 topic makes it seem like we won't be doing anything different for them in the first few years of the run anyway.
@Michael Levy: does this include the advection scheme settings we discussed?
@Matt Long it does -- I talked to @Keith Lindsay about it and details about the implementation are at https://github.com/ESCOMP/POP2-CESM/pull/34#issuecomment-659633147
@Keith Lindsay and @Matt Long : I've created a case at `/glade/work/mlevy/hi-res_BGC_JRA/cases/test_case_setup_script`; could you look at the namelists and make sure everything looks okay? For comparison, `pop_in` and `ice_in` from Alper's high-res runs are in `/glade/scratch/altuntas/temp/g.e20.G.TL319_t13.control.001_hfreq/CaseDocs`.
One set of changes that I didn't include in my list from earlier today (but have made in my sandbox): Alper had a few mods in `user_nl_pop` that I had missed:

```
! other changes from Alper (g.e20.G.TL319_t13.control.001_hfreq)
lcvmix = .false.
h_upper = 20.0
ltidal_mixing = .true.
```
I'd like to get the run going as soon as cheyenne is back up, so if anything looks off I'll fix it now
@Michael Levy , here are some differences between Alper's env_run.xml and yours. '<' is Alper's case, '>' is yours.

```
< <entry id="ATM2OCN_FMAPNAME" value="/glade/work/fredc/cesm/mapping_JRA55/map_TL319_TO_tx0.1v3_patc_blin_merged.180705.nc">
> <entry id="ATM2OCN_FMAPNAME" value="cpl/gridmaps/TL319/map_TL319_TO_tx0.1v3_patc.170730.nc">
< <entry id="ATM2OCN_SMAPNAME" value="/glade/work/fredc/cesm/mapping_JRA55/map_TL319_TO_tx0.1v3_patc_blin_merged.180705.nc">
> <entry id="ATM2OCN_SMAPNAME" value="cpl/gridmaps/TL319/map_TL319_TO_tx0.1v3_patc.170730.nc">
< <entry id="ATM2OCN_VMAPNAME" value="/glade/work/fredc/cesm/mapping_JRA55/map_TL319_TO_tx0.1v3_patc_blin_merged.180705.nc">
> <entry id="ATM2OCN_VMAPNAME" value="cpl/gridmaps/TL319/map_TL319_TO_tx0.1v3_patc.170730.nc">
< <entry id="ROF2OCN_LIQ_RMAPNAME" value="cpl/gridmaps/rJRA025/map_JRA025m_to_tx0.1v3_e333r100_170830.nc">
> <entry id="ROF2OCN_LIQ_RMAPNAME" value="cpl/gridmaps/rJRA025/map_JRA025m_to_tx0.1v3_nnsm_e333r100_190226.nc">
< <entry id="ROF2OCN_ICE_RMAPNAME" value="cpl/gridmaps/rJRA025/map_JRA025m_to_tx0.1v3_e333r100_170830.nc">
> <entry id="ROF2OCN_ICE_RMAPNAME" value="cpl/gridmaps/rJRA025/map_JRA025m_to_tx0.1v3_nnsm_e333r100_190226.nc">
< <entry id="CPL_SEQ_OPTION" value="RASM_OPTION1">
> <entry id="CPL_SEQ_OPTION" value="CESM1_MOD">
```
I suspect that the ROF2OCN changes are from the updated JRA drof definition, so we want your values.
But we might want Alper's values for the ATM2OCN and CPL_SEQ_OPTION entries.
There are xmlchange commands in his CaseStatus for these variables.
Maybe we should seek Alper's input on why he made these changes from whatever the defaults were for his case.
I'll send him an email and ask (I can cc you if you'd like)
yes please
I wish I had noticed this earlier... if we do use the updated `ATM2OCN` mapping file, I'd have updated `config/cesm/config_grids_mct.xml` in CIME to get it by default (maybe I can still sneak that update in). The mapping file was copied from Fred's work directory to `/glade/p/cesmdata/cseg/inputdata/cpl/gridmaps/TL319/map_TL319_TO_tx0.1v3_patc_blin_merged.180705.nc`, so at least I can use the version in inputdata.
Alper confirmed that we should use the updated maps. He didn't recall the reasoning behind changing `CPL_SEQ_OPTION`, but for consistency I think we should match his runs. To that end, `/glade/work/mlevy/hi-res_BGC_JRA/cases/test_case_setup_script.002` was built from a sandbox with cime mods (https://github.com/ESMCI/cime/pull/3624) for the `ATM2OCN` maps and an `xmlchange` command for `CPL_SEQ_OPTION`.
@Michael Levy , I think I've got my notebook for regridding from g17 to t13 ready. Can you please point me to the low-res 2-cycle JRA55 case of @Kristen Krumhardt that has the same marbl settings that are being used in the hi-res case? Thanks.
@Keith Lindsay I believe it's the output in `/glade/scratch/kristenk/archive/g.e22b05.G1850ECOIAF_JRA.TL319_g17.cocco.001`, though I'd like @Kristen Krumhardt to verify
Yes, Mike has the right path
Thanks for confirming this.
@Michael Levy, the new obs based IC is
/glade/scratch/klindsay/cesm_inputdata/ecosys_jan_IC_omip_plus_g.e22b05.G1850ECOIAF_JRA.TL319_g17.cocco.001_0123-01-01_POP_tx0.1v3_c200720.nc
I think this should be ready to go, with no additional processing.
@Michael Levy , the IC based just on Kristen's case is
/glade/scratch/klindsay/cesm_inputdata/ecosys_jan_IC_g.e22b05.G1850ECOIAF_JRA.TL319_g17.cocco.001_0123-01-01_POP_tx0.1v3_c200720.nc
@Keith Lindsay -- thanks! I copied the file to /glade/p/cesmdata/cseg/inputdata/ocn/pop/tx0.1v3/tmp/ecosys_jan_IC_omip_plus_g.e22b05.G1850ECOIAF_JRA.TL319_g17.cocco.001_0123-01-01_POP_tx0.1v3_c200720.nc
and am back to waiting for my job to start... hopefully it'll go overnight, it's been in the queue for almost 3 hours already.
oh, wait... your second link is a different file. I'll get that job queued up as well
The job with the WOA / Kristen combined initial conditions is queued, and I'll submit the "Kristen IC w/ out WOA" ASAP
Last updated: May 16 2025 at 17:14 UTC