I'm going to use this topic to track my progress with the MARBL driver. Current status:
Working from cesm2_2_alpha04c
MARBL is included as pkg/MARBL via manage_externals
The driver is built with -D_USE_MARBL_TRACERS to enable calls to MARBL; if USE_MARBL_TRACERS = True and the model was NOT built with -D_USE_MARBL_TRACERS, then the run aborts with a runtime error
Requiring -D_USE_MARBL_TRACERS will change down the line, but it currently adds 20 or 30 seconds to a 5+ minute build

Current question (for @Andrew Shao and / or @Gustavo M Marques?) -- is there an interface to post_data() where I can send a single column of a 3D field (or a single point of a 2D field) at a time, or do I need to copy each column into a buffer and then post the data once every column has been computed? The issue is that MARBL will overwrite its diagnostic datatype from the previous column when it moves on to the next column.
@Michael Levy I am not sure what you are trying to do exactly, but we can configure post_data() to send a single column of a 3D field. The code below could be an example:
integer :: id_carbon = -1

CS%id_carbon = register_diag_field('ocean_model', 'carbon', CS%diag%axesZL, CS%Time, &
                                   'carbon concentration', 'units')
! Note: if axesZL (a 1-D z-space axis at layer centers) does not work, try axesNull

if (CS%id_carbon > 0) call post_data(CS%id_carbon, carbon(i,j,:), CS%diag)
Please feel free to call me on Google Meet if you want to talk about it.
@Gustavo M Marques -- I'll call you in a few minutes, just wrapping something up. I don't think I was very clear -- I want to register a full 3D field (so diag%axesTL), but post the data one column at a time. Basically, the MARBL driver is going to be computing a single column at a time, and I'd like to post the diagnostics immediately after computing rather than copying them to a buffer and then posting the entire buffer once every column has been computed
@Keith Lindsay, @Gustavo M Marques, and @Andrew Shao : I'm running into an issue that might be purely MARBL, or might be MARBL reacting to something MOM is doing... if any of you have thoughts on how to proceed, I'd appreciate it.
Background: I've been testing the MARBL driver by running for a month -- this has let me add output to the monthly history file for investigation, and the runs are taking ~20 minutes so it's not a terribly long wait to look at results. I recently added in a call to MARBL's surface_flux_compute() routine, and in /glade/work/mlevy/codes/CESM/cesm2_2_alpha04c+mom6_marbl/cases/C1850MOMECO.008 I was able to run for a month setting tracer values at the surface to 0 and all surface forcing inputs to 1.
Issue: In /glade/work/mlevy/codes/CESM/cesm2_2_alpha04c+mom6_marbl/cases/C1850MOMECO.009, I set tracer values at the surface to the actual tracer values MOM is passively advecting around (marbl_instances%tracers_at_surface(1,m) = CS%tr(i,j,1,m)) and now the run crashes during day 7 of the run (at least the coupler gets to midnight on Jan 7, but not midnight on Jan 8). After talking with Gustavo yesterday, I ran with DEBUG=TRUE in /glade/work/mlevy/codes/CESM/cesm2_2_alpha04c+mom6_marbl/cases/C1850MOMECO.009_debug/, but that run seems to pick up unrelated issues (the surface forcing inputs are all 1 instead of 0 because the debug run was flagging some divide-by-zeros that I think came from having atmospheric pressure = 0).

Anyway, I'm not sure what to make of these runs crashing and I'm having trouble finding any useful information in the logs. Keith, is it possible this relates to me sending non-physical values for the surface forcing fields? Andrew or Gustavo -- is there something special happening to the tracers on day 7 that might hint at why the run is doing fine up until that point?

I think maybe the next step is to run for 5 days, though I'm not familiar enough with MOM to know how to wholesale change the output frequency of mom6.hm from monthly to more frequent.
@Michael Levy , I wouldn't be surprised if setting all surface forcing inputs to 1 causes problems.
I suggest working on the plumbing for forcing inputs, and on diagnostic output of the forcing fields being sent to MARBL, before proceeding with invoking the surface flux computation routine. I can easily see debugging 009 taking a long time. That might not be a good use of time if it boils down to garbage in, garbage out.
I can see the utility in having checks in MARBL for reasonable forcing inputs. But I don't think that test runs during MOM driver development are a good platform for coming up with plausible bounds for these checks. Perhaps a time-evolving single column model of MARBL, when it is available, would be a good tool to explore that, as a completely different project.
@Michael Levy, let me know if you would like to discuss this via telecon. I can help you change the output frequency so we can see what the added tracers look like.
It's been almost a month since the last update, and I wish I had more to say... but I inadvertently "fixed" the issue with the surface tracers. I say inadvertently because I'm not sure exactly what fixed it: I updated diag_table (first because I wanted to separate the BGC variables to get more frequent output, and then a second time because I realized I was accidentally duplicating a key in the YAML file that generated the table), populated T and S from MOM's state, and removed some unnecessary computation I had copied from the dye_tracer module; the combination of the last two seems to have done the trick. So for surface forcings I'm using
if (CS%u10_sqr_ind > 0)  marbl_instances%surface_flux_forcings(CS%u10_sqr_ind)%field_0d(1)  = 2.5e5
if (CS%sss_ind > 0)      marbl_instances%surface_flux_forcings(CS%sss_ind)%field_0d(1)      = tv%S(i,j,1)
if (CS%sst_ind > 0)      marbl_instances%surface_flux_forcings(CS%sst_ind)%field_0d(1)      = tv%T(i,j,1)
if (CS%ifrac_ind > 0)    marbl_instances%surface_flux_forcings(CS%ifrac_ind)%field_0d(1)    = 0
if (CS%dust_dep_ind > 0) marbl_instances%surface_flux_forcings(CS%dust_dep_ind)%field_0d(1) = 0
if (CS%fe_dep_ind > 0)   marbl_instances%surface_flux_forcings(CS%fe_dep_ind)%field_0d(1)   = 0
if (CS%nox_flux_ind > 0) marbl_instances%surface_flux_forcings(CS%nox_flux_ind)%field_0d(1) = 0
if (CS%nhy_flux_ind > 0) marbl_instances%surface_flux_forcings(CS%nhy_flux_ind)%field_0d(1) = 0
if (CS%atmpress_ind > 0) marbl_instances%surface_flux_forcings(CS%atmpress_ind)%field_0d(1) = 1
if (CS%xco2_ind > 0)     marbl_instances%surface_flux_forcings(CS%xco2_ind)%field_0d(1)     = 284.7
if (CS%xco2_alt_ind > 0) marbl_instances%surface_flux_forcings(CS%xco2_alt_ind)%field_0d(1) = 284.7
and for tracer surface values:
marbl_instances%tracers_at_surface(1,m) = CS%tr(i,j,1,m)
Most of the forcing fields need to be updated to either read from files or be passed through the coupler, but at least it's running with reasonable inputs. I think my next two steps are (1) merging in the latest master [I'm hoping that putting all the ecosys diagnostics in a separate stream prevents conflicts in the interface layer] and then (2) setting up memory for saved state. At that point, I'll see about packing multiple columns of data into a single call to surface_flux_compute().
@Michael Levy Just FYI, GFDL is experimenting with a new way of supporting external code that might be useful for bringing in MARBL:
https://github.com/NOAA-GFDL/MOM6/pull/1133
Ahh sorry, I completely missed this thread (forgot to reload the Zulip app). Glad to hear that it seems to have been resolved. With regard to the GFDL PR, it's basically Alistair going back to the idea of dummy interfaces for all external (including tracer) packages.
FYI, I poked around the MOM_OCMIP2_CFC.F90 code a bit and it does appear that CFC fluxes are computed in the FMS coupler, not in MOM6. So they don't end up needing to plumb state variables from the atmosphere and sea ice models down into MOM_OCMIP2_CFC.F90, like we are planning to do for MARBL-computed surface fluxes.
@Keith Lindsay I'm trying to test my code that adds surface flux saved state to the restart file, but having trouble getting expected test failures. Basically, I want to add a variable to the history file, run an ERS test without adding saved state to the restart, and fail the COMPARE_base_rest stage. Then I want to write fields to restart and see that stage of the test pass. Surface PH seemed like a good candidate, since the fields in the restart file are PH_SURF and PH_SURF_ALT_CO2, but even with the PH diagnostic in the history file I'm seeing COMPARE_base_rest pass regardless of whether I write anything into the restart file. Is there a better diagnostic variable to see the effect of saving saved state?
It seems a little tricky because MOM isn't applying any of the intent(out) variables from MARBL to the tracer state... so I need a value that MARBL outputs as a diagnostic (in POP, I think keeping saved state out of the restarts feeds back to tracer values pretty quickly and then the list of variables that fail COMPARE_base_rest is large)
Let's talk on Zoom
Update: I've added STF to the tracer_vertdiff() call via the optional sfc_flux argument; now I'm getting failures in COMPARE_base_rest for PH whether I add saved state to the restart file or not... I suspect that I am writing the correct values to the restart file but not reading them in upon restart. I'll look a little more this afternoon then add it to the list of things to ask @Andrew Shao about if it remains an issue
It turned out my restart issues were due to registering restart fields too late in the initialization process; I've filed an issue with GFDL asking for an error message in this situation, as I only figured it out because I wondered "what if I register the saved state restarts earlier in the code?". My branch now passes ERS tests, hooray!
I'm also putting the finishing touches on getting dust flux and iron flux from the coupler -- once this is done, the to-do list for computing surface fluxes will be pretty short (e.g. setting dust_ratio_thres so MOM uses the fe_bioavail_frac_offset parameter everywhere). I'd like to read ndep from a file, but the rest of the tasks can wait until after the driver is calling interior_tendency_compute().
@Andrew Shao (or @Gustavo M Marques) -- MARBL needs to know pressure in each column; POP has a function in state_mod.F90 that converts from depth (m) -> pressure (bars) using the Levitus 1994 formula:
!-----------------------------------------------------------------------
!
!  convert depth in meters to pressure in bars
!
!-----------------------------------------------------------------------

pressure = 0.059808_r8*(exp(-0.025_r8*depth) - c1) &
         + 0.100766_r8*depth + 2.28405e-7_r8*depth**2
Does MOM have something similar? Or is pressure a state variable that is available on a derived type?
Pressure will have to be calculated within the MARBL driver. For consistency with everything else it's essentially:
P(1) = h(1)*GV%h_kg_m2
do k=2,nk
  P(k) = P(k-1) + h(k)*GV%h_kg_m2
enddo
That expression is valid for both Boussinesq (where depth is the 'correct' vertical, Cartesian coordinate) and non_Boussinesq modes (where pressure is the 'correct' one)
Cool, I'll add the function in MARBL_tracers.F90; as a temporary solution I just computed it in MARBL_tracers_column_physics():
! TODO: In POP, pressure comes from a function in state_mod.F90; I don't see a similar function here
!       This formulation is from Levitus 1994, and I think it belongs in MOM_EOS.F90?
!       Converts depth [m] -> pressure [bars]
if (CS%pressure_ind > 0) marbl_instances%interior_tendency_forcings(CS%pressure_ind)%field_1d(1,:) = &
  0.0598088*(exp(-0.025*zc(:)) - 1) + 0.100766*zc(:) + 2.28405e-7*(zc(:)**2)
Within a layer, pressure will vary linearly in both cases
Looking in src/core/MOM_verticalGrid.F90, I would have guessed we should be converting with H_to_Pa, and then convert from Pa to bars. Am I misunderstanding?
Also, it looks like your code computes pressure at the bottom of each model layer.
If so, and assuming you meant H_to_Pa, would the following be appropriate for layer-averaged pressure?
P_layer_bot = h(1)*GV%H_to_Pa
P_layer_avg(1) = 0.5 * P_layer_bot
do k=2,nk
  P_layer_top = P_layer_bot
  P_layer_bot = P_layer_top + h(k)*GV%H_to_Pa
  P_layer_avg(k) = 0.5 * (P_layer_top + P_layer_bot)
enddo
Nice catch Keith, yes it should be H_to_Pa, and yes, your code is what you would want if you wanted the layer-averaged pressure.
(I dunno why my brain thought it was H_to_kg_m2; I must have been staring at another block and went on autopilot :))
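As a quick sanity check on that accumulation (this is just an illustration in numpy, not MOM code; the layer thicknesses and Boussinesq reference density below are made up), the hydrostatic layer-averaged pressure should land close to the Levitus fit quoted above:

import numpy as np

dz = np.array([5.0, 10.0, 20.0, 40.0])   # hypothetical layer thicknesses [m]
rho0, g = 1035.0, 9.81                   # assumed reference density [kg/m^3] and gravity [m/s^2]

p_bot = np.cumsum(dz) * rho0 * g              # pressure at layer bottoms [Pa]
p_top = np.concatenate(([0.0], p_bot[:-1]))   # pressure at layer tops [Pa]
p_avg = 0.5 * (p_top + p_bot) * 1.0e-5        # layer-averaged pressure [bar]

zc = np.cumsum(dz) - 0.5*dz                   # layer-center depths [m]
p_levitus = 0.059808*(np.exp(-0.025*zc) - 1) + 0.100766*zc + 2.28405e-7*zc**2

print(p_avg)       # ~[0.25, 1.02, 2.54, 5.58] bar
print(p_levitus)   # ~[0.25, 1.00, 2.49, 5.50] bar, i.e. agreement to within a couple percent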
Great, thanks guys! I'll update that computation next; I was just putting in the workaround we talked about where I'll tell MARBL kmt is the bottom-most thick layer to avoid accumulating sediment in a vanishing layer. (Prior to putting the tweak in, the model was crashing with co2calc errors pretty much immediately)
It looked like the vanishing layers were all 1 mm thick, so I just put in the following kludge that ignores any layers on the bottom less than 1 cm thick:
! TODO: better way of handling vanishing layers
!       (this is a z* workaround for layers that have vanished at bottom of ocean)
if ((.not. set_kmt) .and. (dz(k) > 0.01)) then
  set_kmt = .true.
  marbl_instances%domain%kmt = k
end if
That block appears in a loop do k = GV%ke, 1, -1, which is handy for figuring out the smallest k I can ignore
Do you know which of the tracers was causing co2calc to blow up?
I think it was dic or dic_alt_co2, but I'm not 100% sure. I'm adding some location information to the log, and then I'll copy / paste the error (which is triggered on day 5 or 6 even with the KMT kludge)
So this is the error message MARBL is reporting:
652:WARNING from PE 580: (Task 580) MARBL WARNING (marbl_interior_tendency_mod:compute_carbonate_chemistry): Warning reported from marbl_co2calc_interior() with dic_alt_co2
652:WARNING from PE 580: (Task 580) Message from (lon, lat) (-233.351, 29.892). Level: 18
652:WARNING from PE 580: (Task 580) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) it = 1
652:WARNING from PE 580: (Task 580) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) x1,f = 0.1826861E-006-0.5838217E-003
652:WARNING from PE 580: (Task 580) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) x2,f = 0.4588866E-006-0.8693591E-003
652:WARNING from PE 580: (Task 580) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) it = 2
652:WARNING from PE 580: (Task 580) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) x1,f = 0.1152671E-006-0.4909484E-003
652:WARNING from PE 580: (Task 580) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) x2,f = 0.7272863E-006-0.1065694E-002
652:WARNING from PE 580: (Task 580) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) it = 3
652:WARNING from PE 580: (Task 580) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) x1,f = 0.4588866E-007-0.3628323E-003
652:WARNING from PE 580: (Task 580) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) x2,f = 0.1826861E-005-0.1507087E-002
652:WARNING from PE 580: (Task 580) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) it = 4
652:WARNING from PE 580: (Task 580) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) x1,f = 0.7272863E-008-0.6614273E-004
652:WARNING from PE 580: (Task 580) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) x2,f = 0.1152671E-004-0.2108912E-002
652:WARNING from PE 580: (Task 580) MARBL ERROR (marbl_co2calc_mod:drtsafe): bounding bracket for pH solution not found
652:WARNING from PE 580: (Task 580) MARBL ERROR (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) dic = 0.1988476E+004
652:WARNING from PE 580: (Task 580) MARBL ERROR (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) ta = 0.2342346E+004
652:WARNING from PE 580: (Task 580) MARBL ERROR (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) pt = 0.2246129E+000
652:WARNING from PE 580: (Task 580) MARBL ERROR (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) sit = 0.4285969E+001
652:WARNING from PE 580: (Task 580) MARBL ERROR (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) temp = 0.1747161E+002
652:WARNING from PE 580: (Task 580) MARBL ERROR (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) salt = 0.3450777E+002
652:WARNING from PE 580: (Task 580) MARBL ERROR (marbl_co2calc_mod:comp_htotal): Error reported from drtsafe
652:WARNING from PE 580: (Task 580) MARBL ERROR (marbl_co2calc_mod:marbl_co2calc_interior): Error reported from comp_htotal()
652:WARNING from PE 580: (Task 580) MARBL ERROR (marbl_interior_tendency_mod:compute_carbonate_chemistry): Error reported from marbl_co2calc_interior() with dic
652:WARNING from PE 580: (Task 580) MARBL ERROR (marbl_interior_tendency_mod:marbl_interior_tendency_compute): Error reported from compute_carbonate_chemistry()
652:WARNING from PE 580: (Task 580) MARBL ERROR (marbl_interface:interior_tendency_compute): Error reported from marbl_interior_tendency_compute()
652:WARNING from PE 580: (Task 580) MARBL ERROR (MARBL_tracers_column_physics): Error reported from marbl_instances%interior_tendency_compute()
652:WARNING from PE 580: ERROR reported from MARBL library
DIC and DIC_ALT_CO2 are identical, so previous levels in the column are reporting the same warnings about not finding a root in earlier iterations (but they do eventually converge).
In POP, these errors typically indicate a CFL violation. Given where the development of the MOM driver is, I'm guessing the range of potential issues is significantly larger :) @Keith Lindsay do the dic, ta, etc. values printed out above hint at any potential forcing issues?
The values don't strike me as being crazy. Could you try running with max_bracket_grow_it = 4 in MARBL's marbl_co2calc_mod.F90?
Increasing max_bracket_grow_it from 3 to 4 let me run a full month. Does this need to be a permanent change in MARBL? Looking through the changes I've made in the MOM driver, the only real issue I can see is that I haven't read in FESEDFLUX yet.
I do suggest increasing max_bracket_grow_it in MARBL. There have been a few cases where increasing it has enabled some users to run.
@Matt Long for FESEDFLUX, the MOM SCRIP grid file is /glade/p/cesmdata/cseg/mapping/grids/tx0.66v1_SCRIP_190314.nc. The tracer initial conditions file I showed you is /glade/work/mlevy/cesm_inputdata/ecosys_jan_IC_omip_MOM_tx0.66v1_c190925.nc and the vertical dimension / coordinate in that file is named DEPTH
Thanks @Michael Levy. I am looking into this. First thing I realized is that I had pulled out the Fe sedflux forcing generation into its own repo:
https://github.com/marbl-ecosys/forcing-Fe-sedflux
@Michael Levy, I have created two temporary FESEDFLUX forcing files for you:
Note that there are 2 files; both include the FESEDFLUXIN variable.
In file (1) there are two additional variables:
FESEDFLUXIN = FESEDFLUXIN_oxic + FESEDFLUXIN_reduce
At present, POP simply reads FESEDFLUXIN and ignores the other variables.
In the future, we may be interested in simulating Fe isotopes and these sources have different signatures, so there may be interest in separating. Please consider this as you build the infrastructure to incorporate this forcing—though the highest priority remains figuring out how to apply this z-level forcing in the ALE context.
There are several things I would like to do to clean up the workflow generating this forcing. In lieu of tackling that full workflow now, I have simply regridded the POP_gx1v7 forcing to the tx0.66v1 grid and to a depth coordinate with 112 vertical levels (extending to 6500 m).
I think this forcing dataset should be fine for testing and development. Let me know what issues you encounter.
We will ultimately need to revisit and address the workflow issues I referenced above.
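The actual regridding lives in the forcing-Fe-sedflux notebook (linked a few messages below); as a very rough sketch of the horizontal + vertical remap described here, with hypothetical file names and bilinear interpolation purely for illustration (the notebook is the authoritative workflow), it boils down to something like:

import numpy as np
import xarray as xr
import xesmf as xe

ds_pop = xr.open_dataset("fesedflux_POP_gx1v7.nc")   # hypothetical source forcing with lat/lon coords
ds_tgt = xr.open_dataset("tx0.66v1_latlon.nc")       # hypothetical target grid with lat/lon coords

regrid = xe.Regridder(ds_pop, ds_tgt, "bilinear")    # horizontal remap to the tx0.66v1 grid
fesed = regrid(ds_pop["FESEDFLUXIN"])

z_new = xr.DataArray(np.linspace(0.0, 6500.0, 112), dims="depth")  # 112 levels down to 6500 m
fesed_on_new_z = fesed.interp(depth=z_new)           # vertical remap to the new depth coordinate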
Thanks @Matt Long! So to start out I want to read FESEDFLUXIN from both files, convert from umol / m^2 / d -> nmol / cm^2 / s, and then sum the two fields to provide a single forcing field... but in the future we might read FESEDFLUXIN_oxic and FESEDFLUXIN_reduce separately so it'll be good to have those fields in the forcing file?
(It looks like POP modifies the fevent file to add all values below the ocean floor to the KMT level; I'm not entirely clear on how that would map to z* but initially I'll just focus on getting the fields read and remapped)
Convert from umol / m^2 / d -> nmol / cm^2 / s

In POP we have the following in the namelist:

fesedflux_input%scale_factor = 1.1574e-6

which is the conversion from umol / m^2 / d to nmol / cm^2 / s.
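A quick check of that factor (plain Python, just arithmetic, to show it really is a pure unit conversion):

umol_to_nmol = 1.0e3    # 1 umol = 1000 nmol
m2_to_cm2 = 1.0e4       # 1 m^2 = 1e4 cm^2
day_to_s = 86400.0      # seconds per day
print(umol_to_nmol / (m2_to_cm2 * day_to_s))   # 1.1574e-06, matching fesedflux_input%scale_factor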
...and then sum the two fields to provide a single forcing field... but in the future we might read FESEDFLUXIN_oxic and FESEDFLUXIN_reduce separately so it'll be good to have those fields in the forcing file?
The total is already in the file, so if expedient, just read that. Otherwise reading separately and summing is good.
@Michael Levy, btw this is the notebook that created those files:
https://github.com/marbl-ecosys/forcing-Fe-sedflux/blob/mom6-dev/notebooks/preliminary_MOM6_forcing.ipynb
FYI, there is a PR in the NOAA-GFDL/MOM6 repo that, among other things, changes the API of tracer_Z_init. I'm not sure when the PR is projected to be merged. It looks like it will be straightforward to update the MARBL driver when that update eventually gets to the NCAR MOM6 repo.
It might be prudent to follow that NOAA-GFDL/MOM6 repo in order to be aware of MOM6 changes in the works.
@Andrew Shao and I coded up a rough implementation of mapping FESEDFLUX from the WOA z-grid to the MOM grid; we had a couple of mis-steps along the way, and I needed to bump max_bracket_grow_it from 4 to 5, but I now have a MOM history file where the FESEDFLUX field has some non-zero values: /glade/scratch/mlevy/C1850MOMECO.041/run/C1850MOMECO.041.mom6.hm_bgc_0001_01.nc

For the conservation property, I need to verify that the column sum on the MOM grid matches the column sum in the obs, right?
Yes, those are the correct quantities to compare to check conservation.
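A minimal sketch of that check (hypothetical file, variable, and dimension names; the real files will differ):

import xarray as xr

src = xr.open_dataset("fesedflux_src_woa_grid.nc")   # hypothetical source (obs) file
mom = xr.open_dataset("fesedflux_mom_grid.nc")       # hypothetical remapped MOM forcing

src_col = src["FESEDFLUXIN"].sum("depth")            # column sum on the source grid
mom_col = mom["FESEDFLUXIN"].sum("zl")               # column sum on the MOM grid

# area-weighted global totals should also agree if the remap conserves the column sums
print(float((src_col * src["area"]).sum()), float((mom_col * mom["area_t"]).sum()))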
Regarding reading time-varying ndep from a file, CDEPS (yet another repo to follow) has a PR to pass ndep from datm. If that PR gets addressed sooner rather than later, then the ability to read time-varying ndep from a file could have its priority lowered.
In the meantime, it might make sense to read ndep from an annual mean file instead of effectively using a perpetual January mean.
Just to follow up on reading nitrogen deposition from a climatology -- I followed @Gustavo M Marques's advice, and used time_interp_external() to read the data. I needed to clean up the forcing file a little bit. Some was trivial, like adding the axis attribute to coordinate variables, but I also needed to use 1D arrays for LAT and LON instead of the 2D arrays that come from the displaced pole grid. I'm not sure of the best way to verify that the deposition fields are being interpolated correctly, but maybe on Monday I'll run a full year and make sure NHx_FLUX and NOy_FLUX vary from month to month in a similar manner to the forcing fields.
I think this was the last hurdle to actually running MARBL + MOM with the default MOM tunings; next week I'll start work on making it easier to control the BGC output. Once we can get more output from MARBL into netCDF files, does it make sense to look at a multi-year C or G case to verify the model can run that long and the output looks reasonable? I can launch a run and then work on getting user_nl_marbl set so we can try to tune the model.
Following up on the current implementation to read NDEP forcing (via time_interp_external()): I ran for three years in /glade/scratch/mlevy/C1850MOMECO.044/run and have confirmed that

NOx_FLUX changes from month to month within a given year (e.g. 0001-01 is different than 0001-08)

NOx_FLUX does NOT change from year to year for a given month (e.g. 0001-03 is identical to 0003-03). The exception is that there is a small (O(1e-10)) difference between 0001-01 and 0002-01, while 0003-01 is identical to the latter. I assume that MOM6 starts on the second coupling interval in CESM? So years 0002 and 0003 include the 12:00a - 1:00a window on January first while the first year starts at 1:00a?

My conclusion from the above is that time_interp_external() is reading climatological data exactly the way we expect it to
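For reference, the year-to-year comparison boils down to something like this (hypothetical file names; the real monthly history files sit in the run directory above):

import numpy as np
import xarray as xr

mar_yr1 = xr.open_dataset("C1850MOMECO.044.mom6.hm_bgc_0001_03.nc")["NOx_FLUX"].values
mar_yr3 = xr.open_dataset("C1850MOMECO.044.mom6.hm_bgc_0003_03.nc")["NOx_FLUX"].values

# for a monthly climatology read via time_interp_external(), the same month in
# different years should be (numerically) identical
print(np.nanmax(np.abs(mar_yr3 - mar_yr1)))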
@Michael Levy, here are two riverine nutrient forcing data files for MOM
In this directory:
/glade/work/mclong/cesm_inputdata
Awesome, thanks! I should be able to test them out tomorrow and let you know how things look
Let me know if you see any anomalies. For the time being, I included a field called TAREA as my code is set up to integrate using that variable (in cm^2). Would be good to fix this—as well as the fake 1D coordinate issue we discussed.
@Matt Long the file you created includes LAT and LON dimensions, but also has nlat and nlon (used by the variables we want to read). With some selective editing:
$ ncdump -h /glade/work/mclong/cesm_inputdata/riv_nut.gnews_gnm.rx1_to_tx0.66v1_nnsm_e1000r300_190315.20210401.nc
netcdf riv_nut.gnews_gnm.rx1_to_tx0.66v1_nnsm_e1000r300_190315.20210401 {
dimensions:
        LAT = 458 ;
        LON = 540 ;
        time = UNLIMITED ; // (21 currently)
        nlat = 458 ;
        nlon = 540 ;
variables:
        double LAT(LAT) ;
                LAT:_FillValue = NaN ;
                LAT:long_name = "latitude" ;
                LAT:units = "degrees_north" ;
                LAT:axis = "Y" ;
        double LON(LON) ;
                LON:_FillValue = NaN ;
                LON:long_name = "longitude" ;
                LON:units = "degrees_east" ;
                LON:axis = "X" ;
        double din_riv_flux(time, nlat, nlon) ;
                din_riv_flux:_FillValue = 9.96920996838687e+36 ;
                din_riv_flux:long_name = "River flux of DIN" ;
                din_riv_flux:units = "nmol/cm^2/s" ;
(same is true on the JRA025 grid). Any chance you could regenerate these without nlat and nlon? I can't figure out the NCO magic needed to fix this; ncrename -d nlat,LAT -d nlon,LON doesn't seem to be the right tool for the job:
ERROR: nco_rename_dim() cannot define dimension name "LAT" which is already in use
nco_err_exit(): ERROR Short NCO-generated message (usually name of function that triggered error): nco_rename_dim()
nco_err_exit(): ERROR Error code is -42. Translation into English with nc_strerror(-42) is "NetCDF: String match to name in use"
nco_err_exit(): ERROR NCO will now exit with system call exit(EXIT_FAILURE)
yes, I can fix this
Thanks! I also needed to add a couple of attributes to the time variable: axis = "T" and cartesian_axis = "T" -- if you're regenerating the files anyway, could we add those two, uh, too?
yep, sorry I missed those
River fluxes: I wrote a quick python function to generate the output I thought I needed:
import os
import xarray as xr

def generate_updated_file(filename, inputdir, out_dir, varlist):
    ds_in = xr.open_dataset(os.path.join(inputdir, filename), decode_times=False)

    # 1. Construct new dataset from coords of second dataset
    coords = ["time", "LAT", "LON"]
    ds = ds_in[[var for var in coords if var != "time"]]
    #    * Time coordinate needs this kludge for now
    #      i. I don't know how to tell MOM to read from 1900 (POP uses shr_stream_year_start /
    #         shr_stream_year_align) so I subtract 1899 years from time dimension
    #      ii. I don't know how to tell MOM to use the first value for dates before day 181.5,
    #          so I subtract 181.5 days
    ds["time"] = ds_in["time"] - 1899*365 - 181.5

    # 2. Copy necessary variables over, dropping _FillValue encoding
    for var in varlist:
        ds[var] = xr.DataArray(ds_in[var].data, dims=["time", "LAT", "LON"])
        ds[var].attrs = ds_in[var].attrs
        ds[var].encoding["_FillValue"] = None

    # 3. Update coordinate attributes
    #    i. Make sure they match original file
    #    ii. Drop _FillValue encoding
    #    iii. Add necessary attributes for time dimension
    for var in coords:
        ds[var].attrs = ds_in[var].attrs
        ds[var].encoding["_FillValue"] = None

    # Some fixes for time
    ds["time"].attrs["axis"] = "T"
    ds["time"].attrs["cartesian_axis"] = "T"

    # 4. Write to disk
    ds.to_netcdf(os.path.join(out_dir, filename), unlimited_dims="time", format="NETCDF3_64BIT")
    print(f"DONE: {filename}")
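For what it's worth, a hypothetical invocation (the directories and variable list here are illustrative, drawn from files mentioned earlier in this thread):

generate_updated_file(
    "riv_nut.gnews_gnm.rx1_to_tx0.66v1_nnsm_e1000r300_190315.20210401.nc",
    inputdir="/glade/work/mclong/cesm_inputdata",
    out_dir="/glade/work/mlevy/cesm_inputdata",
    varlist=["din_riv_flux", "alk_riv_flux"],
)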
And I think I have all the infrastructure in place to read in the file and apply fields to the STF. However, it looks pretty far off from the POP datasets. Comparing /glade/p/cesmdata/cseg/inputdata/ocn/pop/gx1v7/forcing/riv_nut.gnews_gnm.rx1_to_gx1v7_nn_open_ocean_nnsm_e1000r300_marginal_sea_170413.20190602.nc and /glade/work/mclong/cesm_inputdata/riv_nut.gnews_gnm.rx1_to_tx0.66v1_nnsm_e1000r300_190315.20210401.nc shows that fields like alk_riv_flux differ by a couple orders of magnitude. Is this expected since we're using the smooth map in the open ocean rather than nearest neighbor? I'm still seeing very low tracer values at the mouth of rivers, and the "MOM with runoff" output is much more similar to the "MOM without runoff" output than the POP output.

MOM run without runoff: /glade/scratch/mlevy/C1850MOMECO.057/run
MOM run with runoff: /glade/scratch/mlevy/C1850MOMECO.063/run
POP run: /glade/scratch/mlevy/C1850ECO.pop_defaults/run

MOM monthly output for January is in {CASE}.mom6.hm_bgc_monthly_0001_01.nc and {CASE}.mom6.hm_bgc_monthly_z_0001_01.nc (former has 2D fields, latter has 3D)
New files here /glade/work/mclong/cesm_inputdata
Note that the flux units have changed, now in mmol/m^2/s.
@Gustavo M Marques or @Alper Altuntas -- it looks like something has changed with the Workflows portion of https://github.com/NCAR/MOM6 -- my PR says "First-time contributors need a maintainer to approve running workflows" even though the workflow ran automatically for the previous two commits.
Also, when I run the Expression Verification test, I'm getting a lot of failures like
work/tc1.b/symmetric/chksum_diag work/tc1.b/dim.z/chksum_diag differ: byte 103194, line 1245
1245,1248c1245,1248
< h-point: mean= 5.3175737620383399E+06 min= 2.2836111531271545E+03 max= 2.4117120854396626E+07 ocean_model-T_zint
< h-point: c= 2316 ocean_model-T_zint
< h-point: mean= 4.4396160250802070E+05 min= 2.2836111531271545E+03 max= 2.5004806607685145E+06 ocean_model-T_zint_100m
< h-point: c= 2382 ocean_model-T_zint_100m
---
> h-point: mean= 2.5964715634952831E+03 min= 1.1150445083628684E+00 max= 1.1775937917185853E+04 ocean_model-T_zint
> h-point: c= 2372 ocean_model-T_zint
> h-point: mean= 2.5964715634952831E+03 min= 1.1150445083628684E+00 max= 1.1775937917185853E+04 ocean_model-T_zint_100m
> h-point: c= 2372 ocean_model-T_zint_100m
1309,1312c1309,1312
< h-point: mean= 4.2729781503789179E+07 min= 1.8112499999999993E+04 max= 1.8646968871361169E+08 ocean_model-S_zint
< h-point: c= 3052 ocean_model-S_zint
< h-point: mean= 1.5499771875000000E+06 min= 1.8112499999999993E+04 max= 3.6225000000000009E+06 ocean_model-S_zint_100m
< h-point: c= 2690 ocean_model-S_zint_100m
---
> h-point: mean= 2.0864151124897060E+04 min= 8.8439941406249964E+00 max= 9.1049652692193209E+04 ocean_model-S_zint
> h-point: c= 2980 ocean_model-S_zint
> h-point: mean= 2.0864151124897060E+04 min= 8.8439941406249964E+00 max= 9.1049652692193209E+04 ocean_model-S_zint_100m
> h-point: c= 2980 ocean_model-S_zint_100m
FAIL: Diagnostics tc1.b.dim.z.diag have changed.
I'm not sure what part of the code I could have inadvertently modified to cause this kind of failure -- any ideas on where I should look? Or are my baselines out of date somehow? I merged dev/ncar_20210409 onto my branch and it looks like there's been more activity on dev/ncar since then.
The change came in when we merged the main branch. MOM6 used to be tested via Travis and now this is done via Github actions. Should I click on "Approve and run"?
I clicked on "Approve and run" and the tests are now running.
On your second question, perhaps your baseline is out of date. Maybe merging dev/ncar will fix these failures.
@Gustavo M Marques actually, it looks like it's probably a bug on my end -- _zint and _zint_100m are new diagnostics I introduced to the tracer package, and I suspect I'm not doing the unit conversion correctly (but at least it's being caught by the testing!)
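A rough offline sanity check for vertical-integral diagnostics like these (hypothetical file and variable names, just to illustrate the comparison) is to recompute sum(T*h) over the native layers and difference it against what the model wrote, keeping an eye on the thickness units:

import xarray as xr

ds = xr.open_dataset("native_grid_history_file.nc")   # hypothetical native-grid history file
recomputed = (ds["temp"] * ds["h"]).sum("zl")          # degC * m, if h is written in m
print(float(abs(recomputed - ds["T_zint"]).max()))     # should be ~0 when units are handled correctly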
@Michael Levy, @Matt Long , @Keith Lindsay
Here is the geothermal module for MOM6.
I'm trying to remove what we've been calling the "KMT kludge" -- to avoid issues with vanishing layers at the bottom of the column, we had told MARBL that the number of levels in the column is the index of the deepest layer that is more than 1 cm thick (so if layers 64 and 65 were extremely thin, we told MARBL that kmt = 63). This was necessary because we were converting a bottom flux (sinking matter hitting the sea floor and being remineralized) to a tendency by simply dividing by dz in the bottom layer. We now have a better scheme for that computation based on a pre-computed vertical diffusion profile, so I tried to run with every column (using kmt = GV%ke) but got the typical marbl_co2calc_mod.F90 error that I usually associate with a CFL violation:
284:WARNING from PE 212: (Task 212) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) it = 4
284:WARNING from PE 212: (Task 212) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) x1,f = 0.7752463E-008-0.2265376E-004
284:WARNING from PE 212: (Task 212) MARBL WARNING (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) x2,f = 0.1228683E-004-0.2126109E-002
284:WARNING from PE 212: (Task 212) MARBL ERROR (marbl_co2calc_mod:drtsafe): bounding bracket for pH solution not found
284:WARNING from PE 212: (Task 212) MARBL ERROR (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) dic = 0.2097770E+004
284:WARNING from PE 212: (Task 212) MARBL ERROR (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) ta = 0.2325961E+004
284:WARNING from PE 212: (Task 212) MARBL ERROR (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) pt = 0.9106563E+000
284:WARNING from PE 212: (Task 212) MARBL ERROR (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) sit = 0.4590227E+001
284:WARNING from PE 212: (Task 212) MARBL ERROR (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) temp = 0.9627328E+001
284:WARNING from PE 212: (Task 212) MARBL ERROR (marbl_co2calc_mod:drtsafe): (marbl_co2calc_mod:drtsafe) salt = 0.3368320E+002
284:WARNING from PE 212: (Task 212) MARBL ERROR (marbl_co2calc_mod:comp_htotal): Error reported from drtsafe
This is happening in /glade/work/mlevy/codes/CESM/cesm2_3_alpha02d+mom_marbl/cases/C1850MOMECO.079 (rundir is /glade/scratch/mlevy/C1850MOMECO.079/run); 9332129 was run with max_bracket_grow_it = 3, and I'm currently rebuilding with max_bracket_grow_it = 5. I'm not sure where else to look for issues, so if anyone has ideas about other parts of the code that may be causing problems by dividing by dz when dz is close to zero, I'd appreciate it.
9333054 is running with the increased max_bracket_grow_it, and it's gotten through a few days already (9332129 aborted in the first day) so it may have just taken an extra iteration or two in a thin layer to get past this. :fingers_crossed:
This is encouraging. A class of things, but not the only things, to look at are the BGC values in the thin layers, to see if they have values considerably different from the values in adjacent non-thin layers. Could you do a multi-month run with BGC 3D variables on the "ocean_model" vertical axis, instead of "ocean_model_z"?
@Keith Lindsay I set up /glade/scratch/mlevy/C1850MOMECO.080/run to run for six months -- there will still be a file with the hm_bgc_monthly_z stream name, but the fields inside that file will be written on the native grid rather than vertically interpolated (I figured for a one-off experiment, it would be easier to explain the inconsistent file name than try to remember how to move variables between streams). It's currently in the queue, and I'll keep an eye on it to make sure it actually runs
I squeezed in a few more runs before the NWSC shutdown this week; /glade/scratch/mlevy/C1850MOMECO.085/run has 6 months of output, with monthly 3D variables output both on the model grid and regridded to the standard vertical grid (mom6.hm_bgc_monthly_z_YYYY_MM.nc and mom6.h_bgc_monthly_z_YYYY_MM.nc, respectively). This code does not have the KMT kludge in it, and also updates the default value of MIX_BOUNDARY_TRACER_ALE to force a minimum diffusivity in vanishing layers at the bottom of columns... so the weights we pass to MARBL are reasonable.

The bad news is that I can't get MOM+MARBL running on izumi, so I don't think there's much I can do while cheyenne is down. I'll poke through my logs on izumi and see if I can figure out what the problem is, but even after increasing NTASKS_OCN (the default PE layout was only running MOM on 144 cores) I couldn't get through a model day. @Keith Lindsay -- given the improvement we saw by updating MIX_BOUNDARY_TRACER_ALE, can I merge #381 into MARBL?
Some plots looking at the effect of changing MIX_BOUNDARY_TRACER_ALE are in a notebook on github
Continuing the investigation into differences in the Fe tracer between MOM and POP, I ran without any river fluxes and without iron sediment flux... still seeing a big difference in iron at the surface, especially in the tropics (this is an average over a one-year run, hastily done without weighting by days per month):

Looking at maps of surface values, the west coast of Africa stands out as a big difference. POP has z_t in the title, MOM has z_pop_l:
It looks like there are differences in the IRON_FLUX field. POP map follows zonal mean, MOM map is last:
IRON_FLUX-zonal-mean.png
IRON_FLUX-POP.png
IRON_FLUX-MOM.png
edit: weird, I was expecting the images to be inlined rather than linked.
Thanks @Michael Levy—this makes sense. I presume you can track down where these differences arise in the driver setup/configuration. We typically run with driver_derived forcing and there are multiple places in POP involved in the IRON_FLUX computation.
Matt Long said:
Thanks Michael Levy—this makes sense. I presume you can track down where these differences arise in the driver setup/configuration. We typically run with driver_derived forcing and there are multiple places in POP involved in the IRON_FLUX computation.
Yeah, it looks like I'm using C-compset defaults for a few parameters that have different settings in POP for G compsets:
<dust_ratio_thres>60.0</dust_ratio_thres>
<dust_ratio_thres ocn_coupling="partial">69.00594</dust_ratio_thres>
<fe_bioavail_frac_offset>0.01</fe_bioavail_frac_offset>
<fe_bioavail_frac_offset ocn_coupling="partial">0.0146756</fe_bioavail_frac_offset>
<dust_ratio_to_fe_bioavail_frac_r>170.0</dust_ratio_to_fe_bioavail_frac_r>
<dust_ratio_to_fe_bioavail_frac_r ocn_coupling="partial">366.314</dust_ratio_to_fe_bioavail_frac_r>
I'm pretty sure I can change these via user_nl_mom, so I'll do that and re-run and see how things look
Adding the following to user_nl_mom:
! G-compset default values
DUST_RATIO_THRES = 69.00594
DUST_RATIO_TO_FE_BIOAVAIL_FRAC = 2.729898e-3 ! 1 / 366.314
FE_BIOAVAIL_FRAC_OFFSET = 0.0146756
seems to have made a huge difference. The 1-year MOM case is still running, but here are updated zonal means over the first 7 months [averaging POP over 7 months as well]:
Fe-zonal-mean-2.png
IRON_FLUX-zonal-mean-2.png
I'll update both the MOM and POP runs to add rivers back in and see how things look, and then a third round with sediment flux as well (assuming the rivers look good)
That's a great improvement! Do you have a sense of what might explain the Arctic discrepancy?
Do you have a sense of what might explain the Arctic discrepancy?
A few theories, but I haven't had a chance to really investigate. This is just the top layer in the plots, so one thing I can do is make these plots for deeper layers... or even just look at the column-integrated iron instead of surface values. I'm definitely open to suggestions on where to look next :)
I was wondering whether there is differing Fe treatment in ice—but these are C cases, right?
The seasonality wouldn't make sense either, as the SH went through winter in the 7-month integration
Matt Long said:
I was wondering whether there is differing Fe treatment in ice—but these are C cases, right?
It could be ice - these are G cases (and one of the big differences between these runs is that POP is still using MCOG; I don't know if that's a possible explanation for iron differences but I could re-run the POP case with lmcog = .false. just to minimize differences between the runs)
hmm... MCOG has the most substantial impact on spring productivity, which consumes Fe—so indeed, I'd expect that to generate diffs. In coupled configurations, dust fluxes deposited on ice are retained, then subsequently released from the ice to the ocean. I can't remember how this works in G cases. You could consider looking directly at the respective coupler history files.
I'm almost done with re-running POP without MCOG, but here's the Fe zonal mean comparison after 8 months:
Fe-8-month-zonal-mean-no-mcog.png
That's definitely much closer than before, though there is still some separation between the two
That does look better. Worth checking the forcing fields. Does the global integral of IRON_FLUX match? Could put on a common grid to look at diffs: if the sea ice zone pops out, we'll know there's a problem there.
For the global integrals, I multiplied da*dz*area (where dz was in units of m and area in m^2), took the mean over time, and then summed over the remaining dimensions. I multiplied by mmol_Fe_to_Tg = 55.845e-15 for unit conversion; do

MOM global sum of Fe: 58.43 Tg
POP global sum of Fe: 59.76 Tg

seem like reasonable quantities, or am I off by several orders of magnitude? Assuming these are in the right ballpark, it's a 2-3% difference between them. I'll work on getting them to a common grid to look at the difference map
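For reference, the calculation above is essentially the following (hypothetical file and variable names; Fe in mmol/m^3, dz in m, area in m^2):

import xarray as xr

mmol_Fe_to_Tg = 55.845e-15   # 1 mmol Fe = 55.845 mg = 55.845e-15 Tg
ds = xr.open_dataset("mom_monthly_means.nc")   # hypothetical file containing Fe, dz, and area
fe_Tg = float((ds["Fe"] * ds["dz"] * ds["area"]).mean("time").sum() * mmol_Fe_to_Tg)
print(f"Global Fe inventory: {fe_Tg:.2f} Tg")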
A couple more variables of interest are the dust and black carbon fluxes coming in from CICE -- these are all in g/cm^2/s (I converted the MOM output from kg/m^2/s).
SEAICE_DUST_FLUX_CPL-zonal-mean.png
SEAICE_BLACK_CARBON_FLUX_CPL-zonal-mean.png
Also, I just realized I computed the global integral of Fe, not IRON_FLUX... I can get that in just a minute
The iron fluxes are also a little different (intuitively these numbers look so small that I think I have to have messed up the unit conversion... but whatever I did wrong, I did wrong with both models)
MOM global sum of IRON_FLUX: 16.95 kg
POP global sum of IRON_FLUX: 17.06 kg
I would expect small differences as the grids are different. Those look close enough to me.
The last thing I checked was ice fraction... interestingly enough, there are noticeable differences in the southern hemisphere but the northern hemisphere looks pretty close:
I'll continue with the plan of adding rivers back in, and making sure things still look close before putting the sediment flux back in as well
difference in sea ice is not surprising—I think we saw substantial differences in Southern Ocean tracers indicating that the models have quite different representations of circulation in that region.
Using FESEDFLUX forcings that I generated by mapping sedfrac, POC, UVEL, and VVEL directly to the MOM grid (rather than computing sediment flux on the POP grid and mapping to MOM after the fact), iron is looking much better! This is with river fluxes back in the run as well as the sediment flux, so basically out-of-the-box (I still need to figure out how to hard-code the right parameter set for the GMOM cases):
yet-another-Fe-zonal-mean.png
zonal-mean-of-FESEDFLUX.png
Alkalinity doesn't look so bad in this run, either -- this is just the average over the first year (and surface values), but I think in previous runs this looked far worse (in the year 16-20 average):
Should I put together another 20 year run to see if (a) Fe really looks better, and (b) if Alkalinity diverges at some point?
@Keith Lindsay I looked back on my notes from the plots you shared on Oct 26 (at the ocean BGC group meeting), and I had written down
ALK: big differences at 60 S in year 1 at surface! sizeable surface differences in year 20.
I'm having trouble generating plots that show these differences, though - if the code to generate those plots is still available, could you share the year-1 alkalinity plot? I do notice a little separation between the two lines in my zonal-mean-of-ALK.png, so maybe it's something that is more obvious with a log scale for the y axis or even just a different range of y values?
Also, mom_oob.004 is running -- it has all the changes that led to the plots in my previous post. So far it's early in year 0003; I think it's on track to finish running sometime Friday? I'll be on PTO that day, but will compare to pop_control and pop_no_mcog on Monday.
I saw the diff in year 1, but it was less in year 20. Here it is, zooming in on the southern hemisphere:
pop_mom.003_alk_surf_0001.png
I'm now suspecting that this is from the sea ice initial condition.
I think the GMOM compset is starting off with uniform sea ice south of 60S.
When it melts, we're diluting ALK and DIC. I see a similar signal in DIC, though it is superimposed on the temperature dependent solubility.
that makes a lot of sense
Thanks @Keith Lindsay! Zooming in, I do see a similar separation between roughly 65 S and 55 S in my plots. This sounds like something beyond the scope of fixes we can make, but I can chat with Dave B and see if there's a timeline for improving the initial state / anything easy we can do in the meantime. Maybe I could convert the ice state at year 20 from one of our runs to a new initial condition file? I assume that wouldn't be completely spun up to equilibrium, but it might be better than what we have now
mom_oob.004 has run for 20 years, and output is in /glade/scratch/mlevy/archive/mom_oob.004/. Looking at Fe output, there is a clear improvement over mom_oob.003 but still some differences when compared with POP. Things look pretty good after a single year (these plots are all annual means):
fe-zonal-mean-0001-003-and-004.png
The average over five years shows huge increases in iron in the arctic / subarctic for 003, but now a loss of iron there (relative to both the POP output and the year 1 output) for 004 (second plot is just the POP case and MOM 004 to get back to reasonable y-axis values):
fe-zonal-mean-0005-003-and-004.png
fe-zonal-mean-0005-004.png
None of the three runs show much difference between year 5 and year 20, but here is the 004 vs pop plot:
From my notes, the changes made to 004 from 003 are
Update RIV_FLUX file (previous file did not have proper unit conversion / time dimension)
Fix vertical levels for POP history output (one level was still in cm instead of m)
Better masking of values where ref_depth >= bathyT
Get better PE layout OOB
Don't allocate memory for MARBL diags that MOM is not accumulating
Better dust -> iron parameters (I had hard-coded POP defaults for B / C compsets into MOM, but POP uses different values for G compsets)
Update FESEDFLUX files (map sedfrac, POC, UVEL, and VVEL to MOM grid and then compute sedflux rather than mapping sedflux from POP to MOM)
@Keith Lindsay if you have a chance, can you poke around the 004 output ahead of the MARBL meeting tomorrow afternoon? I hope to discuss the iron differences some, and then also have time on the agenda to look at the rest of the output and see if anything else needs our attention. I think I mentioned this above, but output through year 20 is in /glade/scratch/mlevy/archive/mom_oob.004/.
I'll take a look at 004 before tomorrow's meeting.
Thanks Keith!
A minor detail, but the plots I described as "year 5" yesterday are actually from year 6... I was using range(60,72) rather than range(48,60)
A major detail is that they are a one-year average from year 6, not "[t]he average over five years" (I meant "the average over year 0005" when I thought that was the year I was averaging over; I get it closer to correct later in the post when I mention "[n]one of the three runs show much difference between year 5 and year 20")
Just as @Keith Lindsay expected, changing the CICE initial condition (I used the restart written at the end of year 20 of mom_oob.004) had a huge effect on salinity and DIC in the first year:
salinity-zonal-mean-year-1.png
DIC-zonal-mean-year-1.png
For the first plot, I didn't have so in the output stream of mom_oob.004 so I couldn't include it in the comparison. For the second plot, the case named mom_oob.004.single_year is misnamed - that's mom_oob.004 with better initial conditions (all three lines are from the first year).
I think the only other big issue we discussed yesterday was getting the KPP mixing configured correctly (including nonlocal terms in passive tracers in MOM) - should we try to get a fix for that before starting the next 20 year run? There's also the minor issue of updating MARBL to put sinking particle fluxes on layer interfaces instead of the center.
We discussed implementing a scaled salt flux to restore DIC and ALK, but it's not clear to me if that is necessary for the next 20 year run, or if that is something to consider if better ice initial conditions don't improve those two tracers.
In my opinion, we don't need to run a new experiment with every incremental update. I suggest that we spend more time examining this experiment and looking for issues to correct before starting a new experiment. So I propose working on including the KPP nonlocal term in passive tracers.
Regarding the scaled salt fluxes, I think it separates into salt fluxes from the CICE model and salt fluxes from salinity restoring.
On the 1st category of salt fluxes, I spoke with @Marika Holland about including DIC and ALK tracers in the sea ice model. She thinks this is definitely doable. It seems to us like it would be similar to the inclusion of nutrients in CICE, something that E3SM has experience with, while we don't have in-house experience with it. So this might take some time to implement. I'll try to chat w/ Elizabeth Hunke about this at lunch tomorrow. She is in Boulder for the CICE hackathon. The best person to chat with about this is probably Nicole Jeffery at LANL.
I think we should proceed with adding scaled salt fluxes from salinity restoring to DIC and ALK surface fluxes.
I suggest Mike and I meet to develop a roadmap for the KPP nonlocal term and scaled salt fluxes from salinity restoring.
Sounds great, @Keith Lindsay !
It does look like there's a unit issue in the river flux file that I'll try to sort out this afternoon or tomorrow:
Global integral of DON_RIV_FLUX from POP output: 9.461e+12 mmol/m^3 cm/s
Global integral of don_riv_flux from POP forcing: 1.051e+13 nmol/cm^2/s
----
Global integral of DON_RIV_FLUX from MOM output: 9.464e+06 mmol/m^3 m/s
Global integral of don_riv_flux from MOM forcing: 1.050e+07 mmol/m^2/s
For POP, even though the units in the output differ from those in the forcing file, 1 mmol / m^3 = 1 nmol / cm^3 so 1 mmol / m^3 cm/s = 1 nmol / cm^2 / s (multiply both sides of the equation by 1 cm/s and reduce). For MOM, I would expect values roughly O(1e11 mmol / m^2 / s) (taking the POP units, and converting the cm/s to m/s) so I think we are off by four orders of magnitude in the actual generation of the MOM forcing file.
Michael Levy said:
It does look like there's a unit issue in the river flux file that I'll try to sort out this afternoon or tomorrow:
Global integral of DON_RIV_FLUX from POP output: 9.461e+12 mmol/m^3 cm/s
Global integral of don_riv_flux from POP forcing: 1.051e+13 nmol/cm^2/s
----
Global integral of DON_RIV_FLUX from MOM output: 9.464e+06 mmol/m^3 m/s
Global integral of don_riv_flux from MOM forcing: 1.050e+07 mmol/m^2/s

For POP, even though the units in the output differ from those in the forcing file, 1 mmol / m^3 = 1 nmol / cm^3 so 1 mmol / m^3 cm/s = 1 nmol / cm^2 / s (multiply both sides of the equation by 1 cm/s and reduce). For MOM, I would expect values roughly O(1e11 mmol / m^2 / s) (taking the POP units, and converting the cm/s to m/s) so I think we are off by four orders of magnitude in the actual generation of the MOM forcing file.
I just realized this is incorrect -- the difference between the two is that I am integrating, so the POP units should be multiplied by cm^2 (units of TAREA) while the MOM units should be multiplied by m^2 (units of area_t). So I think everything actually matches up
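In numbers (values copied from above), the two integrals do line up once the different area units are folded in:

pop_integral = 1.051e13   # nmol/s:  nmol/cm^2/s summed with cm^2 (TAREA) weights
mom_integral = 1.050e7    # mmol/s:  mmol/m^2/s summed with m^2 (area_t) weights
print(pop_integral * 1.0e-6)   # nmol/s -> mmol/s gives ~1.051e7, consistent with the MOM value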
I updated my MARBL driver sandbox to cesm2_3_beta08, got the NUOPC driver working (I think - at the very least, it is running without crashing), and then updated the MARBL driver to use the KPP nonlocal terms from NCAR/MOM6#202. I ran into issues with the JRA repeated-year forcing, so I'm running with JRA interannual forcing.

POP baseline is archived in /glade/scratch/mlevy/archive/g.e23b08.TL319_g17.G1850POPECO_JRA.no_mcog -- I ran for 24 years, though the MOM companion is only set up to run for 20... running POP in 6 year chunks let me spit out restart files every 3 years.

My first attempt at running the MOM case didn't provide output on the POP vertical grid and had the wrong default values for a few MARBL parameters (POP uses different defaults for G compsets than C/B for DUST_RATIO_THRES, DUST_RATIO_TO_FE_BIOAVAIL_FRAC, and FE_BIOAVAIL_FRAC_OFFSET, but the updated values don't get added to MOM_override yet), so my second attempt is still running.

The first 8 years are in /glade/scratch/mlevy/archive/g.e23b08.TL319_t061.G1850MOMMARBL_JRA.002 and I'm seeing about 9.5 SYPD so I'm hoping to have 20 years by mid-afternoon tomorrow :fingers_crossed:
@Keith Lindsay and @Matt Long I'm hoping we can spend some time at our 3:30 meeting looking at the early output - I'm still noticing huge signatures from river runoff in fields like ALK
and DIC
, but that's expected / possibly the next issue to address? For example here's the Dec 0008 surface DIC on the same color bar (1150 -> 4350 mmol / m^3; the MOM minimum is 300 mmol / m^3 but only in some coastal regions):
By surface, I mean "top level of POP output" and "top level of MOM output when writing diagnostics on the POP vertical levels"... I can also provide "top level of MOM output" if that is a more meaningful comparison, but visually it's very similar to what is shown above
The 20-year MOM run finished... so there is now output for an updated MOM vs POP comparison
/glade/scratch/mlevy/archive/g.e23b08.TL319_g17.G1850POPECO_JRA.no_mcog
/glade/scratch/mlevy/archive/g.e23b08.TL319_t061.G1850MOMMARBL_JRA.002
The diagnostic DUST_FLUX_IN is O(10) larger in MOM than in POP. I'm suspicious about the conversion of dust flux from kg/m^2/s to MARBL's CGS units. The MOM MARBL driver is multiplying by kg_m2_s_conversion = kg_m2s_to_RZ_T. I think this is just taking care of dimension scaling, but it doesn't take care of MKS->CGS. I think we need to also be multiplying by 0.1.
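That factor of 0.1 in numbers (just arithmetic):

kg_to_g = 1.0e3      # g per kg
m2_to_cm2 = 1.0e4    # cm^2 per m^2
print(kg_to_g / m2_to_cm2)   # 0.1 (g/cm^2/s) per (kg/m^2/s)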
I think the scaling of IRON_FLUX is correct.
The impact on Fe is complicated. Increased dust flux enhances scavenging, which I see in the output. However, the remineralization of dust also adds Fe to the system.
POC_FLUX_100m is similar in MOM and POP, but pocToSed is about 46% larger in MOM than in POP, in year 1. This might be because of an increase in ballasting from excessive dust.
MARBL generates a surface flux of PO4 and SiO3 from dust flux. So those are increased.
MARBL-MOM6 expr 006, through year 4, looks improved compared to 005. Here are maps of Arctic Fe over the top 10m in 004, 005, 006, and POP, all with MCOG:
image.png
Global sinking Fe flux at 100m is now lower in 006 than in POP, while previous MOM experiments were higher than in POP.
image.png
I'll need to dig in more.
The left column is tracer concentration. The right column is tracer concentration normalized to salinity=35.