@Allison Baker, we have re-run a portion of the CESM-LE without Mt. Pinatubo forcing, but we're seeing larger-than-expected differences (based on naive assumptions) in some of the fields prior to when our modified forcing kicks in. @Amanda Fay has some plots that she can share. I am wondering if you have perspective on how large differences attributable to machine differences should be.
The data from these integrations are here:
/glade/campaign/univ/udeo0005/cesmLE_no_pinatubo
The runs begin on 1990-01-01; our forcing modification kicks in mid-1991. Therefore, the differences during 1990 should be minimal between our experiment and the original. We're planning to look at some aggregate differences for some of the hi-freq atmosphere fields.
Do you have suggestions for what to look at and how big expected differences should be?
Is there any other info we want for the pinatubo intake-esm catalog?
{'component': 'atm',
 'stream': 'cam.h1',
 'case': 'b.e11.BRCP85C5CNBDRD_no_pinatubo.f09_g16.022',
 'frequency': 'daily',
 'ensemble_member': 22,
 'variable': 'QBOT',
 'start_time': '2006-01-01',
 'end_time': '2025-12-31',
 'long_name': 'Lowest model level water vapor mixing ratio',
 'units': 'kg/kg',
 'vertical_levels': 1,
 'path': '/glade/campaign/univ/udeo0005/cesmLE_no_pinatubo/atm/proc/tseries/daily/QBOT/b.e11.BRCP85C5CNBDRD_no_pinatubo.f09_g16.022.cam.h1.QBOT.20060101-20251231.nc'}
Should we change ensemble_member to member_id?
Yes, this change would be consistent with our other catalogs.
That catalog entry looks like a good start to me!
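For reference, here is a minimal sketch (not our actual tooling) of how entries like the one above could be collected into the catalog CSV that intake-esm reads and then queried. The file names cesmLE_no_pinatubo.csv and cesmLE_no_pinatubo.json are hypothetical, and only a few columns are shown:

# Hedged sketch: assemble catalog rows into a CSV for an intake-esm datastore.
# File names are hypothetical; member_id reflects the rename discussed above.
import pandas as pd

entries = [
    {
        "component": "atm",
        "stream": "cam.h1",
        "case": "b.e11.BRCP85C5CNBDRD_no_pinatubo.f09_g16.022",
        "frequency": "daily",
        "member_id": 22,
        "variable": "QBOT",
        # ... remaining columns (start_time, end_time, long_name, units,
        # vertical_levels, path) would be carried through the same way
    },
]
pd.DataFrame(entries).to_csv("cesmLE_no_pinatubo.csv", index=False)

# With a matching ESM-collection JSON pointing at that CSV, users could then do
# something along these lines:
#   import intake
#   cat = intake.open_esm_datastore("cesmLE_no_pinatubo.json")
#   subset = cat.search(variable="QBOT", frequency="daily")
#   dsets = subset.to_dataset_dict()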
@Matt Long Sorry for the delay - I was on vacation the last 2 weeks and am now trying to catch up :) Which machine were your re-run simulations done on? (Cheyenne?)
Also I don't have permission to access the data directory: /glade/campaign/univ/udeo0005/cesmLE_no_pinatubo
@Allison Baker I own the files in that directory, and it looks like everything is world-readable except maybe /glade/campaign/univ/udeo0005 itself?
$ ls -ld /glade/campaign/univ/udeo0005 /glade/campaign/univ/udeo0005/cesmLE_no_pinatubo
drwxrws--T+ 3 root  udeo0005 4096 Dec 16 21:49 /glade/campaign/univ/udeo0005
drwxr-sr-x+ 9 mlevy udeo0005 4096 Jan 11 14:21 /glade/campaign/univ/udeo0005/cesmLE_no_pinatubo
Unfortunately, that directory is owned by root. I'm not familiar with how to override parent directory permissions; it seems like users get the more restrictive of the parent directory / setfacl permissions? But we do want this output to be available to users beyond those directly involved in the project, so let me submit a CISL ticket to add +rX to that directory.
@Michael Levy I can't "cd" into the directory or "ls" the contents
I ran into this problem earlier when @Max Grover couldn't read the files -- in that case, the solution was to add him to the udeo0005 group, but I asked about the default permissions and Mick said:
CISL restricted write permissions on Campaign Storage directories last year after numerous complaints from users that their data had been accidentally deleted or corrupted by other users. I'll be glad to ask the storage admins to open read and/or write permissions on /glade/campaign/univ/udeo0005 if that's what you prefer
At the time I did not ask him to open read permission on the directory, but I did just submit an issue ticket for that. Once you get into that directory, you'll probably still have trouble with /glade/campaign/univ/udeo0005/cesmLE_no_pinatubo/atm/proc/tseries/hourly6, because that softlinks to /glade/campaign/cgd/oce/people/mlevy/pinatubo_spillover/hourly6, and there are similar problems with permissions on /glade/campaign/cgd/oce/people/. (@Matt Long, when you get a chance can you run chmod o+rX /glade/campaign/cgd/oce/people?)
@Michael Levy ok, thanks for doing this. Also, I assume you still have the run dirs somewhere so we know which compiler version, options, and flags you used (to compare with the original CESM-LE). One thing I would want to know is whether the new simulations used FMA or not....
it looks like the run dirs might have been scrubbed already, but the build logs got copied into the case root: /glade/work/mlevy/pinatubo_cases/b.e11.BRCP85C5CNBDRD_no_pinatubo.f09_g16.025/logs/bld
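If it helps, something like the sketch below could scan those build logs for FMA-related flags. The strings to search for are assumptions here and depend on which compiler was actually used; this is just an illustration, not what we actually ran:

# Hedged sketch: look through the copied build logs for FMA-related flags.
# The directory comes from the message above; the search strings are guesses.
from pathlib import Path

log_dir = Path("/glade/work/mlevy/pinatubo_cases/"
               "b.e11.BRCP85C5CNBDRD_no_pinatubo.f09_g16.025/logs/bld")
patterns = ["fma", "fp-model"]  # assumed strings of interest

for log in sorted(log_dir.glob("*")):
    if not log.is_file():
        continue
    text = log.read_text(errors="ignore")
    for pat in patterns:
        hits = [line for line in text.splitlines() if pat in line.lower()]
        if hits:
            print(f"{log.name}: {len(hits)} line(s) mentioning '{pat}'")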
I don't think I turned FMA off - not sure if I'm rooting for it to be off by default in the tag, or for an easy fix to the discrepancy...
Actually, we talked about FMA back in December, and at that time it looked like it was off by default: https://zulip2.cloud.ucar.edu/#narrow/stream/36-pinatubo-LE/topic/CESM-LE.20integration/near/22602
Right - now I remember having that discussion....
@Allison Baker I had forgotten about that discussion too, but your question about FMA rang a bell and I searched my email. I think that brought up the Google Calendar invite for a chat we had, then I went looking for other communication from around the same date :)
@Allison Baker I did hear back from CISL, and the output on campaign should be world-readable now. When you get a chance, can you please let me know whether you can access the time series output? The CAM 6-hourly output may still be off-limits, but that should also be available by early next week.
@Michael Levy I can get to the monthly and daily now - thanks!
So to clarify what I am looking at, it sounds like you all are concerned that the differences between this new data in year 1990 and CESM-LENS 1990 data are larger than you expected.
Also, which model components or variables are you most concerned about? @Amanda Fay do you have plots to share? I will start by looking at CAM, but any additional info would be helpful.
I'll forward you some of Amanda's slides, but looking through them the two monthly ocean fields were SST (I think you'll need to read TEMP and just look at the top level of output) and pCO2SURF. Once we get you access to the CAM 6-hourly data, she also includes plots from TS and PS.
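Concretely, "read TEMP and look at the top level" amounts to something like this xarray sketch; the file name is hypothetical and I'm assuming the POP vertical dimension is named z_t:

# Hedged sketch: treat the top model level of POP TEMP output as "SST".
# Hypothetical file name; z_t is assumed to be the POP depth dimension.
import xarray as xr

ds = xr.open_dataset("no_pinatubo_001_pop_TEMP_monthly.nc")  # hypothetical file
sst = ds["TEMP"].isel(z_t=0)          # top model level ~ sea surface temperature
sst_clim = sst.mean(dim="time")       # e.g., a time mean for quick comparison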
thanks Mike - there are some more plots in the ppt in the drive you have access to. But I looked at RMSE for 6-hourly TS and PS. We see that at time 0 it's near 0, but it escalates and then levels out after a few days.
to be clear, these are LENS vs. no-Pinatubo comparisons for 3 ensemble members
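For anyone revisiting this later, here is a rough sketch of that kind of global RMSE time series. It is not Amanda's actual script: the file names are hypothetical, and it assumes regular lat/lon CAM output:

# Hedged sketch: area-weighted global RMSE between two 6-hourly TS fields.
# Hypothetical file names; assumes matching time axes and lat/lon coordinates.
import numpy as np
import xarray as xr

lens = xr.open_dataset("lens_001_TS_6hourly.nc")            # hypothetical file
nopin = xr.open_dataset("no_pinatubo_001_TS_6hourly.nc")    # hypothetical file

diff = lens["TS"] - nopin["TS"]
weights = np.cos(np.deg2rad(lens["lat"]))                   # area weighting
rmse = np.sqrt((diff ** 2).weighted(weights).mean(dim=("lat", "lon")))

# rmse is a time series: near 0 at the initial time, growing over the first
# few days as the runs diverge, then leveling out.
print(rmse.isel(time=slice(0, 8)).values)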
I'm headed out on holiday tomorrow so I won't have time to look at this much until the end of June, but I'm eager to hear your thoughts @Allison Baker
Thanks @Amanda Fay - enjoy your vacation!
@Michael Levy I'm going to propose a google chat with you tomorrow, if that's ok, just to clarify a few things :)
@Allison Baker -- chatting tomorrow sounds good, my calendar is up to date (just very empty)
Hi All, I am coming back online here. Is there any update on the validation effort? Do we trust that the integrations are correct?
Welcome back. No results yet, but I am working on it. I did a bunch of runs over the weekend but now need to look at them. Basically I am using POP-ECT and generated an ensemble based on your no-pinatubo run 001 with double precision perturbations to get an idea of the spread, as compiler spread would be similar - then I'll see how the original CESM-LENS run figures into this (all for year 1990) and go from there. I'll let you know when I have something for you all.
Wow! Thanks @Allison Baker! This is great!
@Michael Levy @Matt Long
Good news: looks like the differences that you are seeing are reasonable for changing the compiler (and machine).
Summary: I used POP-ECT to test the original CESM-LENS member 001 (year 1990) run on Yellowstone against an ensemble that I constructed on Cheyenne (via temperature perturbations to the 001 run). (So this is sort of the reverse of what we did when Cheyenne was introduced; in that case the ensemble, or "control" case, was on Yellowstone.)
The POP-ECT issued a PASS based on calculating the Z-score (at each grid point) of the Yellowstone run compared to the ensemble (for TEMP, UVEL, VVEL, SSH, and SALT). The full details on how this test works are in: https://gmd.copernicus.org/articles/9/2391/2016/
The test is here: https://github.com/NCAR/PyCECT
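To give a feel for the per-gridpoint Z-score calculation described above, here is an illustrative numpy sketch. It is not the PyCECT code, just the basic idea of comparing one run against an ensemble:

# Illustrative sketch only (not PyCECT): Z-scores of a single run against an
# ensemble at each grid point.
import numpy as np

def gridpoint_zscores(ens, test, eps=1e-12):
    """ens: (n_members, nlat, nlon); test: (nlat, nlon)."""
    mean = ens.mean(axis=0)
    std = ens.std(axis=0, ddof=1)
    return (test - mean) / np.maximum(std, eps)

# e.g., flag grid points more than 3 ensemble standard deviations away:
#   z = gridpoint_zscores(ens_temp, yellowstone_temp)
#   outlier_fraction = np.mean(np.abs(z) > 3.0)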
Here is the 12 month POP-ECT result - this is very good:
Maybe more intuitive to look at are these two plots that I made for SST and SSH, which show that the Yellowstone run is well within the range of ensemble values. This is all good, as we have found that the variation attributable to an initial perturbation is similar to that induced by a compiler change.
pinatubo_ens_sst.png
pinatubo_ens_ssh.png
So I only investigated member 001, but I have no reason to think that the others would behave any differently. Let me know if you have questions, etc.
Thanks @Allison Baker!
I really really appreciate all your effort. This is a tremendous help!
@Amanda Fay, @Galen McKinley: Allison is the expert on this type of validation; based on her analysis, I think we can proceed with a warm fuzzy feeling that the runs have been conducted correctly. I am still surprised at the magnitude of the differences—but I don't really have any basis to expect the magnitude to be less than any particular value.