http://climate-cms.wikis.unsw.edu.au/api.php?action=feedcontributions&user=Hwolff&feedformat=atomclimate-cms wikis.unsw.edu.au - User contributions [en]2024-03-29T09:14:41ZUser contributionsMediaWiki 1.31.0http://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS-ESM_1.5&diff=13ACCESS-ESM 1.52018-08-28T04:33:25Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>Temporary page while I'm getting familiar with ACCESS-ESM 1.5.<br />
<br />
It will be converted into proper documentation later on.<br />
<br />
=Getting ACCESS-ESM1.5= <br />
<br />
There is some documentation on Google Drive, under CMS > Coupled Models.<br />
<br />
<syntaxhighlight><br />
git clone git@bitbucket.org:climate-cms/csiro-scripts.git<br />
cd csiro-scripts/original/build<br />
make<br />
</syntaxhighlight><br />
<br />
Turns out I needed access to the CABLE repo, see https://trac.nci.org.au/trac/cable/wiki/CableRegistration<br />
<br />
Other than that, the compile worked fine.<br />
<br />
===Running first test=== <br />
<br />
Copy the scripts <span style="font-family:monospace">PI-C2C-1p5r29*</span> from the <span style="font-family:monospace">original</span> directory to a new experiment directory.<br />
<br />
Rename the scripts with<br />
<br />
<syntaxhighlight><br />
rename PI-C2C-1p5r29 test1 PI-C2C-1p5r29*<br />
</syntaxhighlight><br />
<br />
Make sure that the correct project is set in <span style="font-family:monospace">test1</span>:<br />
<br />
<syntaxhighlight><br />
#PBS -P w35<br />
</syntaxhighlight><br />
<br />
Change initial date and final date in <span style="font-family:monospace">test1.init</span>:<br />
<br />
<syntaxhighlight><br />
#-- Initial and Final Date of the Experiment<br />
iniyear=1; finalyear=1; typeset -Z4 iniyear finalyear<br />
inimonth=1; finalmonth=1; typeset -Z2 inimonth finalmonth<br />
iniday=1; finalday=5; typeset -Z2 iniday finalday<br />
</syntaxhighlight><br />
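The <span style="font-family:monospace">typeset -Z</span> lines above are ksh's way of zero-padding the date components. As a rough illustration (not taken from the scripts themselves), the same fixed-width <span style="font-family:monospace">YYYYMMDD</span> string can be produced with POSIX printf:

```shell
# Illustration only: ksh's `typeset -Z4 iniyear` zero-pads the value to 4
# digits. POSIX printf gives the same fixed-width YYYYMMDD form:
iniyear=1; inimonth=1; iniday=1
printf '%04d%02d%02d\n' "$iniyear" "$inimonth" "$iniday"   # prints 00010101
```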
<br />
Then run<br />
<br />
<syntaxhighlight><br />
qsub test1<br />
</syntaxhighlight><br />
<br />
Link the <span style="font-family:monospace">build</span> directory to the experiment directory.<br />
<br />
====Issues==== <br />
<br />
The <span style="font-family:monospace">RUNID</span> of <span style="font-family:monospace">PI-C2C-1p5r29</span> was hardcoded in the script. I removed that hardcoded line and left the <span style="font-family:monospace">export RUNID=`basename $PBS_O_WORKDIR`</span> in, because the experiment directory has the same name as the scripts.<br />
<br />
It didn't find the build directory in the experiment directory. I linked the build directory there.<br />
<br />
The model failed asking for a restart dump that didn't exist. Deleting the <span style="font-family:monospace">test1.date</span> file solved that.<br />
<br />
It didn't find <span style="font-family:monospace">create_rankfile.py</span> -- copied from the <span style="font-family:monospace">original</span> directory. (Had to delete <span style="font-family:monospace">test1.date</span> again.)<br />
<br />
I also pre-emptively linked the <span style="font-family:monospace">runscripts</span> directory from <span style="font-family:monospace">original</span> to my experiment directory.<br />
<br />
==How ACCESS works== <br />
<br />
===<RUNID>=== <br />
<br />
The actual submit script does not have an extension, and is a <span style="font-family:monospace">ksh</span> script.<br />
It sets a whole lot of environment variables, most of them exported.<br />
<br />
It then sources, if present, <span style="font-family:monospace">runscripts/umprofile</span>, which also sets some variables; however, many of these variables don't seem to refer to a path that exists on <span style="font-family:monospace">raijin</span>.<br />
Then, if present, it runs <span style="font-family:monospace">runscripts/setglobalvars</span>, the contents of which also seem to be outdated.<br />
<br />
It sets a few more variables, then sources <span style="font-family:monospace"><RUNID>.init</span>.<br />
<br />
Many more variable declarations, then it sources <span style="font-family:monospace">UMScr_Toplevel</span>.<br />
<br />
Finally, it sources <span style="font-family:monospace"><RUNID>.fin</span><br />
<br />
===<RUNID>.init=== <br />
<br />
This script is *sourced* by <span style="font-family:monospace"><RUNID></span>, so it's still a ksh script.<br />
It begins with a function declaration, which sets certain ancillary files depending on the year.<br />
<br />
Then come many more variables, including:<br />
<br />
* <span style="font-family:monospace">CMIP5RUN</span>, which can be any of these: <span style="font-family:monospace">picontrolv,historical,pi4xCO2,pi1pcntCO2,rcp45,rcp85,rcp26</span>.<br />
* <span style="font-family:monospace">nproc_ice</span> (12 currently)<br />
* <span style="font-family:monospace">oce_nx</span> and <span style="font-family:monospace">oce_ny</span> for ocean decomposition (currently 12 and 4, respectively)<br />
* <span style="font-family:monospace">ntproc</span> as the total number of cores for this job (<span style="font-family:monospace">UM_NPES</span> + <span style="font-family:monospace">nproc_ice</span> + <span style="font-family:monospace">oce_nx * oce_ny</span>)<br />
* <span style="font-family:monospace">iniyear</span>, <span style="font-family:monospace">inimonth</span>, and <span style="font-family:monospace">iniday</span> as the initial date of the model, which is then compressed into <span style="font-family:monospace">inidate</span> (<span style="font-family:monospace">YYYYMMDD</span>)<br />
* <span style="font-family:monospace">finalyear</span>, <span style="font-family:monospace">finalmonth</span>, and <span style="font-family:monospace">finalday</span> as the final date of the model, compressed into <span style="font-family:monospace">finaldate</span>.<br />
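The core count adds up as a simple sum; a quick sketch with example numbers (<span style="font-family:monospace">UM_NPES</span>=32 is an assumed placeholder, the other values are the current ones listed above):

```shell
# Sketch of the ntproc arithmetic; UM_NPES=32 is an assumed placeholder,
# the other values are the current ones from the list above.
UM_NPES=32; nproc_ice=12; oce_nx=12; oce_ny=4
ntproc=$(( UM_NPES + nproc_ice + oce_nx * oce_ny ))
echo "$ntproc"   # prints 92 with these example values
```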
<br />
Then it looks for <span style="font-family:monospace"><RUNID>.date</span>:<br />
<br />
If it '''doesn't''' exist, then it assumes that it's a new run. It creates the file and dumps the initial dates in there.<br />
If the file '''does''' exist, it reads the date out of it (only last line).<br />
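A minimal sketch of that logic (the real script isn't reproduced on this page, so treat the details as assumptions):

```shell
# Hedged sketch of the <RUNID>.date handling described above.
RUNID=test1; inidate=00010101                # example values
datefile="${RUNID}.date"
if [ ! -f "$datefile" ]; then
    echo "$inidate" > "$datefile"            # new run: seed with the initial date
fi
curdate=$(tail -n 1 "$datefile")             # continuation: use only the last line
echo "$curdate"
```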
<br />
It then uses <span style="font-family:monospace">~access/bin/calendar_more</span> to calculate the times for this run (initial date, end date, first date of next run, et cetera).<br />
<br />
From this, it gets, amongst other things, the <span style="font-family:monospace">days_in_run</span>, which it then multiplies by 86,400 (secs per day) to get <span style="font-family:monospace">runtime</span>.<br />
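For example, with an assumed five-day chunk:

```shell
# runtime = days_in_run * 86400, as described above (values are examples)
days_in_run=5
runtime=$(( days_in_run * 86400 ))
echo "$runtime"   # prints 432000
```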
<br />
Next, the script creates its work and run directories, copying files there as needed. (Apparently, for historical runs, it also needs to change the dates in the UM files, changing the reference date from the 16th of the month to the first.)<br />
This is also where it uses the function it declared at the beginning.<br />
<br />
Finally, it makes several (currently obscure) changes to a lot of namelists, and then creates the <span style="font-family:monospace">ACCESSRUNCMD</span>.<br />
<br />
And it loads new modules, replacing others if they had been loaded before.<br />
<br />
===<RUNID>.fin=== <br />
<br />
It checks whether the variable <span style="font-family:monospace">FCODE</span> is 0 (presumably the return code of the model run?); only then will it do anything.<br />
<br />
It calculates the name of the restart file, <span style="font-family:monospace">restartfile="aiihca.da${umdate}"</span>, where <span style="font-family:monospace">umdate</span> has been created by <span style="font-family:monospace">datetoum</span> or <span style="font-family:monospace">datetoum2</span>.<br />
<br />
Next, it moves what I think are the coupling restart files, <span style="font-family:monospace">${cplrundir}/?2?.nc</span>, to the archive <span style="font-family:monospace">${archivedir}/restart/cpl/$resfile-${enddate}</span><br />
<br />
Then, it accesses <span style="font-family:monospace">$atmrundir</span>: it validates the date of the restart file and moves it to <span style="font-family:monospace">${archivedir}/restart/atm/${restartarch}</span> (<span style="font-family:monospace">restartarch</span> had been set to <span style="font-family:monospace">"${RUNID}.astart-${nextdate}"</span>).<br />
<br />
Next, it moves all UM output files (<span style="font-family:monospace">aiihca.p*</span>) to the archive dir, renaming them in the process.<br />
<br />
==Modifications== <br />
<br />
Created a new branch: <span style="font-family:monospace">holger_testing</span><br />
<br />
===MOM5 Version Control=== <br />
<br />
MOM5 is under version control: <span style="font-family:monospace">https://github.com/OceansAus/ACCESS-ESM1.5-MOM5.git</span> - this was added to the <span style="font-family:monospace">Makefile</span>.<br />
<br />
===ummodel_hg3=== <br />
<br />
<span style="font-family:monospace">ummodel_hg3</span> was still copied from <span style="font-family:monospace">/short/p66/txz599/ACCESSHOME/submodels/UM/ummodel_hg3/</span>. That directory itself is under version control, but with some changes not checked in.<br />
<br />
===umbase_hg3=== <br />
<br />
<span style="font-family:monospace">umbase_hg3</span> was still copied, but there weren't any interesting changes in the directory compared to the repository. (Only <span style="font-family:monospace">fcm_env.sh</span> and <span style="font-family:monospace">parsed_bld.cfg</span>.)<br />
So I've changed it to point to the svn repo.<br />
<br />
===bld-hadgem3-mct.cfg===<br />
<br />
The file <span style="font-family:monospace">bld-hadgem3-mct.cfg</span> was missing from the <span style="font-family:monospace">ummodel_hg3</span> repo. For now, I've made it part of the access-esm repo, and added a line in the <span style="font-family:monospace">Makefile</span> to copy it over.</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=Um-errors&diff=366Um-errors2018-03-16T03:44:06Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>=Typical UM Errors= <br />
<br />
''Note: This page is still a stub. Please add to this page.''<br />
<br />
==Errors during Extract== <br />
<br />
When compiling the UM, the first step is for the UMUI(x) to extract the source code. If an error occurs, check for these things:<br />
<br />
Step 1: Did the error occur during the base, model, or reconfiguration extraction?<br />
Step 2: Check the extraction log; it is either in<br />
<syntaxhighlight><br />
~/UM_OUTPUT/<JOBID>/<umbase or ummodel or umrecon>/ext.out<br />
</syntaxhighlight><br />
or<br />
<syntaxhighlight><br />
/scratch/users/$USER/<JOBID>/<umbase or ummodel or umrecon>/ext.out<br />
</syntaxhighlight><br />
<br />
The final lines typically show at which command the error occurred.<br />
<br />
===SVN errors=== <br />
<br />
If you get a permission denied error from an SVN, it might be because svn needed your password, and didn't get it. The solution is to run<br />
<br />
<syntaxhighlight><br />
$ svn ls https://access-svn.nci.org.au/svn/um/branches/<br />
</syntaxhighlight><br />
<br />
If you get asked for a password, you need to supply your NCI password. And then it will ask you whether it should store it in plain text.<br />
As much as I hate to say it: you need to answer 'yes'.<br />
<br />
And immediately afterwards, you have to run the command:<br />
<br />
<syntaxhighlight><br />
$ chmod -R go-rwx $HOME/.subversion<br />
</syntaxhighlight><br />
<br />
===SCP and RSYNC errors=== <br />
<br />
If you get an error like this:<br />
<br />
<syntaxhighlight><br />
mkdir: cannot create directory `/abc123': Permission denied<br />
</syntaxhighlight><br />
<br />
(where abc123 is your username), the most likely explanation is that you haven't set the DATAOUTPUT environment variable on accessdev.<br />
If you are using bash, add this line to your $HOME/.bashrc file:<br />
<br />
<syntaxhighlight><br />
export DATAOUTPUT=/short/${PROJECT}/${USER}/UM_ROUTDIR<br />
</syntaxhighlight><br />
<br />
If you are using tcsh or csh, add this line to your $HOME/.login<br />
<br />
<syntaxhighlight><br />
setenv DATAOUTPUT /short/${PROJECT}/${USER}/UM_ROUTDIR<br />
</syntaxhighlight><br />
<br />
Then log out of accessdev and log back in, open up umuix again, and check whether the error went away.<br />
<br />
==Errors during Submit== <br />
<br />
==Errors during Compile== <br />
<br />
==Errors during Run== <br />
<br />
===<span id="pp-headers"></span>REPLANCA: PP HEADERS ON ANCILLARY FILE DO NOT MATCH=== <br />
<br />
The model is trying to update an ancillary field, but it is looking for a date that is past the end of the ancillary file. For instance, say an ancillary file is valid for dates between 1850-01-01 and 2001-01-01 and the model is trying to update fields for the date 2001-02-01. The model can't find this data in the file, so it crashes.<br />
<br />
Near the end of the .leave file there will be a block that looks like<br />
<br />
<syntaxhighlight><br />
STASH code in dataset 122 STASH code requested<br />
58<br />
'Start' position of lookup tables for dataset in overall lookup array<br />
481<br />
122 58 39<br />
UP_ANCIL : Error in REPLANCA.<br />
<br />
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''*<br />
UM ERROR (Model aborting) :<br />
Routine generating error: UP_ANCIL<br />
Error code: 239<br />
Error message:<br />
REPLANCA: PP HEADERS ON ANCILLARY FILE DO NOT MATCH<br />
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''*<br />
</syntaxhighlight><br />
<br />
The important bit here is '''STASH code requested''', which in this case was 58. Internally the UM refers to each model field (U, V, temperature, etc.) by a numeric STASH code. You can check which field a code represents by going to [[https://accessdev.nci.org.au/umdocs/7.3/stash_browse]]. The STASH codes are divided into different sections; for errors like this you'll find the variable in section '''0) Prognostic Variables'''. Scroll down the item list to find the name for the error code - here, code 58 corresponds to SULPHUR DIOXIDE EMISSIONS.<br />
<br />
To find out which file this corresponds to you'll need to go back to the UMUI, and check the screen 'Atmosphere -> Ancillary and Input data files -> Climatologies'. It should be fairly simple to work out the entry to use (in our example you'd look at 'Sulphur-Cycle Emissions') but if you have trouble check with the helpdesk. You'll need to replace the file here with one that covers the next part of your model run.<br />
<br />
It's common for this error to occur when running an AMIP-type simulation and going past 2001, where the data changes from Historical to the RCP scenarios. If this is the case you'll need to change Ozone (files starting with SPARCO3), 2d Sulphur Cycle (sulp), Soot (BC_hi), Biomass (Bio), and OCFF (OCFF). You can find ancillary files for the different scenarios under /projects/access/data/ancil/CMIP5, or ask the CMS team at [mailto:climate_help@nf.nci.org.au climate_help@nf.nci.org.au] and we'll help you locate the right ancillary file.<br />
<br />
===<span id="nlookup"></span>INANCILA: Insufficient space for LOOKUP headers=== <br />
<br />
This section is currently being tested. Ask Holger if you still see this message after 1/6/2018.<br />
<br />
Error message is something along these lines:<br />
<br />
<syntaxhighlight><br />
Field: 128 OCFFEMIS<br />
Opening Anc File: /projects/access/data/ancil/CMIP5/OCFF_RCP85_1850_2100.N96<br />
No room in LOOKUP table for Ancillary File 47<br />
INANCCTL: Error return from INANCILA 14<br />
INANCILA: Insufficient space for LOOKUP headers <br />
<br />
Failure in call to INANCCTL<br />
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''*<br />
UM ERROR (Model aborting) :<br />
Routine generating error: INITIAL<br />
Error code: 14<br />
Error message:<br />
INANCILA: Insufficient space for LOOKUP headers<br />
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''*<br />
</syntaxhighlight><br />
<br />
The UM needs an initial guess on how big the ancillary files are. It needs to reserve a little bit of memory for every level at every time step of every field. This number can be set at<br />
<br />
Model Selection -> Atmosphere -> Ancillary and input data files -> In file related options >> Header record sizes.<br />
<br />
===DRLANDF1 : Error in FILE_OPEN=== <br />
<br />
Symptom: The UM aborts with the error message:<br />
<syntaxhighlight><br />
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''*<br />
UM ERROR (Model aborting) :<br />
Routine generating error: UM_SHELL<br />
Error code: 1<br />
Error message:<br />
DRLANDF1 : Error in FILE_OPEN.<br />
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''*<br />
</syntaxhighlight><br />
<br />
This indicates that the start dump (NRUN) or restart dump (CRUN) could not be opened. Search the leave file for the string "Unit 21". You should find a line like this:<br />
<br />
<syntaxhighlight><br />
OPEN: File <somefile> to be Opened on Unit 21 does not Exist<br />
</syntaxhighlight><br />
<br />
<span style="font-family:monospace"><somefile></span> might be empty, in which case there will be 2 spaces between "<span style="font-family:monospace">File</span>" and "<span style="font-family:monospace">to be Opened</span>".<br />
Otherwise, check that the file exists and that you have read permissions.<br />
<br />
If this happens during an NRUN, check the settings in<br />
'''Model Selection''' -> '''Atmosphere''' -> '''Ancillary and input data files''' -> '''Start dump'''<br />
Check the settings under "Specify initial dump"<br />
<br />
If the file name is an empty string, and you are running a CRUN, open<br />
'''Model Selection''' -> '''Atmosphere''' -> '''Control''' -> '''Post processing, Dumping & Meaning''' -> '''Dumping and meaning'''<br />
and ensure that the restart dump frequency is consistent with your per-submission run length.<br />
(That is, if your run length is 1 year, you may have monthly or yearly restart dumps, but not 5-yearly restart dumps.)<br />
<br />
===Unable to WGDOS pack to this accuracy=== <br />
<br />
Symptom: The UM aborts with the error message:<br />
<syntaxhighlight><br />
????????????????????????????????????????????????????????????????????????????????<br />
???!!!???!!!???!!!???!!!???!!! ERROR ???!!!???!!!???!!!???!!!???!!!<br />
? Error code: 2<br />
? Error from routine: COEX (cmps_all)<br />
? Error message: Unable to WGDOS pack to this accuracy<br />
? Error from processor: 0<br />
? Error number: 85<br />
????????????????????????????????????????????????????????????????????????????????<br />
</syntaxhighlight><br />
<br />
There has been an error compressing one of the output files. Check the model output streams and 'Dumping and Meaning' sections, setting the 'packing profile' to 0 to turn off compression.</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=Fire_Model&diff=144Fire Model2018-02-12T05:07:08Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>=How to run the Fire Model= <br />
<br />
This page is in development. Chermelle and I are trying to get a guide ready on how to run the fire model.<br />
<br />
==Overview== <br />
<br />
At its core, the fire model is a nested UM8.5 model with 5 layers, in which a fire model is coupled to JULES for the 2 highest-resolution runs.<br />
<br />
===Nesting=== <br />
<br />
To start, the global model is run from 10am the day before the fire until midnight after the fire.<br />
<br />
The output of one run is used to create the initial and boundary conditions for the next higher-resolution nest. There is no back-coupling to the lower resolution models, so each nest can be run independently of the higher-resolution nests.<br />
<br />
The following is an example for the nesting for the Black Saturday Fires:<br />
<br />
{| <br />
! Area !! Resolution !! Start <br />
|-<br />
| Global || || 10am the day before <br />
|-<br />
| Eastern Australia || 4km || 12 noon the day before <br />
|-<br />
| || 1.5km || 2pm the day before <br />
|-<br />
| || 444m || midnight before <br />
|-<br />
| || 144m || midnight <br />
|-<br />
|}<br />
<br />
[[File:black_saturday_fire_nesting.png|800x376px|Nesting]]<br />
<br />
===Coupled/Uncoupled=== <br />
<br />
Coupled in reference to the fire model means whether the fire model feeds back into the atmospheric model (i.e. temperature, moisture) or not. This is set by a flag in the <span style="font-family:monospace">fire.inp</span> file.<br />
<br />
===Model Run Sequence=== <br />
<br />
Black Saturday: Fire start is hard coded as 12 noon 7/2/2009<br />
<br />
{| <br />
! Step !! Map !! fire code !! start !! finish <br />
|-<br />
| 1 || Global || no || 2009-02-06_1000 || 2009-02-08_0000 <br />
|-<br />
| 2 || 4km || no || 2009-02-06_1200 || 2009-02-08_0000 <br />
|-<br />
| 3 || 1.5km || no || 2009-02-06_1400 || 2009-02-08_0000 <br />
|-<br />
| 4 || 444m || no || 2009-02-07_0000 || 2009-02-08_0000 <br />
|-<br />
| 5 || 444m || yes || 2009-02-07_0800 || 2009-02-08_0000 <br />
|-<br />
| 6 || 144m || no || 2009-02-07_0000 || 2009-02-08_0800 <br />
|-<br />
| 7 || 144m || yes || 2009-02-07_0800 || 2009-02-08_0000 <br />
|-<br />
|}<br />
<br />
Note that the 444m nest runs through till the end of the day without fire code to generate boundary conditions for the 144m nest.<br />
The 144m nest without fire only needs to run till 8am, since then the fire-144m will take over.<br />
<br />
===How to run the model===<br />
Apparently CRUNs don't work with the fire model executable. Also, the model run time varies widely (probably depending on the scale of the fire?), so there's no good way to assess how long a model step will take to complete.<br />
<br />
The model is run manually in this way: Submit an NRUN for the complete time frame and request the maximum walltime. The model will probably fail, either because the LBC files run out or because the walltime expires.<br />
Set up a new model run, referencing the last restart dump as the start dump (and adjusting the LBCs as required), and submit it as another NRUN. Rinse and repeat.<br />
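A hypothetical helper for that cycle: after a failed NRUN, find the newest restart dump to point the next submission at. The filename pattern here is a placeholder, not taken from the suite:

```shell
# Hypothetical: pick the most recently written restart dump as the next
# start dump. The *.da* pattern is a placeholder, not from the suite.
latest_dump=$(ls -t *.da* 2>/dev/null | head -n 1)
echo "next start dump: $latest_dump"
```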
<br />
=Glossary=<br />
<br />
{| <br />
| BLST || Black Saturday || 7/2/2009 - Day of devastating bushfires in Victoria <br />
|-<br />
| SRTM || [https://www2.jpl.nasa.gov/srtm/ Shuttle Radar Topography Mission] || Source of detailed topographic data <br />
|-<br />
|}<br />
<br />
=Notes= <br />
<br />
* Copied <span style="font-family:monospace">varu.s</span> into <span style="font-family:monospace">vatw.a</span> and converted it into a compile job. (was later deleted)<br />
* Several handedits and user stashmaster files were in <span style="font-family:monospace">~cbe563/um_nesting</span> which was not readable to me. Chermelle changed that for me.<br />
* Extract failed because FCM Extract Directory was set to <span style="font-family:monospace">$LOCALDATA</span>, which I hadn't set. Fixed that to <span style="font-family:monospace">/scratch/users/$USER</span><br />
* Extract failed because <span style="font-family:monospace">UM_SVN_URL</span> is set to <span style="font-family:monospace">svn://fcm2/UM_svn/UM/trunk</span>. I changed that to <span style="font-family:monospace">fcm:um_tr</span>, and I hope that's okay.<br />
* scp UMATMOS failed, because it was still trying to use username <span style="font-family:monospace">cbe563</span>. The user details were introduced in lots of handedit files; I made considerable changes to them.<br />
* scp UMATMOS failed, as <span style="font-family:monospace">UM_ROUTDIR</span> was set to <span style="font-family:monospace">/working/nwp/...</span> -- changed to <span style="font-family:monospace">/short/$PROJECT/$USER/working/...</span><br />
* Extract of JULES failed: <span style="font-family:monospace">!https://access-svn.nci.org.au/svn/jules/trunk@um8.4: revision keyword not defined</span><br />
<br />
I'm giving up on trying to compile the model, I'll start again, this time just executing.<br />
<br />
* Trying again: Copy <span style="font-family:monospace">varu.s</span> to <span style="font-family:monospace">vatw.b</span>.<br />
* Copy Chermelle's <span style="font-family:monospace">~/um_nesting/vn8.5</span> directory to my own directory.<br />
* Changed values in <span style="font-family:monospace">~/um_nesting/vn8.5/user_details</span> and <span style="font-family:monospace">~/um_nesting/vn8.5/vanc/expt_details</span> to point to relevant directories, userids, et cetera. Note: Apparently it only works when hard coded.<br />
* Links in <span style="font-family:monospace">~/um_nesting/vn8.5</span>: <span style="font-family:monospace">exptid -> vatw</span> and <span style="font-family:monospace">vatw -> vanc</span> (apparently it doesn't work any other way.)<br />
<br />
===Failures=== <br />
<br />
* <span style="font-family:monospace">UM Executable : /short/v46/cbe563/bin/vasfv.exec</span> not readable<br />
* <span style="font-family:monospace">dd: opening `/short/w35/hxw599/fire/BLST/vatw//vatwb/vatwb.out2': No such file or directory</span><br />
<br />
===Compile Executable=== <br />
<br />
* Copied <span style="font-family:monospace">vasf.v</span> to <span style="font-family:monospace">vatw.a</span>.<br />
* Copied Chermelle's JULES changes to <span style="font-family:monospace">~hxw599/jules/src/jules/src</span></div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=PReVIOuS&diff=277PReVIOuS2017-09-19T07:00:49Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>This is a logbook about our attempts to run the PReVIOuS comparison tools.<br />
<br />
==UKMO Documentation== <br />
<br />
https://code.metoffice.gov.uk/doc/previous/latest/introduction.html<br />
<br />
==Our Rose== <br />
<br />
<span style="font-family:monospace">u-ao817</span><br />
<br />
==Status== <br />
<br />
In <span style="font-family:monospace">jinja2</span>, one can set the variable <span style="font-family:monospace">ENSEMBLE_SIZE</span> to decide how many versions should be run.<br />
<br />
{| <br />
! PART !! WORKS !! ISSUES <br />
|-<br />
| reconf || yes || <br />
|-<br />
| perturb || yes || <br />
|-<br />
| atmos || yes || not all output fields present <br />
|-<br />
|}<br />
<br />
===Output Fields=== <br />
<br />
{| <br />
! stream !! requested !! present !! reason !! STASH code <br />
|-<br />
| pa || U component of wind after timestep || U COMPNT OF WIND AFTER TIMESTEP || || (0, 2) <br />
|-<br />
| pa || V component of wind after timestep || V COMPNT OF WIND AFTER TIMESTEP || || (0, 3) <br />
|-<br />
| pa || Theta after timestep || THETA AFTER TIMESTEP || || (0, 4) <br />
|-<br />
| pa || QFC after timestep || QFC AFTER TIMESTEP || || (0, 12) <br />
|-<br />
| pa || Advected U component of wind after timestep || || not available in this model || (0, 256) <br />
|-<br />
| pa || Advected V component of wind after timestep || || not available in this model || (0, 257) <br />
|-<br />
| pa || Advected W component of wind after timestep || || not available in this model || (0, 258) <br />
|-<br />
| pa || Exner pressure at theta levels || EXNER PRESSURE AT THETA LEVELS || || (0, 406) <br />
|-<br />
| pa || Pressure at Rho levels after timestep || PRESSURE AT RHO LEVELS AFTER TS || || (0, 407) <br />
|-<br />
| pa || Pressure at Theta levels after timestep || PRESSURE AT THETA LEVELS AFTER TS || || (0, 408) <br />
|-<br />
| pa || || SPECIFIC HUMIDITY AFTER TIMESTEP || || (0, 10) <br />
|-<br />
| pb || Surface temperature after timestep || || || (0, 24) <br />
|-<br />
| pb || Surface pressure after timestep || || || (0, 409) <br />
|-<br />
| pb || Surface sensible heat flux || || || (3, 217) <br />
|-<br />
| pb || Surface total moisture flux || || || (3, 223) <br />
|-<br />
| pb || Surface latent heat flux || || || (3, 234) <br />
|-<br />
| pb || Deep soil temperature after Boundary Layer || || || (3, 238) <br />
|-<br />
| pb || Surface net Short Wave Radiation on tiles || || || (3, 382) <br />
|-<br />
| pb || Surface up Long Wave Radiation on tiles || || || (3, 383) <br />
|-<br />
| pb || Surface down Long Wave Radiation on tiles || || || (3, 384) <br />
|-<br />
| pb || Meridional Momentum flux || || || (30, 316) <br />
|-<br />
|}<br />
<br />
The <span style="font-family:monospace">STASHexport_GA7_AMIP.ini</span> file from the PReVIOuS utils directory declares a File Output Stream with the ending <span style="font-family:monospace">.pb</span> as <span style="font-family:monospace">[namelist:nlstcall_pp(1)]</span> but never references it. We have re-enabled the following usage profile:<br />
<br />
<syntaxhighlight><br />
[namelist:use(upb_d50be24f)]<br />
file_id='pp1'<br />
locn=3<br />
!!macrotag=0<br />
use_name='UPB'<br />
</syntaxhighlight><br />
<br />
which is now pointing to the output stream.<br />
<br />
This allows us to re-enable the entry for (0, 409) <span style="font-family:monospace">SURFACE PRESSURE AFTER TIMESTEP</span><br />
<br />
==Update 14/08/2017== <br />
<br />
I have disabled all old STASH requests, as well as all Domain, Time, and Use profiles.<br />
<br />
Then I added the fields as described above, as well as the <span style="font-family:monospace">UPUKCA</span> fields from the downloaded <span style="font-family:monospace">.ini</span> file (and only those profiles that were mentioned in at least one STASH request).<br />
<br />
This has worked well, except for the three 'advected' fields described above, which have the STASHMaster_A flag <span style="font-family:monospace">n29 = 2</span>, which according to the documentation [[https://code.metoffice.gov.uk/doc/um/vn10.6/papers/umdp_C04.pdf]] means "Excluded if Endgame grid-staggering is being used."<br />
<br />
=The actual comparison= <br />
<br />
==PReVIOuS== <br />
<br />
I've checked out Martin Dix's <span style="font-family:monospace">u-ao414</span> version, as it claimed to be the "NCI version", though the svn log suggests that the changes he has made so far are minimal.<br />
<br />
I've added <span style="font-family:monospace">site/nci_raijin.rc</span> and enabled that as an option in the <span style="font-family:monospace">Machine Options</span>.<br />
<br />
===Build Resources=== <br />
<br />
The job <span style="font-family:monospace">fcm_make</span> fails to run on <span style="font-family:monospace">raijin</span>, as it wants to check out data from the Met Office's repository, which would require <span style="font-family:monospace">mosrs-auth</span>. I've disabled the inheritance of <span style="font-family:monospace">RHOST</span> for <span style="font-family:monospace">fcm_make</span>, so it now runs on accessdev. It's practically an svn checkout only.<br />
<br />
I then manually copied all files to <span style="font-family:monospace">raijin</span><br />
<br />
==Martin's Version== <br />
<br />
When I contacted Martin, he noticed that he had made a few changes to his version of PReVIOuS, and checked them in, but not before I had checked out the original version.<br />
<br />
So I've updated my version, and looked at his changes.<br />
<br />
===keyword.cfg=== <br />
<br />
I needed to copy his <span style="font-family:monospace">~mrd599/.metomi/fcm/keyword.cfg</span> to my directory.<br />
<br />
===Metadata=== <br />
<br />
Martin hadn't bothered to allow for <span style="font-family:monospace">nci_raijin</span> to be an admissible Machine Option, so I kept the changes there.<br />
<br />
===Other configurations=== <br />
<br />
I changed the experiment and control runs both to <span style="font-family:monospace">u-ao817</span>, meaning it should compare the data to itself.<br />
<br />
==Running== <br />
<br />
I'm running the system, but it fails in <span style="font-family:monospace">kstest</span> (both <span style="font-family:monospace">pa</span> and <span style="font-family:monospace">pb</span>).<br />
<br />
More careful analysis reveals that the pickle files are annoyingly small (157 bytes) and probably contain nothing of value, so the real culprit is probably the <span style="font-family:monospace">load_data.py</span>.<br />
<br />
It seems that the <span style="font-family:monospace">loaddata.py</span> script was looking for the data in <span style="font-family:monospace">/short/w35/hxw599/previous/ao-817/</span> -- which didn't exist. So I've created a link from <span style="font-family:monospace">/short/w35/hxw599/cylc-run/u-ao817/share/data/History_Data</span> to see what happens then.<br />
<br />
The script was falling over the <span style="font-family:monospace">seeddata</span> directory, so instead I'm linking only the <span style="font-family:monospace">ensemble_*</span> subdirectories.<br />
<br />
This works for <span style="font-family:monospace">pa</span>, but now load data fails for <span style="font-family:monospace">pb</span>.<br />
<br />
The error seems to lie with the Meridional Momentum flux, which doesn't have a longitude dimension (or, more precisely, has one with length 1).<br />
<br />
For now I've used<br />
<br />
<syntaxhighlight><br />
$ mule-select <infile> <outfile> --exclude lbuser4=30316<br />
</syntaxhighlight><br />
<br />
to remove that field from the <span style="font-family:monospace">pb</span> streams. I've also disabled the STASH request in <span style="font-family:monospace">u-ao817</span>.<br />
<br />
===Publish=== <br />
<br />
The <span style="font-family:monospace">publish.py</span> script requires either a <span style="font-family:monospace">WEBDIR</span> variable to be set (which at the moment it isn't), or one of the two directories <span style="font-family:monospace">~/public_html</span> or <span style="font-family:monospace">~/Public</span> to place the data in.<br />
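A minimal way to satisfy that, assuming the script honours <span style="font-family:monospace">WEBDIR</span> as described:<br />
<br />
```shell
# Either set WEBDIR explicitly (any writable directory works), or create
# one of the fallbacks the script checks: ~/public_html or ~/Public.
export WEBDIR="${WEBDIR:-$HOME/public_html/previous}"
mkdir -p "$WEBDIR"
echo "$WEBDIR"
```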
<br />
==Results from Comparison==<br />
<br />
We've run the rose suite <span style="font-family:monospace">u-ao817</span> both on raijin and at the UKMO (10 ensembles each), and the results are not encouraging:<br />
<br />
* 1 day run: [https://accessdev.nci.org.au/~hxw599/previous/ao817-ukmo_vs_ao817/]<br />
* 2h45 run: [https://accessdev.nci.org.au/~hxw599/previous/ao817_ukmo_vs_css/]<br />
<br />
To test whether it's an issue with PReVIOuS or the job itself, we split our own run into ensembles 0-4 and 5-9, and compared these results: [https://accessdev.nci.org.au/~hxw599/previous/ao817_0-4_vs_5-9/]<br />
<br />
This suggests that the PReVIOuS tool works fine; the issue lies with the run itself.<br />
<br />
The most likely explanation is that we are not using the same ancillary files as the UKMO, so we're getting a list of all ancillary files, creating md5 hashes, and comparing those to the md5 hashes that João uses.<br />
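The hash comparison can be sketched like this; the demo file is a stand-in for a real ancillary, since the actual list comes from the run's namelists:<br />
<br />
```shell
# Build a sorted "md5  path" listing that can be diffed against the
# UKMO side. The demo file below stands in for a real ancillary file.
list=$(mktemp)
demo=$(mktemp)
printf 'hello\n' > "$demo"
echo "$demo" > "$list"

# Hash every file named in the list, skipping any that are missing
while read -r f; do
    [ -f "$f" ] && md5sum "$f"
done < "$list" | sort > hashes.txt
cat hashes.txt
```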
<br />
Found these files:<br />
<br />
{| <br />
! file !! md5(raijin) !! md5(UKMO) <br />
|-<br />
| <span style="font-family:monospace">/g/data1/w35/saw562/HighResMIP/data/d05/hadom/ancil/atmos/n216e_highresmip/orca025/seaice/hadisst2_025/1948-2015/v1/qrclim.seaice</span> || <br />
|-<br />
| <span style="font-family:monospace">/g/data1/w35/saw562/HighResMIP/data/d05/hadom/ancil/atmos/n216e_highresmip/orca025/sst/hadisst2_025/1948-2015/v1/qrclim.sst</span> || <br />
|-<br />
| <span style="font-family:monospace">/g/data1/w35/saw562/HighResMIP/data/d05/hadom/ancil/atmos/n216e_highresmip/ozone/1850-2014/v1/qrclim.ozone_L85_O85</span> || <br />
|-<br />
| <span style="font-family:monospace">/g/data1/w35/saw562/HighResMIP/data/d05/hadom/ancil_creation/soil_moisture/soil_moisture_ac035_to_ab680_usetemplate.anc</span> || <br />
|-<br />
| <span style="font-family:monospace">/g/data1/w35/saw562/HighResMIP/data/d05/hadom/startdumps/era20c_1950010100.grib</span> || <br />
|-<br />
| <span style="font-family:monospace">/projects/access/umdir/ancil/atmos/n216e/orca025/general_sea/GlobColour/v1/qrclim.sea</span> || <br />
|-<br />
| <span style="font-family:monospace">/projects/access/umdir/ancil/atmos/n216e/orca025/hydrol_lsh/hydrosheds/v1/qrparm.hydtop</span> || <br />
|-<br />
| <span style="font-family:monospace">/projects/access/umdir/ancil/atmos/n216e/orca025/land_sea_mask/etop01/v2/qrparm.landfrac</span> || <br />
|-<br />
| <span style="font-family:monospace">/projects/access/umdir/ancil/atmos/n216e/orca025/land_sea_mask/etop01/v2/qrparm.mask</span> || <br />
|-<br />
| <span style="font-family:monospace">/projects/access/umdir/ancil/atmos/n216e/orca025/orography/globe30/v7/qrparm.orog</span> || <br />
|-<br />
| <span style="font-family:monospace">/projects/access/umdir/ancil/atmos/n216e/orca025/rivers_trip/sequence/etopo5/v4/qrparm.rivseq</span> || <br />
|-<br />
| <span style="font-family:monospace">/projects/access/umdir/ancil/atmos/n216e/orca025/rivers_trip/storage/fekete/v3/qrclim.rivstor</span> || <br />
|-<br />
| <span style="font-family:monospace">/projects/access/umdir/ancil/atmos/n216e/orca025/soil_parameters/hwsd_vg/v4/qrparm.soil</span> || <br />
|-<br />
| <span style="font-family:monospace">/projects/access/umdir/ancil/atmos/n216e/orca025/vegetation/fractions_igbp/v4/qrparm.veg.frac</span> || <br />
|-<br />
| <span style="font-family:monospace">/projects/access/umdir/ancil/atmos/n216e/orca025/vegetation/func_type_modis/v4/qrparm.veg.func</span> || <br />
|-<br />
|}</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=TransposeAMIP&diff=334TransposeAMIP2017-08-09T05:57:37Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>This is a log file to note the progress towards a GA7.1 N216 transpose AMIP run.<br />
<br />
==JOB ID== <br />
<br />
<span style="font-family:monospace">u-ao946</span><br />
<br />
==Based on== <br />
<br />
<span style="font-family:monospace">u-an493</span> -- the High Resolution MIP run<br />
<br />
==Basic Idea== <br />
<br />
Set <span style="font-family:monospace">ainitial</span> to an auto-generated file name depending on the start date.<br />
<br />
==Initial attempt== <br />
<br />
In the <span style="font-family:monospace">suite.rc</span>, in <span style="font-family:monospace"><nowiki>[runtime] [[root]] [[environment]]</nowiki></span>:<br />
<br />
Create a new variable called <span style="font-family:monospace">TAMIP_TIME</span>, and set it to <span style="font-family:monospace">rose date -c -f "%Y-%m-%d-h%H"</span><br />
<br />
<span style="font-family:monospace">-c</span> should take the start date, and the format is the same as the one used to distinguish between files in the <span style="font-family:monospace">TAMIP</span> directory <span style="font-family:monospace">/projects/access/AMEL/TransposeAMIPII/ga6/ga6ic/</span><br />
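A sketch of what this looks like in <span style="font-family:monospace">suite.rc</span> (sections as named above; exact quoting may differ):<br />
<br />
```
[runtime]
    [[root]]
        [[[environment]]]
            # cycle start date, formatted like the TAMIP file names
            TAMIP_TIME = $(rose date -c -f "%Y-%m-%d-h%H")
```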
<br />
===Result=== <br />
<br />
<span style="font-family:monospace">ainitial</span> was overwritten by <span style="font-family:monospace">spinup_1950</span>.<br />
<br />
==Second attempt== <br />
<br />
I've removed both <span style="font-family:monospace">spinup_1950</span> and <span style="font-family:monospace">n216_1950</span> from the optional parameters.<br />
<br />
===Result=== <br />
<br />
<syntaxhighlight><br />
[FAIL] file:/home/599/hxw599/cylc-run/u-ao946/share/data/etc/um_ancils_gl=source=<br />
/g/data1/w35/saw562/HighResMIP/data/d05/hadom/ancil/data/ancil_versions/n96e_orca025/GA7.1_AMIP/v2/ancils: bad or missing value<br />
</syntaxhighlight><br />
<br />
The <span style="font-family:monospace">n216_1950</span> mode just points to the correct <span style="font-family:monospace">ancil.nci</span> file on <span style="font-family:monospace">raijin</span>, so I've put that back in.<br />
<br />
This time it ran for a long time, probably because the input files in <span style="font-family:monospace">/projects/access/AMEL/TransposeAMIPII/ga6/ga6ic/ga6tamip2008-10-25-h00</span> were in N96 resolution. They are also in UM format, not GRIB.<br />
<br />
I have now created a new <span style="font-family:monospace">jinja2</span> variable with the name <span style="font-family:monospace">TAMIP_AINITIAL</span> which includes the whole path, including the date.<br />
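A sketch of the jinja2 approach, with the directory and file-name pattern taken from this log. <span style="font-family:monospace">START_DATE</span> here is a hypothetical input variable in the <span style="font-family:monospace">2008-10-25-h00</span> format; the exact placement in the suite files may differ:<br />
<br />
```
{# suite.rc, sketch only #}
{% set TAMIP_AINITIAL = '/projects/access/AMEL/TransposeAMIPII/ga6/ga6ic/ga6tamip' + START_DATE %}
```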
<br />
===Result=== <br />
<br />
It seems to be working now. That said, the only difference in ancillary files between this run and the 1950 HRMIP run is the initial conditions file, and of course the start date. No changes have been made to sea ice and other external input files.<br />
<br />
However, many of the files seem to encompass date ranges far into the 2000s.<br />
<br />
Also, the initial files are currently taken from <span style="font-family:monospace">/projects/access/AMEL/TransposeAMIPII/ga6/ga6ic/</span>. However, all these files are N96, not N216 -- which means that the reconfiguration regrids them.</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS-S&diff=16ACCESS-S2017-06-07T05:52:22Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>=ACCESS-S= <br />
<br />
This is an as-of-yet unlisted document to chart my progress with ACCESS-S.<br />
Once it's running, hopefully this will make it easier to convert it into a document.<br />
<br />
==Getting ACCESS-S== <br />
<br />
This is the initial email I got from Hailin:<br />
<syntaxhighlight><br />
Hi Holger,<br />
<br />
If you'd like to play the ACCESS-S1 suite, you can copy my suite au-aa563.<br />
<br />
The followings are the key parameters to run the suite.<br />
<br />
In rose-suite.conf:<br />
MAKE_BUILDS=true #set true to compile source codes<br />
N_GSHC_MEMBERS=3 #num of ensemble members, as for MEMBERS=-m in app/glosea_init_cntl_file/rose-app.conf<br />
N_GSHC_STEPS=2 #number of RESUBMIT (number of chunk runs)<br />
RESUB_DAYS=1 #number of days per chunk run<br />
<br />
In app/glosea_init_cntl_file/rose-app.conf:<br />
GS_HCST_START_DATE=1990050100 #start date, it is 01 of May in this case<br />
MEMBERS=-m 3 #total number of ensembles, must be the same as the N_GSHC_MEMBERS in rose-suite.conf<br />
GS_YEAR_LIST=1997 #the year of the run<br />
<br />
After you compiled the codes and run the job successfully, you could maintain your own INSTALL_DIR which is defined in suite.rc:<br />
INSTALL_DIR = "/short/dx2/hxy599/gc2-install"<br />
<br />
If you have any problems please let me know.<br />
<br />
Regards,<br />
Hailin<br />
</syntaxhighlight><br />
<br />
So I made a copy of that; the new rose suite is <span style="font-family:monospace">au-aa566</span>. Most settings were already at the values Hailin indicated in his email.<br />
I've changed the <span style="font-family:monospace">INSTALL_DIR</span> to <span style="font-family:monospace">/short/${PROJECT}/${USER}/gc2-install</span>. Since I'm not a member of the groups <span style="font-family:monospace">dx2</span> or <span style="font-family:monospace">ub7</span>, I'm also trying to copy the <span style="font-family:monospace">DUMP_DIR</span> and <span style="font-family:monospace">DUMP_DIR_BOM</span> directories to my <span style="font-family:monospace">/short/${PROJECT}/${USER}/dump</span> and <span style="font-family:monospace">/short/${PROJECT}/${USER}/dump-bom</span>, respectively, but at 27TB of data that isn't feasible.<br />
<br />
==Getting ACCESS-S to run== <br />
<br />
I've copied the job, and just tried to run it, but it failed with error messages, culminating in '''Illegal item: [scheduling]initial cycle time'''<br />
<br />
The solution to this is to use older versions of CYLC and ROSE with this command:<br />
<br />
<syntaxhighlight lang=bash><br />
$ CYLC_VERSION=6.9.1 ROSE_VERSION=2016.06.1 rosie go<br />
</syntaxhighlight><br />
<br />
====First hurdles==== <br />
<br />
# <span style="font-family:monospace">gsfc_get_analysis</span> gets a '''submit-failed'''<br />
# <span style="font-family:monospace">GSHC_M1-3</span> get '''failed'''<br />
<br />
For now, I've reset the <span style="font-family:monospace">suite.rc</span> to point to the BoM directories, to see whether that changes anything -- it didn't.<br />
<br />
Looking at the job activity log and the job itself of <span style="font-family:monospace">gsfc_get_analysis</span>, I notice strange PBS directives: <span style="font-family:monospace">ConsumableMemory(2GB)</span> and <span style="font-family:monospace">wall_clock_limit</span>. I find these strings in suite.rc, and replace them with <span style="font-family:monospace">-l vmem=2GB</span> and <span style="font-family:monospace">-l walltime=01:11:00</span>. (I also find another reference to these values for <span style="font-family:monospace">glosea_joi_prods</span>, and change them as well.)<br />
<br />
This seems to have succeeded for <span style="font-family:monospace">gsfc_get_analysis</span>, but the <span style="font-family:monospace">GSHC_M1-3</span> still fail. I found this error message:<br />
<br />
<syntaxhighlight><br />
????????????????????????????????????????????????????????????????????????????????<br />
???!!!???!!!???!!!???!!!???!!! ERROR ???!!!???!!!???!!!???!!!???!!!???!!!<br />
? Error Code: 19<br />
? Error Message: Error reading namelist NLSTCALL. Please check input list against code.<br />
? Error from processor: 0<br />
? Error number: 0<br />
????????????????????????????????????????????????????????????????????????????????<br />
</syntaxhighlight><br />
<br />
It seems the namelist contains a value for <span style="font-family:monospace">control_resubmit</span>, which the UM doesn't understand. Since rose considers this variable compulsory, I've had to remove it from the file <span style="font-family:monospace">~/roses/au-aa566/app/coupled/rose-app.conf</span>, and now I've submitted it again. (Or I could have disabled all metadata from the menu option...)<br />
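Removing the item can be done by hand in the editor, or with a one-liner like the following (demonstrated here on a scratch copy; the real target was the <span style="font-family:monospace">rose-app.conf</span> above):<br />
<br />
```shell
# Work on a scratch copy of the app config for the demo; the contents
# are stand-ins, not the real file.
conf=$(mktemp)
printf 'control_resubmit=y\nmodel=um\n' > "$conf"

# Drop the namelist item the UM does not understand
sed -i '/^control_resubmit=/d' "$conf"
cat "$conf"
```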
<br />
====Second issues==== <br />
<br />
<span style="font-family:monospace">gsfc_get_analysis</span> fails at the end, but it seems that it's not doing all that much:<br />
<br />
<syntaxhighlight><br />
======================================================================================<br />
Resource Usage on 2017-06-06 15:39:23:<br />
Job Id: 5497709.r-man2<br />
Project: w35<br />
Exit Status: 1<br />
Service Units: 1.24<br />
NCPUs Requested: 1 NCPUs Used: 1<br />
CPU Time Used: 00:00:03<br />
Memory Requested: 500.0MB Memory Used: 9.56MB<br />
Walltime requested: 01:11:00 Walltime Used: 01:14:15<br />
JobFS requested: 100.0MB JobFS used: 0B<br />
======================================================================================<br />
</syntaxhighlight><br />
<br />
CPU time used is only 3 seconds, while it ran out of walltime after almost 1h15m. <br />
<br />
So it seems that, since <span style="font-family:monospace">SUITE_TYPE</span> is set to <span style="font-family:monospace">research</span> (and thereby <span style="font-family:monospace">GS_SUITE_TYPE</span> is also <span style="font-family:monospace">research</span>), some environment variables are set to directories that may exist on the Met Office computer, but not on raijin:<br />
<br />
<syntaxhighlight><br />
{%- if RUN_GSFC or RUN_GSMN %}<br />
[[gsfc_get_analysis]]<br />
environment scripting = """eval $(rose task-env)<br />
export SHORT_DATE=${ROSE_TASK_CYCLE_TIME%%00}"""<br />
[[environment]]<br />
ANALYSES_DATADIR = ${ROSE_DATAC}/analyses/${ROSE_TASK_CYCLE_TIME}<br />
ROSE_TASK_APP = glosea_get_fcst_analyses<br />
{% if GS_SUITE_TYPE != 'research' %}<br />
FOAM_SUITE_NAME = $(os_get_suiteid --mode={{ SUITE_TYPE }} ocean)<br />
GLOBAL_SUITE_NAME = $(os_get_suiteid --mode={{ SUITE_TYPE }} global)<br />
ROSE_DATAC_GLOBAL = ${ROSE_DATAC/$CYLC_SUITE_NAME/$GLOBAL_SUITE_NAME}<br />
ROSE_DATAC_FOAM = ${ROSE_DATAC/$CYLC_SUITE_NAME/$FOAM_SUITE_NAME}<br />
{% else %}<br />
ROSE_DATAC_GLOBAL = /critical/opfc/suites-oper/global/share/data/${ROSE_TASK_CYCLE_TIME}<br />
ROSE_DATAC_FOAM = /critical/opfc/suites-oper/ocean/share/data/${ROSE_TASK_CYCLE_TIME}<br />
{% endif %}<br />
[[directives]]<br />
-l = "vmem=2GB,walltime=01:11:00"<br />
# resources = ConsumableMemory(2Gb)<br />
# wall_clock_limit = "01:11:00,01:10:00"<br />
</syntaxhighlight><br />
<br />
For now I replaced the <span style="font-family:monospace">else</span> clause above with the same data as the original and tried again.<br />
<br />
====Full Reset====<br />
<br />
Scott noticed that there were some new changes to the configuration file, namely <span style="font-family:monospace">RUN_GSFC</span> and <span style="font-family:monospace">RUN_GSMN</span> were set to true.<br />
<br />
Since I couldn't remember ever changing them, I just did a full reset and changed only the project.</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS-MOSES-KPP&diff=14ACCESS-MOSES-KPP2017-06-06T01:05:59Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>=ACCESS 1.0 with KPP= <br />
<br />
This documentation describes how to run ACCESS1.0 with the MOSES scheme and KPP.<br />
<br />
==Prerequisites== <br />
<br />
===UMUI Jobs=== <br />
<br />
It is recommended to copy the UMUI jobs <span style="font-family:monospace">vatae</span> for N96 resolution or <span style="font-family:monospace">vaski</span> for N48 resolution.<br />
<br />
===KPP executable=== <br />
The best KPP version for this can be checked out from the repository with these commands:<br />
<br />
<syntaxhighlight><br />
svn co https://access-svn.nci.org.au/svn/um/branches/dev/hxw599/kpp-gregorian<br />
cd kpp-gregorian<br />
</syntaxhighlight><br />
<br />
This version is currently set for N48 resolution. If N96 resolution is needed, the file <span style="font-family:monospace">parameter.inc</span> needs to be modified.<br />
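The resolution is fixed at compile time via the global grid-size parameters. The names below match the <span style="font-family:monospace">NX_GLOBE</span>/<span style="font-family:monospace">NY_GLOBE</span> symbols used in the coupling code, but check <span style="font-family:monospace">parameter.inc</span> itself for the exact form; the grid sizes assume the standard UM N96 (192 x 145) and N48 (96 x 73) grids:<br />
<br />
```
! parameter.inc sketch only -- verify against the actual file
! N96: 192 x 145    N48: 96 x 73
      PARAMETER ( NX_GLOBE = 192, NY_GLOBE = 145 )
```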
<br />
Compilation of the executable can be done with these commands:<br />
<br />
<syntaxhighlight><br />
module use ~access/modules<br />
module load intel-cc/17.0.1.132<br />
module load intel-fc/17.0.1.132<br />
module load openmpi/1.8.2<br />
module load fcm/2014.06.0<br />
module load gcom/3.3<br />
module load dummygrib<br />
module load oasis3-mct-longrun/testing<br />
module load um<br />
module unload netcdf<br />
module load netcdf/4.2.1.1<br />
make oasis3_coupled<br />
</syntaxhighlight><br />
<br />
This will create the <span style="font-family:monospace">KPP_ocean</span> executable that the script environment variable <span style="font-family:monospace">KPPBIN</span> should point to.<br />
<br />
===Ancillaries===<br />
<br />
The files required to run the KPP module are located on <span style="font-family:monospace">raijin</span> at <span style="font-family:monospace">/projects/access/data/ancil/kpp-N48</span> and <span style="font-family:monospace">/projects/access/data/ancil/kpp</span> for N48 and N96, respectively.</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS1.0-N48&diff=23ACCESS1.0-N482017-05-04T04:20:33Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>[[ListOfExperiments | (Back to List of Experiments)]]<br />
<br />
==Description== <br />
ACCESS1.0 version of the UM 7.3 changed from N96 to N48 resolution. This is designed to do long (many years) global climate simulations. It is the basis for coupled simulations with simplified ocean components.<br />
<br />
==Owner== <br />
Claudia Frauen and Scott Wales<br />
<br />
==History== <br />
ACCESS1.0 version of the UM 7.3 changed from N96 to N48 resolution.<br />
<br />
==Use Cases== <br />
This is designed to do long (many years) global climate simulations. It is the basis for coupled simulations with simplified ocean components.<br />
<br />
==Modules of the Experiment== <br />
{| <br />
| Atmosphere || UM7.3, as in ACCESS 1.0, but with N48 resolution <br />
|-<br />
| Land Surface || MOSES <br />
|-<br />
| Ocean || Prescribed <br />
|-<br />
| Sea Ice || Prescribed <br />
|-<br />
|}<br />
<br />
==Job ID==<br />
uajcb<br />
<br />
==Performance== <br />
On NCI raijin it computes slightly more than 1 model year per 1 hour of CPU time with 48 processors.</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=Kpp-oasis-mct&diff=199Kpp-oasis-mct2015-11-09T06:04:35Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>=Couple KPP to UM using OASIS3-MCT= <br />
<br />
==Changes to KPP== <br />
<br />
<syntaxhighlight lang=diff><br />
+++ Makefile (working copy)<br />
-FFLAGS=-fpp -xHost -O3 -r8 -I. -traceback -fp-model precise<br />
+FFLAGS=-fpp -xHost -O3 -r8 -I. -traceback -fp-model precise -diag-disable 10010<br />
<br />
-OASIS3_LIB=-lpsmile.MPI1 -lmpp_io<br />
+OASIS3_LIB=-lpsmile.MPI1 -lmct -lmpeu -lscrip<br />
<br />
+++ steves_3D_ocn.f (working copy)<br />
- use mod_prism_proto<br />
+ use mod_oasis<br />
<br />
- call prism_get_localcomm_proto(kpp_mpi_comm,mpierr)<br />
+ call oasis_get_localcomm(kpp_mpi_comm,mpierr)<br />
<br />
+++ init_oasis3.f (working copy)<br />
- USE mod_kinds_model<br />
- USE mod_prism_proto<br />
- USE mod_prism_def_partition_proto<br />
- USE mod_prism_put_proto<br />
- USE mod_prism_get_proto<br />
- USE mod_prism_grids_writing<br />
+ USE mod_oasis_kinds<br />
+ USE mod_oasis<br />
<br />
- CALL prism_init_comp_proto(il_comp_id, cp_modnam, ierror)<br />
- IF (ierror .NE. PRISM_Ok) THEN<br />
+ CALL oasis_init_comp(il_comp_id, cp_modnam, ierror)<br />
+ IF (ierror .NE. OASIS_Ok) THEN<br />
WRITE(nuout,*) 'KPP: Received error from ',<br />
- + 'PRISM_Init_Comp_Proto = ',ierror<br />
- CALL prism_abort_proto(il_comp_id,'KPP init_oasis3.f','abort1')<br />
+ + 'OASIS_Init_Comp_Proto = ',ierror<br />
+ CALL oasis_abort(il_comp_id,'KPP init_oasis3.f','abort1')<br />
! Can/should we call MIXED_ABORT here as well?<br />
ELSE<br />
- WRITE(nuout,*) 'KPP: Successful call to PRISM_Init_Comp_Proto'<br />
+ WRITE(nuout,*) 'KPP: Successful call to OASIS_Init_Comp_Proto'<br />
ENDIF<br />
<br />
! Get local communicator<br />
- CALL prism_get_localcomm_proto(il_commlocal,ierror)<br />
- IF (ierror .NE. PRISM_Ok) THEN<br />
+ CALL oasis_get_localcomm(il_commlocal,ierror)<br />
+ IF (ierror .NE. OASIS_Ok) THEN<br />
WRITE(nuout,*) 'KPP: Received error from ',<br />
- + 'PRISM_Get_LocalComm_Proto = ',ierror<br />
+ + 'OASIS_Get_LocalComm_Proto = ',ierror<br />
ELSE<br />
WRITE(nuout,*) 'KPP: Successfully received local communicator'<br />
ENDIF<br />
@@ -81,15 +81,15 @@<br />
<br />
! Define the grids used by KPP (for master processor only)<br />
IF (il_rank .EQ. 0) THEN<br />
- CALL prism_start_grids_writing(il_flag)<br />
+ CALL oasis_start_grids_writing(il_flag)<br />
IF (il_flag .EQ. 1) THEN<br />
! Will we ever need to do this? Do we need to support it?<br />
WRITE(nuout,*) 'KPP: il_flag=1, so we will write ',<br />
- + 'grids for PRISM'<br />
- CALL prism_terminate_grids_writing()<br />
+ + 'grids for OASIS'<br />
+ CALL oasis_terminate_grids_writing()<br />
ELSE<br />
WRITE(nuout,*) 'KPP: il_flag/=1, so we will not write ',<br />
- + 'grids for PRISM'<br />
+ + 'grids for OASIS'<br />
ENDIF<br />
ENDIF<br />
<br />
@@ -103,12 +103,12 @@<br />
il_paral ( clim_offset ) = 0<br />
il_paral ( clim_length ) = NX_GLOBE*NY_GLOBE<br />
<br />
- CALL prism_def_partition_proto(il_part_id,il_paral,ierror)<br />
- IF (ierror.NE.PRISM_Ok) THEN<br />
+ CALL oasis_def_partition(il_part_id,il_paral,ierror)<br />
+ IF (ierror.NE.OASIS_Ok) THEN<br />
WRITE(nuout,*) 'KPP: Received error from ',<br />
- + 'PRISM_Def_Partition_Proto = ',ierror<br />
+ + 'OASIS_Def_Partition_Proto = ',ierror<br />
ELSE<br />
- WRITE(nuout,*) 'KPP: Called PRISM_Def_Partition_Proto'<br />
+ WRITE(nuout,*) 'KPP: Called OASIS_Def_Partition_Proto'<br />
ENDIF<br />
<br />
#ifdef TOYCLIM /* For the OASIS3 toy model - Exchange 1D fields */<br />
@@ -136,15 +136,15 @@<br />
cl_writ(6)='SVNOCEAN'<br />
<br />
DO i=1,jpfldout<br />
- CALL prism_def_var_proto(il_var_id_out(i),cl_writ(i),<br />
- + il_part_id,il_var_nodims,PRISM_Out,il_var_shape,<br />
- + PRISM_Real,ierror)<br />
- IF (ierror.NE.PRISM_Ok) THEN<br />
+ CALL oasis_def_var(il_var_id_out(i),cl_writ(i),<br />
+ + il_part_id,il_var_nodims,OASIS_Out,il_var_shape,<br />
+ + OASIS_Real,ierror)<br />
+ IF (ierror.NE.OASIS_Ok) THEN<br />
WRITE(nuout,*) 'KPP: Received error from ',<br />
- + 'PRISM_Def_Var_Proto = ',ierror,'for variable ',<br />
+ + 'OASIS_Def_Var_Proto = ',ierror,'for variable ',<br />
+ cl_writ(i),' (output field)'<br />
ELSE<br />
- WRITE(nuout,*) 'KPP: Called PRISM_Def_Var_Proto for ',<br />
+ WRITE(nuout,*) 'KPP: Called OASIS_Def_Var_Proto for ',<br />
+ 'variable ',cl_writ(i),' (output field)'<br />
ENDIF<br />
ENDDO<br />
@@ -163,25 +163,25 @@<br />
cl_read(11)='TAUY'<br />
<br />
DO i=1,jpfldin<br />
- CALL prism_def_var_proto(il_var_id_in(i),cl_read(i),<br />
- + il_part_id,il_var_nodims,PRISM_In,il_var_shape,<br />
- + PRISM_Real,ierror)<br />
- IF (ierror.NE.PRISM_Ok) THEN<br />
+ CALL oasis_def_var(il_var_id_in(i),cl_read(i),<br />
+ + il_part_id,il_var_nodims,OASIS_In,il_var_shape,<br />
+ + OASIS_Real,ierror)<br />
+ IF (ierror.NE.OASIS_Ok) THEN<br />
WRITE(nuout,*) 'KPP: Received error from ',<br />
- + 'PRISM_Def_Var_Proto = ',ierror,'for variable',<br />
+ + 'OASIS_Def_Var_Proto = ',ierror,'for variable',<br />
+ cl_read(i),' (input field)'<br />
ELSE<br />
- WRITE(nuout,*) 'KPP: Called PRISM_Def_Var_Proto for ',<br />
+ WRITE(nuout,*) 'KPP: Called OASIS_Def_Var_Proto for ',<br />
+ 'variable ',cl_read(i),' (input field)'<br />
ENDIF<br />
ENDDO<br />
<br />
- CALL prism_enddef_proto(ierror)<br />
- IF (ierror.NE.PRISM_Ok) THEN<br />
+ CALL oasis_enddef(ierror)<br />
+ IF (ierror.NE.OASIS_Ok) THEN<br />
WRITE(nuout,*) 'KPP: Received error from ',<br />
- + 'PRISM_enddef_proto = ',ierror<br />
+ + 'OASIS_enddef = ',ierror<br />
ELSE<br />
- WRITE(nuout,*) 'KPP: Called PRISM_Enddef_Proto'<br />
+ WRITE(nuout,*) 'KPP: Called OASIS_Enddef_Proto'<br />
ENDIF<br />
<br />
RETURN<br />
<br />
+++ couple_io_oasis3.f (working copy)<br />
- USE mod_kinds_model<br />
- USE mod_prism_proto<br />
- USE mod_prism_def_partition_proto<br />
- USE mod_prism_put_proto<br />
- USE mod_prism_get_proto<br />
- USE mod_prism_grids_writing<br />
+ USE mod_oasis_kinds<br />
+ USE mod_oasis<br />
<br />
@@ -28,6 +28,7 @@<br />
#include <times.com><br />
#include <constants.com><br />
#include <initialcon.com><br />
+ include "mpif.h"<br />
c<br />
c Output variables on the KPP regional grid - returned to<br />
c the calling routine (usually <fluxes>).<br />
@@ -64,15 +65,15 @@<br />
- call prism_get_localcomm_proto(kpp_mpi_comm, ierror)<br />
+ call oasis_get_localcomm(kpp_mpi_comm, ierror)<br />
if (ierror .ne. 0) then<br />
- call prism_abort_proto(il_comp_id,<br />
+ call oasis_abort(il_comp_id,<br />
+ 'couple_io_oasis3.f',<br />
+ 'getcomm')<br />
end if<br />
call MPI_Comm_rank(kpp_mpi_comm, kpp_mpi_rank, ierror)<br />
if (ierror .ne. 0) then<br />
- call prism_abort_proto(il_comp_id,'couple_io_oasis3.f','rank')<br />
+ call oasis_abort(il_comp_id,'couple_io_oasis3.f','rank')<br />
end if<br />
- CALL prism_get_proto(il_var_id_in(i),<br />
+ CALL oasis_get(il_var_id_in(i),<br />
<br />
- IF (ierror.NE.PRISM_Ok .and. ierror .LT. PRISM_Recvd) THEN<br />
+ IF (ierror.NE.OASIS_Ok .and. ierror .LT. OASIS_Recvd) THEN<br />
<br />
- + 'PRISM_Get_Proto =',ierror,' receiving variable ',<br />
+ + 'OASIS_Get_Proto =',ierror,' receiving variable ',<br />
<br />
- CALL prism_abort_proto(il_comp_id,'couple_io_oasis3.f',<br />
+ CALL oasis_abort(il_comp_id,'couple_io_oasis3.f',<br />
<br />
@@ -191,15 +192,15 @@<br />
<br />
- USE mod_kinds_model<br />
- USE mod_prism_proto<br />
- USE mod_prism_def_partition_proto<br />
- USE mod_prism_put_proto<br />
- USE mod_prism_get_proto<br />
- USE mod_prism_grids_writing<br />
+ USE mod_oasis_kinds<br />
+ USE mod_oasis<br />
<br />
@@ -418,22 +419,22 @@<br />
<br />
- + 'KPP: Calling PRISM_Put_Proto for variable ' // cl_writ(i) )<br />
- CALL prism_put_proto(il_var_id_out(i),<br />
+ + 'KPP: Calling OASIS_Put_Proto for variable ' // cl_writ(i) )<br />
+ CALL oasis_put(il_var_id_out(i),<br />
<br />
- IF (ierror.NE.PRISM_Ok.and.ierror.LT.PRISM_Sent) THEN<br />
+ IF (ierror.NE.OASIS_Ok.and.ierror.LT.OASIS_Sent) THEN<br />
<br />
- + 'KPP: Received error from PRISM_Put_Proto =',ierror )<br />
+ + 'KPP: Received error from OASIS_Put_Proto =',ierror )<br />
<br />
- CALL prism_abort_proto(il_comp_id,'couple_io_oasis3.f',<br />
+ CALL oasis_abort(il_comp_id,'couple_io_oasis3.f',<br />
<br />
- + 'KPP: Successfully called PRISM_Put_Proto for variable ' //<br />
+ + 'KPP: Successfully called OASIS_Put_Proto for variable ' //<br />
<br />
- USE mod_kinds_model<br />
- USE mod_prism_proto<br />
+ USE mod_oasis_kinds<br />
+ USE mod_oasis<br />
<br />
- + 'Calling prism_terminate_proto(ierror)' )<br />
- CALL prism_terminate_proto(ierror)<br />
+ + 'Calling oasis_terminate(ierror)' )<br />
+ CALL oasis_terminate(ierror)<br />
<br />
- + 'Called prism_terminate_proto(ierror)' )<br />
- IF (ierror .NE. PRISM_Ok) THEN<br />
+ + 'Called oasis_terminate(ierror)' )<br />
+ IF (ierror .NE. OASIS_Ok) THEN<br />
<br />
- + 'PRISM_Terminate_Proto =',ierror,<br />
+ + 'OASIS_Terminate_Proto =',ierror,<br />
<br />
</syntaxhighlight><br />
<br />
==Changes to UM== <br />
<br />
Mainly the configuration build:<br />
<br />
'''<span style="font-family:monospace">ummodel/cfg/bld.cfg</span>'''<br />
Add:<br />
<syntaxhighlight><br />
excl_dep USE::mod_oasis_kinds<br />
</syntaxhighlight><br />
and to <span style="font-family:monospace">tool::ldflags</span> remove <span style="font-family:monospace">-lmpp_io</span> and add <span style="font-family:monospace">-lmct -lscrip -lmpeu</span><br />
<br />
'''<span style="font-family:monospace">umrecon/cfg/bld.cfg</span>'''<br />
to <span style="font-family:monospace">tool::ldflags</span> remove <span style="font-family:monospace">-lmpp_io</span> and add <span style="font-family:monospace">-lmct -lscrip -lmpeu</span><br />
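Putting the two <span style="font-family:monospace">bld.cfg</span> edits together, the relevant fragment looks roughly like this (FCM build-config syntax; the surrounding flags are elided with <span style="font-family:monospace">...</span>):<br />
<br />
```
# bld.cfg sketch -- add the exclude and swap the OASIS libraries
excl_dep        USE::mod_oasis_kinds
tool::ldflags   ... -lpsmile.MPI1 -lmct -lmpeu -lscrip ...
```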
<br />
==Current Crash== <br />
<br />
Runtime Crash:<br />
<br />
<syntaxhighlight><br />
NEMO_NPROC CICE_NPROC<br />
**********************************************************************************************<br />
Version 7.3 template, Unified Model , Non-Operational<br />
Created by UMUI version 7.3<br />
**********************************************************************************************<br />
PATH used = /short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/bin:<br />
/short/w35/hxw599/UM_ROUTDIR//vaapb/bin:/projects/access/umdir/vn7.3/normal/scripts:<br />
/projects/access/umdir/vn7.3/normal/exec:/projects/access/umdir/fcm1.4/bin:<br />
/projects/access/umdir/vn7.3/normal/utils:/projects/access/umdir/bin:<br />
/projects/access/umdir/vn7.3/bin:/projects/access/umdir/umui2.0/bin:<br />
/projects/access/bin:/projects/access/umdir/vn7.3/normal/runscripts:<br />
/apps/openmpi/wrapper:/apps/openmpi/1.8.2/bin:/apps/x11vnc/0.9.13/bin:<br />
/opt/bin:/bin:/usr/bin:/opt/pbs/default/bin:/projects/access/bin/:/home/599/hxw599/bin:.<br />
**********************************************************************************************<br />
Job started at : Mon Nov 9 15:57:08 AEDT 2015<br />
Run started from UMUI<br />
Running from control files in /home/599/hxw599/umui_runs/vaapb-313155649<br />
uamul (collab) - N48 KPP - sea ice<br />
This job is running on machine r76,<br />
using UM directory /projects/access/umdir,<br />
*******************************************************************************************<br />
Starting script : qsexecute<br />
Starting time : Mon Nov 9 15:57:08 AEDT 2015<br />
*******************************************************************************************<br />
<br />
KPP using 15 processors<br />
<br />
/short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/bin/qsexecute: Executing setup<br />
<br />
/short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/bin/qssetup: Job terminated normally<br />
<br />
/short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/bin/qsexecute: Executing dump reconfiguration program<br />
<br />
**********************************************************************************<br />
RCF Executable : /short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/bin/qxreconf<br />
**********************************************************************************<br />
<br />
/short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/bin/qsexecute: Executing model run<br />
<br />
**********************************************************************************<br />
UM Executable : /short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/bin/vaapb.exe<br />
**********************************************************************************<br />
<br />
No OASIS3 angles file will be used.<br />
*****************************************************************************<br />
No existing rmp_* file directory specified<br />
Any existing rmp_* files will be removed from<br />
for safety<br />
Generating rmp_* files at run time<br />
NOTE: This will vastly increase your required run time<br />
*****************************************************************************<br />
cp: cannot stat `/short/w35/hxw599/vaapb/kpp-scripts//namcouple': No such file or directory<br />
/short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/bin/qsexecute[888]: /short/w35/hxw599/vaapb/kpp-scripts//kpp_run_pre.ksh: not found [No such file or directory]<br />
readline() on closed filehandle F0 at /short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/bin/OASIS3_kpp line 60.<br />
readline() on closed filehandle F1 at /short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/bin/OASIS3_kpp line 120.<br />
seconds total = 0<br />
oasis_init_comp: Calling MPI_Init<br />
... line repeated to a total of 48 times == number of UM cores<br />
<br />
oasis_init_comp: Not Calling MPI_Init<br />
... line repeated to a total of 15 times == number of KPP cores<br />
<br />
forrtl: error (78): process killed (SIGTERM)<br />
... (snip 47 um sigterms and 15 kpp sigterms)<br />
<br />
/short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/bin/qsexecute[1165]: /short/w35/hxw599/vaapb/kpp-scripts//kpp_run_post.ksh: not found [No such file or directo<br />
ry]<br />
**********************************************************************************************<br />
Ending script : qsexecute<br />
Completion code : 1<br />
Completion time : Mon Nov 9 15:57:16 AEDT 2015<br />
**********************************************************************************************<br />
\n\n\n<br />
<br />
/short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/bin/qsmaster: Failed in qsexecute in model vaapb<br />
**********************************************************************************************<br />
Starting script : qsfinal<br />
Starting time : Mon Nov 9 15:57:16 AEDT 2015<br />
**********************************************************************************************<br />
<br />
/short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/bin/qsfinal: Model vaapb - Error: No history files<br />
**********************************************************************************************<br />
Ending script : qsfinal<br />
Completion code : 135<br />
Completion time : Mon Nov 9 15:57:16 AEDT 2015<br />
**********************************************************************************************<br />
\n\n\n<br />
<br />
/short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/bin/qsmaster: failed in final in model vaapb<br />
<<<< Information about How Many Lines of Output follow >>>><br />
73 lines in main OUTPUT file.<br />
0 lines of O/P from pe0.<br />
<<<< Lines of Output Information ends >>>><br />
<br />
[two ASCII-art banner words from the UM log output, garbled on import]<br />
<br />
qsexecute: %RECONA% Atmosphere reconfiguration step<br />
<br />
=====================================================<br />
GCOM Version 3.3<br />
openmpi/1.6.5,intel-fc/12.1.9.293<br />
Using precision : 64bit INTEGERs and 64bit REALs<br />
Built at Thu Aug 29 20:23:42 EST 2013<br />
=====================================================<br />
<br />
Parallel Reconfiguration using 1 processor(s)<br />
divided into a LPG with nproc_x= 1 and nproc_y= 1<br />
<br />
OPEN: Claimed 32000512 Bytes (4000064 Words) for Buffering<br />
OPEN: Buffer Address is F6A41040<br />
CLOSE: File /short/w48/bxp565/ancils/lsm_claudia Closed on Unit 12<br />
CLOSE: File /projects/access/data/ancil/access-1.3/N48-cal360/qrparm.soil Closed on Unit 12<br />
CLOSE: File /short/w48/bxp565/ancils/lsm_claudia Closed on Unit 12<br />
CLOSE: File /projects/access/data/ancil/access-1.3/N48-cal360/qrparm.orog Closed on Unit 12<br />
CLOSE: File /projects/access/data/ancil/access-1.3/N48-cal360/cable_vegfrac_N48.anc Closed on Unit 12<br />
CLOSE: File /projects/access/data/ancil/access-1.3/N48-cal360/cable_vegfunc_N48.anc Closed on Unit 12<br />
CLOSE: File /short/w48/bxp565/ancils/lfrac_claudia Closed on Unit 12<br />
CLOSE: File /projects/access/data/ancil/access-1.3/N48-cal360/qrparm.soil.dust Closed on Unit 12<br />
CLOSE: File /projects/access/data/ancil/access-1.3/N48-cal360/TRIP_riv_store_ancil2 Closed on Unit 12<br />
CLOSE: File /projects/access/data/ancil/access-1.3/N48-cal360/riverrouting_access_v2 Closed on Unit 12<br />
CLOSE: File /short/w35/hxw599/UM_ROUTDIR/hxw599/vaapb/vaapb.astart Closed on Unit 11<br />
CLOSE: File /short/w48/dxd565/UM_ROUTDIR/dxd565/ualdd//um-dump.restart Closed on Unit 10<br />
<br />
[ASCII-art banner word from the UM log output, garbled on import]<br />
<br />
USING KPP_PRERUN<br />
<br />
qsexecute: %MODEL% output follows:-<br />
<br />
UMMACHINE = ALTIX<br />
false<br />
USING LINUXMPP<br />
ACCESSRUNCMD -n 48 ./um7.3x : -n 15 ./toyoce<br />
--------------------------------------------------------------------------<br />
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD<br />
with errorcode 0.<br />
<br />
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.<br />
You may or may not see output from other processes, depending on<br />
exactly when Open MPI kills them.<br />
--------------------------------------------------------------------------<br />
USING KPP_POSTRUN<br />
/short/w35/hxw599/vaapb/kpp-scripts//kpp_run_post.ksh exited with error code 127<br />
0+1 records in<br />
0+1 records out<br />
3787 bytes (3.8 kB) copied, 0.000257916 s, 14.7 MB/s<br />
</syntaxhighlight></div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=Glossary&diff=153Glossary2015-09-15T01:38:36Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>This glossary is still a stub. If you want new terms added, go ahead and add them, and we will try to fill in their meanings later.<br />
<br />
{| <br />
| '''Name''' || '''Description''' <br />
|-<br />
| UM || Unified Model, the UK Met Office atmosphere model <br />
|-<br />
| ACCESS || Australian Community Climate and Earth System Simulator <br />
|-<br />
| WRF || Weather Research and Forecasting model, a regional atmosphere model <br />
|-<br />
| MOM || Modular Ocean Model, the GFDL ocean model <br />
|-<br />
| JULES || Joint UK Land Environment Simulator, the Met Office land surface model <br />
|-<br />
| CABLE || Community Atmosphere Biosphere Land Exchange model, the Australian land surface model <br />
|-<br />
| HadGEM || Hadley Centre Global Environment Model <br />
|-<br />
| CMIP || Coupled Model Intercomparison Project <br />
|}</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=SCM_with_ERA-Interim&diff=310SCM with ERA-Interim2015-05-11T04:02:47Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>=Single Column Model with ERA-Interim= <br />
<br />
This is a work in progress. I am using this page to keep notes on what I do to get SCM running. I will later rewrite this page into something coherent on how to run the SCM.<br />
<br />
==Compiling the SCM== <br />
<br />
The SCM is compiled on raijin without using the UMUI.<br />
<br />
<syntaxhighlight lang=bash><br />
$ cd /short/$PROJECT/$USER<br />
$ mkdir SCM<br />
$ cd SCM<br />
$ tar xvzf ~access/AccessModelExperimentLibrary/scm_access13/scm_access13_standalone.tgz<br />
$ cd scm_access13_standalone<br />
$ module load intel-fc intel-cc netcdf<br />
$ make -f Makefile.scm<br />
</syntaxhighlight><br />
<br />
==Generating the Namelist== <br />
<br />
If you are running SCM on raijin, you can use a python script which the CMS team has created to convert ERA-Interim data into the SCM namelist.<br />
You can get it from github:<br />
<br />
<syntaxhighlight lang=bash><br />
$ cd /short/$PROJECT/$USER<br />
$ git clone https://github.com/coecms/era_genesis.git<br />
</syntaxhighlight><br />
<br />
This will create a new subdirectory era_genesis where it will place the scripts and two default input namelists: <span style="font-family:monospace">template.scm</span>, which looks just like the final <span style="font-family:monospace">namelist.scm</span>, and <span style="font-family:monospace">base.inp</span>, from which for example the height dimension is read. You are free, and even encouraged, to supply your own namelists, or to modify them as needed.<br />
<br />
Before you can run the script, you first need to load a few modules:<br />
<br />
<syntaxhighlight lang=bash><br />
$ module use ~access/modules<br />
$ module load python pythonlib/netCDF4 pythonlib/f90nml<br />
</syntaxhighlight><br />
<br />
The actual script is called <span style="font-family:monospace">era_genesis.py</span>, and to run it, there are some parameters that you have to, or may, give. <br />
<br />
<syntaxhighlight lang=bash><br />
$ ./era_genesis.py -h<br />
usage: era_genesis.py [-h] [-X LON] [-Y LAT] [-S START_DATE] [-E END_DATE]<br />
[-N NUM] [-b FILE] [-t FILE] [-o FILE] [-s SURFACE_TYPE]<br />
[-d] [-T]<br />
<br />
Cleans up the template file<br />
<br />
optional arguments:<br />
-h, --help show this help message and exit<br />
-X LON, --lon LON longitude<br />
-Y LAT, --lat LAT latitude<br />
-S START_DATE, --start-date START_DATE<br />
start date: YYYYMMDD[HHMM]<br />
-E END_DATE, --end-date END_DATE<br />
end date: YYYYMMDD[HHMM]<br />
-N NUM, --num NUM number of times<br />
-b FILE, --base FILE Base Namelist<br />
-t FILE, --template FILE<br />
Namelist Template<br />
-o FILE, --output FILE<br />
Output Namelist<br />
-s SURFACE_TYPE, --surface_type SURFACE_TYPE<br />
Surface Type: land, sea, or coast<br />
-d, --debug Debug<br />
-T, --test run doctest on this module<br />
</syntaxhighlight><br />
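<br />
For example, a typical invocation might look like this (the coordinates, date, and surface type below are illustrative values only, not a tested configuration):<br />
<br />
<syntaxhighlight lang=bash><br />
$ ./era_genesis.py -X 144.96 -Y -37.81 -S 201401010000 -N 8 -s land -o namelist.scm<br />
</syntaxhighlight><br />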
<br />
The most important ones are <span style="font-family:monospace">-X</span> and <span style="font-family:monospace">-Y</span>, giving the longitude and latitude of the location, respectively. Without both of them, the script will not run. You can set a start date; if you don't, it will take the one that is given in the <span style="font-family:monospace">base.inp</span> namelist.<br />
The end date is chosen with these priorities: <br />
<br />
# Parameter given with the <span style="font-family:monospace">-E</span> option, if present.<br />
# (<span style="font-family:monospace">NUM</span>-1) * 6h after start date, if <span style="font-family:monospace">-N</span> present.<br />
# end date from the <span style="font-family:monospace">base.inp</span> file.<br />
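<br />
The second rule is easy to check by hand; this standalone sketch (GNU date, with an illustrative start date) reproduces the (NUM-1) * 6 hours arithmetic for NUM=5, since ERA-Interim data is 6-hourly:<br />
<br />
<syntaxhighlight lang=bash><br />
# (NUM-1) * 6 h after the start date, formatted as YYYYMMDDHHMM<br />
start="2014-01-01 00:00 UTC"<br />
num=5<br />
date -u -d "$start + $(( (num - 1) * 6 )) hours" "+%Y%m%d%H%M"<br />
</syntaxhighlight><br />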
<br />
<span style="font-family:monospace">-b</span>, <span style="font-family:monospace">-t</span>, and <span style="font-family:monospace">-o</span> default to <span style="font-family:monospace">base.inp</span>, <span style="font-family:monospace">template.scm</span>, and <span style="font-family:monospace">namelist.scm</span> respectively. <br />
<span style="font-family:monospace">-d</span> makes the script produce considerably more output. <br />
<span style="font-family:monospace">-T</span> is just for testing purposes.<br />
<br />
If <span style="font-family:monospace">-s</span> is not given, the script will try to determine whether a land, sea, or coast environment is selected by evaluating <span style="font-family:monospace">template.scm</span>, specifically the values of <span style="font-family:monospace">cntlscm -> land_points</span>, <span style="font-family:monospace">logic -> land_sea_mask</span>, <span style="font-family:monospace">logic -> soil_mask</span>, and <span style="font-family:monospace">rundata -> fland_ctile</span>.<br />
If the type of the surrounding grid points is at odds with the selected or determined surface type (e.g. surface type sea is selected, but all surrounding points are land), the script will give a warning or even fail.<br />
<br />
If all of this went well, you can jump from here right to '''Running SCM'''.<br />
<br />
==Compile Genesis== <br />
<br />
Note: On raijin, the old genesis is no longer required to convert ERA-Interim data into the namelist that SCM needs. Use era_genesis instead.<br />
<br />
<syntaxhighlight lang=bash><br />
$ cd /short/$PROJECT/$USER<br />
$ git clone https://github.com/coecms/genesis<br />
$ cd genesis/src<br />
$ module load intel-fc intel-cc netcdf<br />
$ make -f Makefile.raijin genesis<br />
</syntaxhighlight><br />
<br />
==Acquiring the Data== <br />
<br />
I just copied it from Greg:<br />
<syntaxhighlight><br />
$ mkdir -p /short/$PROJECT/$USER/SCM/Data<br />
$ cd /short/$PROJECT/$USER/SCM/Data<br />
$ cp /short/public/forGreg/era-i/*.nc .<br />
</syntaxhighlight><br />
<br />
==Convert the Data== <br />
<br />
===Rotate field=== <br />
<br />
Greg used xconv and trans to convert.<br />
<br />
I try this:<br />
[[:File:rotate_hemispheres.sh]]<br />
<br />
==Output of Greg's history== <br />
<br />
* XCONV V1.92 16-February-2006<br />
> Transpose grid, rename lat and lon.<br />
> Done<br />
* ncks -O -d initial_time0_hours,0,33 -d latitude,0,60 -d longitude,0,30 tQ_6hrs_pl_2014_01.nc stQ_6hrs_pl_2014_01.nc<br />
> Select a smaller grid.<br />
> Done<br />
* cdo invertlev stQ_6hrs_pl_2014_01.nc istQ_6hrs_pl_2014_01.nc<br />
> Invert Levels<br />
> Done<br />
* ncap -O -s initial_time0_hours=(float(initial_time0_hours)-1875888)/24. /short/p66/glr548/genesis/run_from_era_interim/istQ_6hrs_pl_2014_01.nc adum2.nc<br />
> Transpose time<br />
> Not done<br />
* ncrename -O -d lv_ISBL1,p -v lv_ISBL1,p -d initial_time0_hours,t -v initial_time0_hours,t -v Q_GDS0_ISBL,q adum3.nc adum4.nc<br />
> Rename:<br />
** level -> p<br />
** time -> t<br />
** var -> q<br />
* ncap -O -s q=float(q) adum4.nc adum4aB.nc<br />
> Make q into a float. It is already a float.<br />
* ncap -O -s p=float(p); t=float(t) adum4.nc adum4a.nc<br />
> Make p and t floats. They are already floats.<br />
* ncks -A -x adum4a.nc adum5.nc<br />
> Copy global attributes<br />
* ncks -A -v longitude adum4a.nc adum5.nc<br />
> Copy longitude<br />
* ncks -A -v latitude adum4a.nc adum5.nc<br />
> Copy latitude<br />
* ncks -A -v p adum4a.nc adum5.nc<br />
> Copy pressure level<br />
* ncks -A -v t adum4a.nc adum5.nc<br />
> Copy time<br />
* ncks -A -v q adum4a.nc adum5.nc<br />
> Copy var<br />
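<br />
A note on the ncap time offset above: 1875888 appears to be hours since 1800-01-01 00:00 UTC (the ERA-Interim time origin; this is an inference from the numbers, not something stated in the history). You can verify that it lands exactly on the start of the sample month:<br />
<br />
<syntaxhighlight lang=bash><br />
# 1875888 hours after 1800-01-01 00:00 UTC (GNU date)<br />
date -u -d "1800-01-01 00:00 UTC + 1875888 hours" "+%Y-%m-%d %H:%M"<br />
</syntaxhighlight><br />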
<br />
==Running SCM== <br />
<br />
Copy some auxiliary data:<br />
<br />
<syntaxhighlight lang=bash><br />
$ cp ~access/AccessModelExperimentLibrary/scm_access13/* .<br />
</syntaxhighlight><br />
<br />
Make changes to '''runscm''' (pointing it at the correct scm.exe) and '''dir_name''' (pointing it at the current directory).<br />
<br />
Run the SCM<br />
<br />
<syntaxhighlight lang=bash><br />
$ runscm<br />
</syntaxhighlight><br />
<br />
==Data Output== <br />
<br />
<syntaxhighlight lang=bash><br />
$ scm2nc cable_test.dat cable_test.nc<br />
</syntaxhighlight></div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=Introduction_to_UMUI&diff=182Introduction to UMUI2014-10-10T03:48:57Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>==Starting UMUI== <br />
<br />
Runs in the Unified Model are configured using a program called '''UMUI''', for Unified Model User Interface. This program is available on accessdev in two versions: '''umui''' - the default version provided by the Met Office and '''umuix''' - an improved version developed by CAWCR which provides features such as searching for variables.<br />
<br />
==Experiments== <br />
<br />
In the UMUI a single configuration is termed a '''job'''; each job is contained in a folder, termed an '''experiment'''. Experiments are identified by a four-letter key; normal runs begin with a '''v'''. Older jobs from accesscollab begin with a '''u''' for user jobs, or an '''s''' for standard jobs. These jobs can still be accessed by switching to Collab at the top of the window.<br />
<br />
[[File:umuix_accessdev.png|800x318px]]<br />
<br />
The display of experiments can be filtered both by identifier and by the user who created them, using the '''Filter''' button on the main screen (under the Search menu in umui); by default only your own experiments will be shown. You can create your own experiments by pressing the '''New Folder''' button in umuix (or going to Experiment->New in umui) and entering a description for your new experiment.<br />
<br />
To show the list of simulation jobs in an experiment click on the folder icon in the leftmost column. Each simulation has a single letter code, which gets combined with the experiment id to form a job id like '''uahla'''.<br />
<br />
==Simulations== <br />
<br />
Setting up a simulation from scratch is complex and not recommended. Usually you should copy an existing job that is close to what you want and then modify that according to your needs. It is possible to [[change the resolution of jobs]], create [[limited area models from larger regions | limited area models from larger regions]], [[UM Source Control | alter the source code of the UM]] and [[change what fields are output]], among other things. You can talk to the [[mailto:climate_help@nf.nci.org.au | CMS team]] to discuss any requirements you have for simulations.<br />
<br />
Standard N48 runs that the Met Office provides with each UM release are contained in experiment '''saaa''' (on Collab), these are a good place to start for your first run. Copy and paste the run into your own experiment using either the edit menu or right clicking and give it a new name. It's helpful to keep a record of which job the new one originated from, the name is a good place to do that.<br />
<br />
In umui copying a job is more complicated than in umuix: you have to select both the original job and the destination experiment and then click Job->Copy. Selections don't go away if you click on a new item; to deselect something you have to click it a second time. Make sure that you only have the job you want to copy selected, to avoid confusion.<br />
<br />
=Running a job= <br />
<br />
[[File:umuixjob.png|800x465px|Editing]]<br />
<br />
There are a few things to check before running a new job. Open the job with the '''Edit''' button (File->Open Read Write in umui) and you can see a tree that holds all of the settings for the job. Go to '''User Information and Target Machine->General Details''' by clicking on the folder icons in the left panel and make sure the '''User-id''' is $USER. You should set the Tic/Project code to blank to use your default NCI project, or you can enter a specific project code to use.<br />
<br />
[[File:raijin-umui-general.png]]<br />
<br />
==Compiling== <br />
<br />
We generally recommend that the UM be recompiled for each new experiment, as changing settings can change what code is required. To do this go to '''Compilation and Modifications->Compile Options for the UM Model''' and select the option '''Compile and Build the Named Executable then Stop'''. Press the '''Process''' button to create the scripts required to compile the job, then once that is done you can hit '''Submit''' to extract the source code and submit it to be compiled on Raijin. You should always hit Process after making any configuration changes to make sure the changes are put into the scripts.<br />
<br />
If you're running an ACCESS-based job you'll also need to compile the reconfiguration. See the section on [[Building the UM]] for details on how to set this up.<br />
<br />
Extracting the source code from subversion will take some time, after which you should get a message that the job was submitted, along with its PBS job id on Raijin. Compilation generally takes around 20 minutes; you can see the job running by logging onto Raijin and using the '''nqstat''' command to see your running jobs. Once this is finished you will find a .comp.leave file in the ~/um_output directory on Raijin with the compiler output; check at the bottom of the file that the compilation ran ok.<br />
<br />
==Running== <br />
<br />
With the executable compiled you can change to the option '''Run from Existing Executable''' in '''Compilation and Modifications->Compile Options for the UM Model'''. Process the job again then hit submit to send the job to Raijin. Once the run is done you will be able to find the output in the folder '''$DATAOUTPUT/$USER/$JOBID'''.<br />
<br />
==Common Errors== <br />
<br />
There are a few common errors that pop up more often than they should.<br />
<br />
===Errors trying to extract=== <br />
<br />
When compiling the UM, the first step is for the UMUI(x) to extract the source code. If an error occurs, check for these things:<br />
<br />
# Did the error occur during the base, model, or reconfiguration extraction?<br />
# Check the extraction log in ~/um_output/<JOBID>/<umbase or ummodel or umrecon>/ext.out. The final lines typically show at which command the error occurred.<br />
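<br />
For a quick look, the tail of the log is usually enough (replace <JOBID> with your job id, and ummodel with umbase or umrecon as appropriate):<br />
<br />
<syntaxhighlight><br />
$ tail -n 20 ~/um_output/<JOBID>/ummodel/ext.out<br />
</syntaxhighlight><br />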
<br />
====SVN errors==== <br />
<br />
If you get a permission denied error from SVN, it might be because svn needed your password and didn't get it. The solution is to run<br />
<br />
<syntaxhighlight><br />
$ svn ls https://access-svn.nci.org.au/svn/um/branches/<br />
</syntaxhighlight><br />
<br />
If you get asked for a password, you need to supply your NCI password. And then it will ask you whether it should store it in plain text.<br />
As much as I hate to say it: you need to answer 'yes'.<br />
<br />
And immediately afterwards, you have to run the command:<br />
<br />
<syntaxhighlight><br />
$ chmod -R go-rwx $HOME/.subversion<br />
</syntaxhighlight><br />
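<br />
If you want to confirm the permissions afterwards, the same pattern can be demonstrated on a scratch directory (a throwaway mktemp directory here, not your real ~/.subversion):<br />
<br />
<syntaxhighlight lang=bash><br />
# After chmod -R go-rwx, only the owner keeps access (drwx------)<br />
dir=$(mktemp -d)<br />
touch "$dir/auth"<br />
chmod -R go-rwx "$dir"<br />
ls -ld "$dir" | cut -c1-10<br />
</syntaxhighlight><br />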
<br />
====SCP and RSYNC errors==== <br />
<br />
If you get an error like this:<br />
<br />
<syntaxhighlight><br />
mkdir: cannot create directory `/abc123': Permission denied<br />
</syntaxhighlight><br />
<br />
(where abc123 is your username), the most likely explanation is that you haven't set the DATAOUTPUT environment variable on accessdev.<br />
If you are using bash, add this line to your $HOME/.bashrc file:<br />
<br />
<syntaxhighlight><br />
export DATAOUTPUT=/short/${PROJECT}/${USER}/UM_ROUTDIR<br />
</syntaxhighlight><br />
<br />
If you are using tcsh or csh, add this line to your $HOME/.login<br />
<br />
<syntaxhighlight><br />
setenv DATAOUTPUT /short/${PROJECT}/${USER}/UM_ROUTDIR<br />
</syntaxhighlight><br />
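<br />
Either way, you can sanity-check the expansion before logging out (w35 and abc123 below are placeholder values):<br />
<br />
<syntaxhighlight lang=bash><br />
# Should print /short/w35/abc123/UM_ROUTDIR<br />
PROJECT=w35 USER=abc123 bash -c 'echo "/short/${PROJECT}/${USER}/UM_ROUTDIR"'<br />
</syntaxhighlight><br />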
<br />
Then log out of accessdev and log back in, open up umuix again, and check whether the error went away.<br />
[[Category:Unified Model]]</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS0.X_Slab_Reosc&diff=19ACCESS0.X Slab Reosc2014-07-30T05:46:56Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>[[ListOfExperiments | (Back to List of Experiments)]]<br />
<br />
==Description== <br />
A pre ACCESS1.0 version of the UM 7.3 changed from N96 to N48 resolution. The SST is simulated with a slab ocean over all open ocean points and additionally in the tropical Pacific with the Recharge Oscillator toy model. This is designed to do long (many years) global climate simulations.<br />
<br />
==Owner== <br />
Yanshan Yu, Scott Wales and [[mailto:dietmar.dommenget@monash.edu | Dietmar Dommenget]]<br />
<br />
==History== <br />
A pre ACCESS1.0 version of the UM 7.3 changed from N96 to N48 resolution. Additionally the slab ocean and the tropical Pacific with the Recharge Oscillator toy model equations are included in the UM-code to simulate the SST and thermocline depth in the tropical Pacific.<br />
<br />
==Use Cases== <br />
This is designed to do long (many years) global climate simulations. In open ocean points it simulates SST variability with a 50 m deep slab ocean, and in the tropical Pacific the Recharge Oscillator toy model simulates ENSO-like variability. A mask can be used to hold the SST at desired regions at prescribed values.<br />
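<br />
For a rough sense of scale of the slab response (the 50 m depth is the configuration above; seawater density and heat capacity are standard values, and the 10 W/m2 net flux is purely illustrative):<br />
<br />
<syntaxhighlight lang=bash><br />
# Slab ocean warming rate dT/dt = Q / (rho * cp * h), over one 30-day month<br />
awk 'BEGIN { Q=10; rho=1025; cp=3985; h=50; days=86400*30;<br />
             printf "%.3f K/month\n", Q/(rho*cp*h)*days }'<br />
</syntaxhighlight><br />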
<br />
==Modules of the Experiment== <br />
{| <br />
| Atmosphere || UM 7.3 <br />
|-<br />
| Land Surface || MOSES <br />
|-<br />
| Ocean || Slab Ocean over all open ocean points and the tropical Pacific with the Recharge Oscillator toy model with a mask for prescribing SST at desired locations <br />
|-<br />
| Sea Ice || Prescribed <br />
|-<br />
|}<br />
<br />
==Performance== <br />
On NCI-RAIJIN it computes about (slightly more than) 1yr per 1hr CPU time with 48 processors.<br />
<br />
==Job Submission==<br />
* uaovc (accesscollab) for building<br />
<br />
==Publications== <br />
* [http://adsabs.harvard.edu/abs/2013AGUFMOS41D1843Y | Yu, Y., D. Dommenget, C. Frauen, G. Wang and S. Wales, 2014: ENSO diversity as a result of the recharge oscillator interacting with the slab ocean. J. Climate , submitted.]</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS1.3_N48_KPP_Ocean&diff=27ACCESS1.3 N48 KPP Ocean2014-07-30T05:45:20Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>[[ListOfExperiments | (Back to List of Experiments)]]<br />
<br />
==Description== <br />
ACCESS1.3 version of the UM 7.3 changed from N96 to N48 resolution coupled to a single column KPP-ocean model via the OASIS coupler. This is designed to do long (many years) global climate simulations with an upper ocean temperature variability that does not include any lateral interactions in the ocean.<br />
<br />
==Owner== <br />
Dietmar Dommenget, Byju Pookandy and Scott Wales<br />
<br />
==History== <br />
ACCESS1.3 version of the UM 7.3 and OASIS coupled with the MOM ocean model and sea ice model replaced with the KPP-model. Also changed from N96 to N48 resolution.<br />
<br />
==Use Cases== <br />
This is designed to do long (many years) global climate simulations. In open ocean points the upper ocean is simulated by the vertical single-column KPP mixing scheme. The mean temperature and salinity profiles are controlled by flux correction and a weak Newtonian damping for salinity.<br />
<br />
==Modules of the Experiment== <br />
{| <br />
| Atmosphere || UM 7.3 <br />
|-<br />
| Land Surface || CABLE <br />
|-<br />
| Ocean || KPP-ocean; single columns; no lateral interactions. <br />
|-<br />
| Sea Ice || Prescribed <br />
|-<br />
|}<br />
<br />
==Performance== <br />
On NCI-RAIJIN it computes about (slightly more than) 1yr per 1hr CPU time with 64 processors.<br />
<br />
==Job submission== <br />
* ualdh (accesscollab) for building</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS0.X_Slab_Ocean&diff=18ACCESS0.X Slab Ocean2014-07-30T05:44:18Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>[[ListOfExperiments | (Back to List of Experiments)]]<br />
<br />
==Description== <br />
A pre ACCESS1.0 version of the UM 7.3 changed from N96 to N48 resolution with a slab ocean included in the UM-code. This is designed to do long (many years) global climate simulations. It is the basis for coupled simulations with simplified ocean components.<br />
<br />
==Owner== <br />
Claudia Frauen, Mike Resny, Scott Wales and [[mailto:dietmar.dommenget@monash.edu | Dietmar Dommenget]]<br />
<br />
==History== <br />
A pre ACCESS1.0 version of the UM 7.3 changed from N96 to N48 resolution with a slab ocean included in the UM-code.<br />
<br />
==Use Cases== <br />
This is designed to do long (many years) global climate simulations. In open ocean points it simulates SST variability with a 50 m deep slab ocean. A mask can be used to hold the SST at desired regions at prescribed values.<br />
<br />
==Modules of the Experiment== <br />
{| <br />
| Atmosphere || UM7.3 <br />
|-<br />
| Land Surface || MOSES <br />
|-<br />
| Ocean || Slab Ocean with mask for prescribing SST at desired locations <br />
|-<br />
| Sea Ice || Prescribed <br />
|-<br />
|}<br />
<br />
==Performance== <br />
On NCI-RAIJIN it computes about (slightly more than) 1yr per 1hr CPU time with 48 processors.<br />
<br />
==Job submission== <br />
ualxa (accessdev) for building<br />
<br />
==Publications== <br />
* [http://dx.doi.org/10.1175/JCLI-D-13-00757.1 | Frauen, C., D. Dommenget , M. Rezny and S. Wales, 2014: Analysis of the Non-Linearity of El Nino Southern Oscillation Teleconnections. J. Climate, in press (online).]<br />
* [http://dx.doi.org/10.1007/s00382-014-2154-0 | Wang, G., D. Dommenget and C. Frauen, 2014: An Evaluation of the CMIP3 and CMIP5 simulations in their skill of simulating the spatial structure of SST variability. Climate Dynamics, in press (online).]<br />
* [http://adsabs.harvard.edu/abs/2013AGUFMOS41D1843Y | Yu, Y., D. Dommenget, C. Frauen, G. Wang and S. Wales, 2014: ENSO diversity as a result of the recharge oscillator interacting with the slab ocean. J. Climate , submitted.]<br />
* [http://users.monash.edu.au/~dietmard/papers/tyrrell.et.al.land.sea.contrast.submitted.cdym2013.pdf | Tyrrell, N. L., D. Dommenget , C. Frauen, S. Wales and M. Rezny, 2014: The Influence of Global Sea Surface Temperature Variability on the Large-Scale Land Surface Temperature. Climate Dynamics, submitted.]</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS1.3_N96_Slab_Ocean&diff=29ACCESS1.3 N96 Slab Ocean2014-07-30T05:32:51Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>[[ListOfExperiments | (Back to List of Experiments)]]<br />
<br />
==Description== <br />
ACCESS1.3 version of the UM 7.3 with a slab ocean included in the UM-code. This is designed to do long (many years) global climate simulations. It is the basis for SST sensitivity studies.<br />
<br />
==Owner== <br />
Claudia Frauen and Scott Wales<br />
<br />
==History== <br />
ACCESS1.3 version of the UM 7.3 changed with a slab ocean included in the UM-code.<br />
<br />
==Use Cases== <br />
This is designed to do long (many years) global climate simulations. In open ocean points it simulates SST variability with a 50 m deep slab ocean. A mask can be used to hold the SST at desired regions at prescribed values.<br />
<br />
==Modules of the Experiment== <br />
{| <br />
| Atmosphere || UM7.3 <br />
|-<br />
| Land Surface || CABLE <br />
|-<br />
| Ocean || Slab Ocean with mask for prescribing SST at desired locations <br />
|-<br />
| Sea Ice || Prescribed <br />
|-<br />
|}<br />
<br />
==Performance== <br />
On NCI-RAIJIN it computes about (slightly more than) 1yr per 1hr CPU time with 48 processors.</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS1.3_N48_Slab_Ocean&diff=28ACCESS1.3 N48 Slab Ocean2014-07-30T05:30:49Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>[[ListOfExperiments | (Back to List of Experiments)]]<br />
<br />
==Description== <br />
ACCESS1.3 version of the UM 7.3 changed from N96 to N48 resolution with a slab ocean included in the UM-code. This is designed to do long (many years) global climate simulations. It is the basis for coupled simulations with simplified ocean components.<br />
<br />
==Owner== <br />
Claudia Frauen and Scott Wales<br />
<br />
==History== <br />
ACCESS1.3 version of the UM 7.3 changed from N96 to N48 resolution with a slab ocean included in the UM-code.<br />
<br />
==Use Cases== <br />
This is designed to do long (many years) global climate simulations. In open ocean points it simulates SST variability with a 50 m deep slab ocean. A mask can be used to hold the SST at desired regions at prescribed values.<br />
<br />
==Modules of the Experiment== <br />
{| <br />
| Atmosphere || UM 7.3 <br />
|-<br />
| Land Surface || CABLE <br />
|-<br />
| Ocean || Slab Ocean with mask for prescribing SST at desired locations <br />
|-<br />
| Sea Ice || Prescribed <br />
|-<br />
|}<br />
<br />
==Performance== <br />
On NCI-RAIJIN it computes about (slightly more than) 1yr per 1hr CPU time with 48 processors.</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS1.3_N48&diff=26ACCESS1.3 N482014-07-30T05:16:11Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>[[ListOfExperiments | (Back to List of Experiments)]]<br />
<br />
==Description== <br />
ACCESS1.3 version of the UM 7.3 changed from N96 to N48 resolution. This is designed to do long (many years) global climate simulations. It is the basis for coupled simulations with simplified ocean components.<br />
<br />
==Owner== <br />
Claudia Frauen and Scott Wales<br />
<br />
==History== <br />
Originated from [[ACCESS1.3 AMIP | saaqb]], the N96 version of ACCESS 1.3 AMIP.<br />
Ancillaries have been regridded to the new resolution.<br />
<br />
Job set up on Vayu by Scott<br />
<br />
==Use Cases== <br />
This is designed to do long (many years) global climate simulations. It is the basis for coupled simulations with simplified ocean components.<br />
<br />
==Modules of the Experiment== <br />
{| <br />
| Atmosphere || UM 7.3 <br />
|-<br />
| Land Surface || CABLE 1.8 <br />
|-<br />
| Ocean || Prescribed (AMIP) <br />
|-<br />
| Sea Ice || Prescribed <br />
|-<br />
| Configuration || HadGEM3 (beta) <br />
|-<br />
|}<br />
<br />
==Performance== <br />
On NCI-RAIJIN it computes about (slightly more than) 1yr per 1hr CPU time with 48 processors.<br />
<br />
==Job submission== <br />
* Accesscollab:<br />
** saaqn (Vayu configuration)<br />
<br />
==Code Repositories== <br />
https://access-svn.nci.org.au/svn/um/branches/dev/saw562/amip/access-1.3/src</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS0.X-N48&diff=20ACCESS0.X-N482014-07-30T05:07:42Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>[[ListOfExperiments | (Back to List of Experiments)]]<br />
<br />
==Description== <br />
A pre ACCESS1.0 version of the UM 7.3 changed from N96 to N48 resolution. This is designed to do long (many years) global climate simulations. It is the basis for coupled simulations with simplified ocean components.<br />
<br />
==Owner== <br />
Claudia Frauen, Mike Resny, Scott Wales and [[mailto:dietmar.dommenget@monash.edu | Dietmar Dommenget]]<br />
<br />
==History== <br />
A pre ACCESS1.0 version of the UM 7.3 changed from N96 to N48 resolution.<br />
<br />
==Use Cases== <br />
This is designed to do long (many years) global climate simulations. It is the basis for coupled simulations with simplified ocean components.<br />
<br />
==Modules of the Experiment== <br />
{| <br />
| Atmosphere || UM 7.3, but likely slightly different from ACCESS 1.0, with N48 resolution <br />
|-<br />
| Land Surface || MOSES <br />
|-<br />
| Ocean || Prescribed <br />
|-<br />
| Sea Ice || Prescribed <br />
|-<br />
|}<br />
<br />
==Performance== <br />
On NCI-RAIJIN it computes about (slightly more than) 1yr per 1hr CPU time with 48 processors.<br />
<br />
==Publications== <br />
Tyrrell, N. L., D. Dommenget, C. Frauen, S. Wales and M. Rezny, 2014: The Influence of Global Sea Surface Temperature Variability on the Large-Scale Land Surface Temperature. Climate Dynamics, submitted.</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS1.3b_AMIP_Medlyn_stomatal_conductance_in_CABLE&diff=34ACCESS1.3b AMIP Medlyn stomatal conductance in CABLE2014-07-04T01:24:37Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>[[ListOfExperiments | (Back to List of Experiments)]]<br />
<br />
==Owner== <br />
Ruth Lorenz, [mailto:r.lorenz@unsw.edu.au r.lorenz@unsw.edu.au]<br />
<br />
==History== <br />
Based on Ruth Lorenz’s ACCESS1.3b uakpe run. Sea surface temperature and sea ice input files for historical period provided by Dan Copsey (Met Office).<br />
Set up on Raijin by Ruth Lorenz and Jatin Kala.<br />
<br />
This experiment is not yet operational.<br />
<br />
==Special Ancillary Files== <br />
* /short/public/rzl561/input_data/sst_amip_1870-2012_n96<br />
* /short/public/rzl561/input_data/seaice_amip_1870-2012_n96<br />
<br />
==Use cases== <br />
Online test for the new CABLE stomatal conductance calculation (Medlyn model instead of Leuning). Contact Jatin Kala ([mailto:j.kala@unsw.edu.au j.kala@unsw.edu.au]) and/or Martin De Kauwe ([mailto:mdekauwe@gmail.com mdekauwe@gmail.com]) for more information.<br />
==Modules of the Experiment== <br />
{| <br />
| Atmosphere || UM7.3 <br />
|-<br />
| Land Surface || CABLE 2.0.1 <br />
|-<br />
| Ocean || Prescribed (AMIP) <br />
|-<br />
| Configuration || HadGEM3 (beta) <br />
|-<br />
|}<br />
<br />
==Performance== <br />
With 8x16 Cores, 1 year is modelled in roughly 2h40m<br />
<br />
==Job submission== <br />
Accessdev: vacya, vacyb, etc<br />
<br />
==Code repositories== <br />
[https://access-svn.nci.org.au/trac/um/browser/branches/pkg/Rel/ACCESS-1.3_CABLE-2.0_replacement]<br />
[https://trac.nci.org.au/trac/cable/browser/branches/Share/CABLE-2.0.1-Tagged-plus-Medlyn-Stom-Param]<br />
<br />
==Comments== <br />
A new cable.nml file is necessary to switch between the default (Leuning) and the Medlyn stomatal resistance model. A new def_veg_params.txt file, provided with the CABLE code, is also necessary (change in cable.nml).</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS1.3b_AMIP_new_CABLE_hydrology&diff=35ACCESS1.3b AMIP new CABLE hydrology2014-07-04T01:24:10Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>[[ListOfExperiments | (Back to List of Experiments)]]<br />
<br />
==Owner== <br />
Ruth Lorenz, [mailto:r.lorenz@unsw.edu.au r.lorenz@unsw.edu.au]<br />
<br />
==History== <br />
Based on Ruth Lorenz’s ACCESS1.3b uakpe run. Sea surface temperature and sea ice input files for historical period provided by Dan Copsey (Met Office).<br />
Set up on Raijin by Ruth Lorenz and Mark Decker.<br />
<br />
This experiment is not yet operational.<br />
<br />
==Special Ancillary Files== <br />
* /short/public/rzl561/input_data/sst_amip_1870-2012_n96<br />
* /short/public/rzl561/input_data/seaice_amip_1870-2012_n96<br />
<br />
==Use cases== <br />
Online test for new CABLE hydrology by Mark Decker<br />
<br />
==Modules of the Experiment== <br />
{| <br />
| Atmosphere || UM7.3 <br />
|-<br />
| Land Surface || CABLE 2.0 <br />
|-<br />
| Ocean || Prescribed (AMIP) <br />
|-<br />
| Configuration || HadGEM3 (beta) <br />
|-<br />
|}<br />
<br />
==Performance== <br />
With 8x16 Cores, 1 year is modelled in roughly 2h40m<br />
<br />
==Job submission== <br />
Accesscollab: uaqma<br />
<br />
==Code repositories== <br />
[https://access-svn.nci.org.au/trac/um/browser/branches/dev/mrd561/r6310_ACCESS-1.3_CABLE-2.0_replacement]<br />
[https://trac.nci.org.au/trac/cable/browser/branches/Users/mrd561/CABLE-2.0]<br />
<br />
==Comment== <br />
A new cable.nml file is necessary to switch between the default and the new hydrology, as well as to add the parameters needed for the new module.</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS1.3b_AMIP&diff=31ACCESS1.3b AMIP2014-07-04T01:23:12Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>[[ListOfExperiments | (Back to List of Experiments)]]<br />
<br />
==Description== <br />
HadGEM2 with CABLE2.0<br />
<br />
==Owner== <br />
Ruth Lorenz, [mailto:r.lorenz@unsw.edu.au r.lorenz@unsw.edu.au]<br />
<br />
==History== <br />
Based on Greg Roff's (BoM) ACCESS 1.3 job on Solar, but with CABLE2.0 instead of 1.8. Updated sea surface temperature and sea ice input files to cover longer time period (1950-2011).<br />
Set up on Raijin by Ruth Lorenz with help from Scott Wales and Jhan Srbinovsky. Updated sea surface temperature and sea ice files provided by Dan Copsey (Met Office).<br />
<br />
==Specific Ancillary files== <br />
/short/public/rzl561/input_data/sst_amip_1870-2012_n96<br />
/short/public/rzl561/input_data/seaice_amip_1870-2012_n96<br />
<br />
==Use Cases== <br />
Control run as benchmark for following runs with CABLE2.0<br />
<br />
==Modules of the Experiment== <br />
{| <br />
| Atmosphere || UM 7.3 <br />
|-<br />
| Land Surface || CABLE 2.0 <br />
|-<br />
| Ocean || Prescribed (AMIP) <br />
|-<br />
| Configuration || HadGEM3 (beta) <br />
|-<br />
|}<br />
<br />
==Performance== <br />
With 8x16 Cores, 1 year is modelled in roughly 2h40m<br />
<br />
==Job submission== <br />
uakpe (accesscollab)<br />
vacfa (accessdev)<br />
<br />
==Code Repositories== <br />
[https://access-svn.nci.org.au/trac/um/browser/branches/pkg/Rel/ACCESS-1.3_CABLE-2.0_replacement]<br />
[https://trac.nci.org.au/trac/cable/browser/tags/CABLE-2.0]<br />
<br />
==Publications== <br />
Lorenz, R., Pitman, A. J., Donat, M. G., Hirsch, A. L., Kala, J., Kowalczyk, E. A., Law, R. M., and Srbinovsky, J.: Representation of climate extreme indices in the ACCESS1.3b coupled atmosphere–land surface model, Geosci. Model Dev., 7, 545-567, [http://www.geosci-model-dev.net/7/545/2014/gmd-7-545-2014.html doi:10.5194/gmd-7-545-2014], 2014.</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS1.3_AMIP&diff=24ACCESS1.3 AMIP2014-07-04T01:22:54Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>[[ListOfExperiments | (Back to List of Experiments)]]<br />
<br />
==Description== <br />
ACCESS1.3 model for the atmospheric model intercomparison project.<br />
<br />
==Owner== <br />
<Name and E-Mail address of person(s) mostly responsible for this experiment><br />
<br />
==History== <br />
Originated from Greg Roff's (BoM) ACCESS 1.3 job on Solar, designed to be an exact replica for Vayu.<br />
Note that due to different libraries on Raijin this is no longer an exact match to the Solar runs.<br />
<br />
Set up on Vayu/Raijin by Scott<br />
<br />
==Use Cases== <br />
<What is the purpose of the Experiment, what other use cases could you envision?><br />
<br />
==Modules of the Experiment== <br />
{| <br />
| Atmosphere || UM7.3 <br />
|-<br />
| Land Surface || CABLE 1.8 <br />
|-<br />
| Ocean || Prescribed (AMIP) <br />
|-<br />
| Configuration || HadGEM3 (beta) <br />
|-<br />
|}<br />
<br />
==Performance== <br />
<Just some estimates on how many days/months/years you could model in a certain time frame.><br />
<Obviously you'd have to mention the number of cores for this.><br />
<br />
==Job submission== <br />
<br />
* Accessdev<br />
** vabha<br />
<br />
* Accesscollab<br />
** saaqb (Vayu)<br />
** sabqa<br />
<br />
==Code Repositories== <br />
https://access-svn.nci.org.au/svn/um/branches/pkg/Rel/ACCESS1.3/src<br />
<br />
==Publications== <br />
<Are there any publications that this experiment contributed to? If yes, give us a list.></div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=ACCESS1.0_AMIP&diff=21ACCESS1.0 AMIP2014-07-04T01:22:18Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div>[[ListOfExperiments | (Back to List of Experiments)]]<br />
<br />
==Description== <br />
ACCESS1.0 run for atmospheric model intercomparison project<br />
<br />
==Owner== <br />
<Name and E-Mail address of person(s) mostly responsible for this experiment><br />
<br />
==History== <br />
Originated from Greg Roff's (BoM) ACCESS 1.0 job on Solar<br />
<br />
Set up on Vayu by Scott<br />
<br />
==Use Cases== <br />
<What is the purpose of the Experiment, what other use cases could you envision?><br />
<br />
==Modules of the Experiment== <br />
{| <br />
| Atmosphere || UM7.3 <br />
|-<br />
| Land Surface || MOSES <br />
|-<br />
| Ocean || Prescribed (AMIP) <br />
|-<br />
| Configuration || HadGEM2 <br />
|-<br />
|}<br />
<br />
==Performance== <br />
<Just some estimates on how many days/months/years you could model in a certain time frame.><br />
<Obviously you'd have to mention the number of cores for this.><br />
<br />
==Job submission== <br />
<br />
* Accesscollab<br />
** saaqa (Vayu)<br />
<br />
==Code Repositories== <br />
https://access-svn.nci.org.au/svn/um/branches/dev/han32j/VN7.3-HadGEM2-1.1/src<br />
<br />
==Publications== <br />
<Are there any publications that this experiment contributed to? If yes, give us a list.><br />
</div>Hwolffhttp://climate-cms.wikis.unsw.edu.au/index.php?title=Oasis_Coupling_with_the_UM&diff=271Oasis Coupling with the UM2014-03-18T04:22:23Z<p>Hwolff: Imported from Wikispaces</p>
<hr />
<div><br />
The UM uses Oasis3 as its coupler to communicate with other models.<br />
<br />
In order to use Oasis3 couplings with the UM you'll need:<br />
<br />
* A list of variables from each model that you want to couple<br />
* Oasis ancillary files '''grids.nc''' and '''masks.nc''' to define grids<br />
* The Oasis configuration file '''namcouple''' for the coupling<br />
<br />
=ACCESS Coupled Variables= <br />
<br />
The variables exchanged by the ACCESS atmosphere model in a CMIP configuration are listed in the CMIP Fields section below.<br />
<br />
The variables are processed by the STASH system before they are exported to Oasis, allowing you to send mean fields to the coupler. Oasis can also process fields itself if needed.<br />
<br />
=Ancillary Files= <br />
<br />
Oasis requires a few ancillary files to define the grids that variables are defined on. This makes sure that variables are sent to the correct locations, as well as providing information for interpolation if necessary. Models may be set up to generate these ancillary files themselves, or they may require them to be generated externally.<br />
<br />
Grids have a 4-character identifier. Within the files the meaning of a variable is identified by a suffix, e.g. 'n48t.lon' is the longitude variable for grid 'n48t'. The number of grid points is NX x NY. Grids don't have to be regular, though Oasis expects each grid cell to have 4 corners.<br />
<br />
The ancillary files contain the following variables for each grid used in the model:<br />
<br />
==grids.nc== <br />
<br />
* float GRID.lat(NX, NY): Latitude of every grid point<br />
* float GRID.lon(NX, NY): Longitude of every grid point<br />
* float GRID.clat(4, NX, NY): Latitude of each corner of the grid cell (in a counter-clockwise sense). Optional, only used for some interpolations<br />
* float GRID.clon(4, NX, NY): Longitude of each corner of the grid cell. Like 'clat' this is optional<br />
<br />
==masks.nc== <br />
<br />
* int GRID.msk(NX, NY) : Coupling mask for the grid. Values > 0 indicate masked (uncoupled) points, while values = 0 indicate active (coupled) points<br />
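The naming and shape conventions above can be captured in a small check. This is an illustrative sketch only: the ancillary_names and check_shapes helpers are hypothetical (not part of Oasis), and each file is modelled as a mapping of variable name to shape rather than being read with a NetCDF library.<br />

```python
# Hypothetical helpers illustrating the grids.nc / masks.nc conventions
# described above; a real check would open the files with a NetCDF library.

def ancillary_names(grid):
    """Expected variable names for a 4-character grid identifier."""
    assert len(grid) == 4, "Oasis grid identifiers are 4 characters"
    return {
        "grids.nc": [grid + ".lat", grid + ".lon",
                     grid + ".clat", grid + ".clon"],  # corner vars optional
        "masks.nc": [grid + ".msk"],
    }

def check_shapes(variables, grid, nx, ny):
    """Check that the variables present have (NX, NY) shapes,
    with a leading dimension of 4 corners per cell for clat/clon."""
    expected = {
        grid + ".lat": (nx, ny),
        grid + ".lon": (nx, ny),
        grid + ".clat": (4, nx, ny),
        grid + ".clon": (4, nx, ny),
        grid + ".msk": (nx, ny),
    }
    return all(variables[name] == shape
               for name, shape in expected.items() if name in variables)
```

For example, a 'n48t.clat' variable without the leading corner dimension would fail the shape check, while lat/lon/msk variables of shape (NX, NY) pass.<br />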
<br />
='''namcouple''' File= <br />
<br />
The '''namcouple''' file is Oasis' configuration file; it defines which fields are sent to which model.<br />
<br />
This file can be rather fiddly to get right; if there are any errors the run will crash. It can be helpful to run Oasis stand-alone to check that it's working (e.g. just run oasis3.MPI1.x without mpirun). Totalview can also help to check what's happening when the file is read.<br />
<br />
Comments in the file begin with a hash (<span style="font-family:monospace">#</span>). Significant lines should begin with a space character, blank lines are not allowed. The namcouple file is split up into a number of sections, whose names are in all-caps and begin with a dollar sign.<br />
<br />
A sample namcouple file follows, with explanation below:<br />
<syntaxhighlight><br />
# Max SEQ value<br />
$SEQMODE<br />
1<br />
#<br />
# MPI information<br />
$CHANNEL<br />
# MPI version 1 & unbuffered sends<br />
MPI1 NOBSEND<br />
# nprocs, coupled procs & args<br />
1 1 a<br />
1 1 b<br />
#<br />
# Number of fields to couple<br />
$NFIELDS<br />
2<br />
#<br />
# Name of the job<br />
$JOBNAME<br />
TEST<br />
#<br />
# Submodel count, names and output streams<br />
$NBMODEL<br />
2 a b 100 100<br />
#<br />
# Number of seconds to run for<br />
$RUNTIME<br />
2<br />
#<br />
# Starting date<br />
$INIDATE<br />
10000101<br />
#<br />
# Info in binary files<br />
$MODINFO<br />
NOT<br />
#<br />
# Max log level<br />
$NLOGPRT<br />
2<br />
#<br />
# Gregorian calendar<br />
$CALTYPE<br />
1<br />
#<br />
# Coupled fields<br />
$STRINGS<br />
Bbt Abt 1 1 4 rest.nc EXPOUT<br />
n96t n48t<br />
P 0 P 0<br />
LOCTRANS CHECKIN SCRIPR CHECKOUT<br />
INSTANT<br />
INT=0<br />
CONSERV LR SCALAR LATLON 10 FRACAREA FIRST<br />
INT=0<br />
Aat Bat 1 1 4 rest.nc EXPOUT<br />
n48t n96t<br />
P 0 P 0<br />
LOCTRANS CHECKIN SCRIPR CHECKOUT<br />
INSTANT<br />
INT=0<br />
CONSERV LR SCALAR LATLON 10 FRACAREA FIRST<br />
INT=0<br />
</syntaxhighlight><br />
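The layout rules above (comments start with '#', section keywords start with '$', significant lines are indented) can be illustrated with a small hand-rolled parser. This sketch is not part of Oasis and only splits the file into sections:<br />

```python
# Illustrative (hypothetical) namcouple section splitter: '#' starts a
# comment, '$NAME' opens a section, other non-blank lines belong to the
# most recently opened section.

def split_namcouple(text):
    """Split namcouple text into a dict of section name -> list of lines."""
    sections, current = {}, None
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip comments (and blanks, which Oasis itself rejects)
        if stripped.startswith("$"):
            current = stripped.lstrip("$")
            sections[current] = []
        elif current is not None:
            sections[current].append(stripped)
    return sections
```

Applied to the sample above, split_namcouple(text)['NFIELDS'] gives ['2'] and split_namcouple(text)['STRINGS'] gives the list of field-definition lines.<br />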
<br />
==$SEQMODE== <br />
<br />
Oasis allows you to explicitly order communications by giving them a sequence index, otherwise communication has to happen in the order that fields are defined in the namcouple file. The value here is the maximum sequence number used.<br />
<br />
==$CHANNEL== <br />
<br />
Defines the MPI communication. The first line gives the MPI version and send options (here MPI1 with unbuffered sends, NOBSEND); it is followed by one line per model giving its total number of processes, the number of processes involved in the coupling, and any command-line arguments for that model.<br />
<br />
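Going by the comments in the sample namcouple above ('MPI version 1 & unbuffered sends', 'nprocs, coupled procs & args'), the $CHANNEL lines can be unpacked as follows. parse_channel is a hypothetical helper for illustration only, not part of Oasis:<br />

```python
# Hypothetical helper: interpret the lines of a $CHANNEL section as laid
# out in the sample namcouple above.

def parse_channel(lines):
    """First line: MPI technique and options; then one line per model
    with its process count, coupled process count and arguments."""
    first = lines[0].split()
    models = []
    for line in lines[1:]:
        fields = line.split()
        models.append({"nprocs": int(fields[0]),
                       "coupled_procs": int(fields[1]),
                       "args": fields[2:]})
    return {"technique": first[0], "options": first[1:], "models": models}
```

For the sample above this gives technique 'MPI1', options ['NOBSEND'], and two models each running one process, with one process involved in the coupling.<br />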
=CMIP Fields= <br />
namcouple file /short/w97/ACCESS1.3_EXP_CMIP5/work/um_coupled/hPI-c01a/CPL_RUNDIR/namcouple<br />
==atm -> ice== <br />
{| <br />
! Source !! Dest !! CF Index !! CF Name !! Unit <br />
|-<br />
| heatflux || thflx_i || 5 || surface_downward_heat_flux || W m-2 <br />
|-<br />
| pen_sol || pswflx_i || 38 || surface_net_downward_shortwave_flux || W m-2 <br />
|-<br />
| runoff || runoff_i || 32 || water_flux_into_ocean_from_rivers || kg m-2 s-1 <br />
|-<br />
| wme || wme_i || 37 || surface_energy_flux_into_ocean_due_to_wind_mixing || W m-2 <br />
|-<br />
| train || rain_i || 27 || rainfall_flux || kg m-2 s-1 <br />
|-<br />
| tsnow || snow_i || 40 || snow_fall_flux || kg m-2 s-1 <br />
|-<br />
| evap2d || evap_i || 25 || water_evaporation_flux || 1 <br />
|-<br />
| lhflx || lhflx_i || 355 || surface_downward_latent_heat_flux || W m-2 <br />
|-<br />
| tmlt01 || tmlt01_i || 467 || no_467_unknown_field_currently || xxx <br />
|-<br />
| tmlt02 || tmlt02_i || 467 || no_467_unknown_field_currently || xxx <br />
|-<br />
| tmlt03 || tmlt03_i || 467 || no_467_unknown_field_currently || xxx <br />
|-<br />
| tmlt04 || tmlt04_i || 467 || no_467_unknown_field_currently || xxx <br />
|-<br />
| tmlt05 || tmlt05_i || 466 || no_466_unknown_field_currently || xxx <br />
|-<br />
| bmlt01 || bmlt01_i || 466 || no_466_unknown_field_currently || xxx <br />
|-<br />
| bmlt02 || bmlt02_i || 466 || no_466_unknown_field_currently || xxx <br />
|-<br />
| bmlt03 || bmlt03_i || 466 || no_466_unknown_field_currently || xxx <br />
|-<br />
| bmlt04 || bmlt04_i || 466 || no_466_unknown_field_currently || xxx <br />
|-<br />
| bmlt05 || bmlt05_i || 466 || no_466_unknown_field_currently || xxx <br />
|-<br />
| taux || taux_i || 23 || surface_downward_grid_eastward_stress || Pa <br />
|-<br />
| tauy || tauy_i || 24 || surface_downward_grid_northward_stress || Pa <br />
|-<br />
| swflx || swflx_i || 367 || surface_net_downward_shortwave_flux || W m-2 <br />
|-<br />
| lwflx || lwflx_i || 366 || surface_net_downward_longwave_flux || W m-2 <br />
|-<br />
| shflx || shflx_i || 362 || surface_downwelling_shortwave_flux || W m-2 <br />
|-<br />
| press || press_i || 33 || surface_air_pressure || Pa <br />
|-<br />
|}<br />
==ice -> ocn== <br />
{| <br />
! Source !! Dest !! CF Index !! CF Name !! Unit <br />
|-<br />
| strsu_io || u_flux || 170 || downward_eastward_stress_at_sea_ice_base || Pa <br />
|-<br />
| strsv_io || v_flux || 175 || downward_northward_stress_at_sea_ice_base || Pa <br />
|-<br />
| rain_io || lprec || 27 || rainfall_flux || kg m-2 s-1 <br />
|-<br />
| snow_io || fprec || 28 || snow_fall_flux || kg m-2 s-1 <br />
|-<br />
| stflx_io || salt_flx || 454 || water_flux_into_ocean || kg m-2 s-1 <br />
|-<br />
| htflx_io || mh_flux || 42 || downward_heat_flux_in_sea_ice || W m-2 <br />
|-<br />
| swflx_io || sw_flux || 367 || surface_net_downward_shortwave_flux || W m-2 <br />
|-<br />
| qflux_io || q_flux || 452 || water_evaporation_flux || kg m-2 s-1 <br />
|-<br />
| shflx_io || t_flux || 362 || surface_downwelling_shortwave_flux || W m-2 <br />
|-<br />
| lwflx_io || lw_flux || 366 || surface_net_downward_longwave_flux || W m-2 <br />
|-<br />
| runof_io || runof || 297 || runoff_flux || kg m-2 s-1 <br />
|-<br />
| press_io || p || 33 || surface_air_pressure || Pa <br />
|-<br />
| aice_io || aice || 44 || sea_ice_area_fraction || 1 <br />
|-<br />
|}<br />
==ocn -> ice== <br />
{| <br />
! Source !! Dest !! CF Index !! CF Name !! Unit <br />
|-<br />
| t_surf || sst_i || 1 || sea_surface_temperature || K <br />
|-<br />
| s_surf || sss_i || 312 || sea_surface_salinity || 1e-3 <br />
|-<br />
| u_surf || ssu_i || 181 || eastward_sea_water_velocity || m s-1 <br />
|-<br />
| v_surf || ssv_i || 261 || northward_sea_water_velocity || m s-1 <br />
|-<br />
| frazil || pfmice_i || 441 || upward_sea_ice_basal_heat_flux || W m-2 <br />
|-<br />
| dssldx || sslx_i || 203 || height || m <br />
|-<br />
| dssldy || ssly_i || 310 || sea_surface_elevation || m <br />
|-<br />
|}<br />
==ice -> atm== <br />
{| <br />
! Source !! Dest !! CF Index !! CF Name !! Unit <br />
|-<br />
| isst_ia || ocn_sst || 1 || sea_surface_temperature || K <br />
|-<br />
| icecon01 || ofrzn01 || 31 || sea_ice_area_fraction || 1 <br />
|-<br />
| icecon02 || ofrzn02 || 31 || sea_ice_area_fraction || 1 <br />
|-<br />
| icecon03 || ofrzn03 || 31 || sea_ice_area_fraction || 1 <br />
|-<br />
| icecon04 || ofrzn04 || 31 || sea_ice_area_fraction || 1 <br />
|-<br />
| icecon05 || ofrzn05 || 31 || sea_ice_area_fraction || 1 <br />
|-<br />
| snwthk01 || osnwtn01 || 373 || surface_snow_amount || kg m-2 <br />
|-<br />
| snwthk02 || osnwtn02 || 373 || surface_snow_amount || kg m-2 <br />
|-<br />
| snwthk03 || osnwtn03 || 373 || surface_snow_amount || kg m-2 <br />
|-<br />
| snwthk04 || osnwtn04 || 373 || surface_snow_amount || kg m-2 <br />
|-<br />
| snwthk05 || osnwtn05 || 373 || surface_snow_amount || kg m-2 <br />
|-<br />
| icethk01 || ohicn01 || 468 || no_468_unknown_field_currently || xxx <br />
|-<br />
| icethk02 || ohicn02 || 468 || no_468_unknown_field_currently || xxx <br />
|-<br />
| icethk03 || ohicn03 || 468 || no_468_unknown_field_currently || xxx <br />
|-<br />
| icethk04 || ohicn04 || 468 || no_468_unknown_field_currently || xxx <br />
|-<br />
| icethk05 || ohicn05 || 468 || no_468_unknown_field_currently || xxx <br />
|-<br />
| uvel_ia || sunocean || 47 || surface_grid_eastward_sea_water_velocity || m s-1 <br />
|-<br />
| vvel_ia || svnocean || 48 || surface_grid_northward_sea_water_velocity || m s-1 <br />
|-<br />
|}<br />
=KPP Fields= <br />
/data/projects/access/ancil/kpp/namcouple<br />
==atm -> ocn== <br />
{| <br />
! Source !! Dest !! CF Index !! CF Name !! Unit <br />
|-<br />
| heatflux || HEATFLUX || 5 || surface_downward_heat_flux || W m-2 <br />
|-<br />
| solar || SOLAR || 38 || surface_net_downward_shortwave_flux || W m-2 <br />
|-<br />
| runoff || RUNOFF || 32 || water_flux_into_ocean_from_rivers || kg m-2 s-1 <br />
|-<br />
| wme || WME || 37 || surface_energy_flux_into_ocean_due_to_wind_mixing || W m-2 <br />
|-<br />
| train || TRAIN || 27 || rainfall_flux || kg m-2 s-1 <br />
|-<br />
| tsnow || TSNOW || 40 || snow_fall_flux || kg m-2 s-1 <br />
|-<br />
| evap2d || EVAP2D || 25 || water_evaporation_flux || 1 <br />
|-<br />
| lhflx || LHFLX || 355 || surface_downward_latent_heat_flux || W m-2 <br />
|-<br />
| tmlt01 || TMLT01 || 467 || no_467_unknown_field_currently || xxx <br />
|-<br />
| bmlt01 || BMLT01 || 466 || no_466_unknown_field_currently || xxx <br />
|-<br />
| taux || TAUX || 23 || surface_downward_grid_eastward_stress || Pa <br />
|-<br />
| tauy || TAUY || 24 || surface_downward_grid_northward_stress || Pa <br />
|-<br />
|}<br />
<br />
==ocn -> atm== <br />
{| <br />
! Source !! Dest !! CF Index !! CF Name !! Unit <br />
|-<br />
| OCN_SST || ocn_sst || 1 || sea_surface_temperature || K <br />
|-<br />
| OFRZN01 || ofrzn01 || 31 || sea_ice_area_fraction || 1 <br />
|-<br />
| OSNWTN01 || osnwtn01 || 373 || surface_snow_amount || kg m-2 <br />
|-<br />
| OHICN01 || ohicn01 || 468 || no_468_unknown_field_currently || xxx <br />
|-<br />
| SUNOCEAN || sunocean || 47 || surface_grid_eastward_sea_water_velocity || m s-1 <br />
|-<br />
| SVNOCEAN || svnocean || 48 || surface_grid_northward_sea_water_velocity || m s-1 <br />
|-<br />
|}<br />
=CICE/KPP= <br />
<br />
==ice->ocn== <br />
<br />
{| <br />
! Source !! Dest !! CF Index !! CF Name !! Unit <br />
|-<br />
| strsu_io || TAUX || 170 || downward_eastward_stress_at_sea_ice_base || Pa <br />
|-<br />
| strsv_io || TAUY || 175 || downward_northward_stress_at_sea_ice_base || Pa <br />
|-<br />
| rain_io || TRAIN || 27 || rainfall_flux || kg m-2 s-1 <br />
|-<br />
| snow_io || || 28 || snow_fall_flux || kg m-2 s-1 <br />
|-<br />
| stflx_io || || 454 || water_flux_into_ocean || kg m-2 s-1 <br />
|-<br />
| htflx_io || || 42 || downward_heat_flux_in_sea_ice || W m-2 <br />
|-<br />
| swflx_io || SOLAR || 367 || surface_net_downward_shortwave_flux || W m-2 <br />
|-<br />
| qflux_io || || 452 || water_evaporation_flux || <span style="background-color: #ff0000;">kg m-2 s-1</span> <br />
|-<br />
| shflx_io || || 362 || surface_downwelling_shortwave_flux || W m-2 <br />
|-<br />
| lwflx_io || || 366 || surface_net_downward_longwave_flux || W m-2 <br />
|-<br />
| runof_io || || 297 || runoff_flux || kg m-2 s-1 <br />
|-<br />
| press_io || || 33 || surface_air_pressure || Pa <br />
|-<br />
| aice_io || || 44 || sea_ice_area_fraction || 1 <br />
|-<br />
| || HEATFLUX || 5 || surface_downward_heat_flux || W m-2 <br />
|-<br />
| || RUNOFF || 32 || water_flux_into_ocean_from_rivers || kg m-2 s-1 <br />
|-<br />
| || WME || 37 || surface_energy_flux_into_ocean_due_to_wind_mixing || W m-2 <br />
|-<br />
| || TSNOW || 40 || snow_fall_flux || kg m-2 s-1 <br />
|-<br />
| || EVAP2D || 25 || water_evaporation_flux || <span style="background-color: #ff0000;">1</span> <br />
|-<br />
| || LHFLX || 355 || surface_downward_latent_heat_flux || W m-2 <br />
|-<br />
| || TMLT01 || 467 || no_467_unknown_field_currently || xxx <br />
|-<br />
| || BMLT01 || 466 || no_466_unknown_field_currently || xxx <br />
|-<br />
|}<br />
<br />
==ocn->ice== <br />
<br />
{| <br />
| OCN_SST || sst_i || 1 || sea_surface_temperature || K <br />
|-<br />
| OFRZN01 || || 31 || sea_ice_area_fraction || 1 <br />
|-<br />
| OSNWTN01 || || 373 || surface_snow_amount || kg m-2 <br />
|-<br />
| OHICN01 || || 468 || no_468_unknown_field_currently || xxx <br />
|-<br />
| SUNOCEAN || ssu_i || 47 || surface_grid_eastward_sea_water_velocity || m s-1 <br />
|-<br />
| SVNOCEAN || ssv_i || 48 || surface_grid_northward_sea_water_velocity || m s-1 <br />
|-<br />
| || sss_i || 312 || sea_surface_salinity || 1e-3 <br />
|-<br />
| || pfmice_i || 441 || upward_sea_ice_basal_heat_flux || W m-2 <br />
|-<br />
| || sslx_i || 203 || height || m <br />
|-<br />
| || ssly_i || 310 || sea_surface_elevation || m <br />
|-<br />
|}<br />
<br />
==See also== <br />
[http://www.prism.enes.org/Publications/Documentation/Environments/Coupling/oasis3doc_Rep2/oasis3doc.html Oasis 3 documentation at ENES]</div>Hwolff