NU-WRF set the environment

Revision as of 22:51, 2 December 2015 by ClaireCarouge (talk | contribs) (Imported from Wikispaces)


Access to the LIS dataset


LIS comes with its own dataset of static data (topography, vegetation and soil types, etc.) and some time-varying datasets (LAI, etc.). This dataset is stored at NCI but, like LIS itself, it is under licence. To gain access, request it by emailing climate_help. Access is granted to all staff and students working at one of the ARCCSS participating universities or the ARCCSS partner organisations.

Setup command restricted ssh-keys


    • If you are using a storage machine outside NCI, you will need to transfer data to and from this machine automatically. If you are only using NCI machines, please skip this part! The transfers will be done through the copyq queue of the scheduler on raijin, which means ssh-agent forwarding cannot be used (it is disabled inside the queues). Hence, to enable password-less transfers, you will need to create a dedicated pair of ssh keys. This pair is special: it has an empty passphrase, but it is protected by limiting its usage to file transfers only. Note: this method requires that your storage machine has rrsync installed.
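Such a key pair can be generated non-interactively. A minimal sketch (the key name id_rsa_file_transfer matches the one used in the rsync examples later on this page; the NCI documentation referenced in "Setup the keys" gives the authoritative procedure):

```shell
# Sketch: create a dedicated key pair with an empty passphrase (-N "").
# The key name id_rsa_file_transfer is an assumption matching the
# rsync examples on this page; any name works if used consistently.
mkdir -p "$HOME/.ssh"
ssh-keygen -q -t rsa -b 4096 -N "" -f "$HOME/.ssh/id_rsa_file_transfer"
```

The key on its own is not safe (anyone holding the private file could use it); the protection comes from the restriction added on the storage machine in the next steps.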

Choose an archiving folder

The rrsync command on raijin has been configured to refuse to move back up the folder tree on the remote host; that is, you cannot use "../" in the path to your data storage. You can also only give relative paths, not absolute paths. It is therefore important to choose the default path for rrsync carefully. If you do not give a path, rrsync will by default connect to your home directory on the remote machine, which is usually small and not used for storage, so you want rrsync to connect to the root of your storage area. Once you have chosen the path, add it as an environment variable named ${RSYNC_PATH} (do not change the name) on your storage machine. To do that:

  • If you use bash shell, add to your ~/.bashrc file:


export RSYNC_PATH=<path you have chosen>
  • If you use tcsh shell, add to your ~/.cshrc file:


setenv RSYNC_PATH <path you have chosen>

<path you have chosen> is to be replaced by the path you want. For UNSW, we suggest using /srv/ccrc. Then, when using rsync, you only give the path relative to ${RSYNC_PATH}.

Setup the keys

This is explained in detail by NCI on this page. In that documentation:

  • "MyDesktop" refers to your storage machine
  • /data/archive is to be replaced by ${RSYNC_PATH}.
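The restriction itself lives in the ~/.ssh/authorized_keys file on the storage machine: the public key is prefixed with a forced rrsync command and options that disable everything except file transfer. A sketch of what the resulting entry looks like (the key material is elided and the options list is illustrative; follow the NCI page for the exact form):

```
command="rrsync /srv/ccrc",no-agent-forwarding,no-port-forwarding,no-pty,no-X11-forwarding ssh-rsa AAAA... your_login@raijin
```

With this entry, any connection using that key can only run rrsync rooted at /srv/ccrc (the ${RSYNC_PATH} chosen above), even though the key has no passphrase.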

How to use the ssh keys

Transfer from raijin to storage machine

Let's say your storage machine is called maelstrom.ccrc.unsw.edu.au and your login on this machine is z3368490. You want to transfer the EXPDIR/ directory from raijin to maelstrom, into the directory ${RSYNC_PATH}/TEST/. You first have to create the ${RSYNC_PATH}/TEST/ directory on maelstrom. Then, on raijin, type:

rsync -vrlpt ./EXPDIR -e "ssh -i $HOME/.ssh/id_rsa_file_transfer" z3368490@maelstrom.ccrc.unsw.edu.au:TEST/.

Note the destination path is only TEST/, since ${RSYNC_PATH} is specified in the restricted command in the authorized_keys file, and there is no space between "TEST/" and ".". You need a space before z3368490. The -e "ssh -i $HOME/.ssh/id_rsa_file_transfer" part has to come just before the remote path, so the name of the directory on raijin goes before the "-e" option! The "./" before EXPDIR is not required, but it is highly recommended.
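Since ssh-agent forwarding is disabled in the queues, this same command is what goes inside a copyq job. A sketch of such a job script, under the assumption of typical raijin copyq settings (resource requests and walltime are illustrative, not prescriptive):

```shell
#!/bin/bash
#PBS -q copyq
#PBS -l ncpus=1
#PBS -l walltime=02:00:00
#PBS -l wd
# Hypothetical copyq job: runs the transfer on the data-mover queue,
# where ssh-agent forwarding is unavailable, hence the dedicated key.
rsync -vrlpt ./EXPDIR \
  -e "ssh -i $HOME/.ssh/id_rsa_file_transfer" \
  z3368490@maelstrom.ccrc.unsw.edu.au:TEST/.
```

Submit it with qsub from the directory containing EXPDIR/ (the -l wd directive makes the job start in the submission directory).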

Transfer from storage machine to raijin

Now, if you want to get ${RSYNC_PATH}/TEST/EXPDIR from maelstrom to raijin, you still run the command on raijin:

rsync -vrlpt -e "ssh -i $HOME/.ssh/id_rsa_file_transfer" z3368490@maelstrom.ccrc.unsw.edu.au:TEST/EXPDIR .

Again, there is a space before z3368490. Note the destination directory on raijin comes at the end (the final ".").
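Because the forced rrsync command silently remaps paths relative to ${RSYNC_PATH}, it is easy to point at the wrong directory on the first try. rsync's --dry-run option lists what would be transferred without copying anything, which is a cheap way to check the paths before a large transfer:

```shell
# Dry run: same command as above plus --dry-run; nothing is copied.
rsync -vrlpt --dry-run -e "ssh -i $HOME/.ssh/id_rsa_file_transfer" \
  z3368490@maelstrom.ccrc.unsw.edu.au:TEST/EXPDIR .
```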

Modules and environment variables


To run NU-WRF you need to load, on raijin and the storage machine, the modules giving access to the Fortran compiler, OpenMPI (for parallelisation), ESMF v3.1.0rp3, SZIP and the netCDF libraries. These modules are loaded by default in the scripts provided for compiling and running the code. In addition, some scripts are written in Python (v2, not v3), so the latest Python 2 library should be loaded on the storage machine. The code is under version control using Git, so you will need to make sure Git is installed on your machine; it is available by default on raijin.
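If you need to reproduce the scripts' environment interactively on raijin, the module loads look roughly like this. The module names and versions below are indicative only (check "module avail" on your system, and compare with what the provided scripts actually load):

```shell
# Indicative module loads for the NU-WRF toolchain on raijin;
# exact names/versions are assumptions, verify with `module avail`.
module load intel-fc          # Intel Fortran compiler
module load openmpi           # MPI for parallelisation
module load esmf/3.1.0rp3     # ESMF, version required by NU-WRF
module load szip
module load netcdf
module load python/2.7        # Python 2, for the helper scripts
```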

You also need to define a few environment variables for LIS and WRF. The following tables give the environment variables for bash and csh shells on different machines. If you are a member of this wiki, feel free to add information for another machine.

Notes:

  • NU-WRF has only been tested on raijin using the Intel Fortran compiler. If you plan on using another compiler, you are responsible for modifying the environment variables and scripts as necessary.
  • If you are using raijin to run WPS as well as WRFV3, you need to load on raijin the additional modules listed for the CCRC machines (Python and SZIP).
On raijin

For each setting, the bash form goes in your ~/.bashrc file and the csh/tcsh form in your ~/.cshrc file.

  • LIS architecture (this variable is for LIS; ifc stands for Intel Fortran Compiler):
      bash:     export LIS_ARCH=linux_ifc
      csh/tcsh: setenv LIS_ARCH "linux_ifc"
  • Stack size, for WRF:
      bash:     ulimit -s unlimited
      csh/tcsh: limit stacksize unlimited
  • Large netCDF file support (not necessary but highly recommended):
      bash:     export WRFIO_NCD_LARGE_FILE_SUPPORT=1
      csh/tcsh: setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1
  • ESMF libraries for WRF (note the path is machine dependent; everything on one line):
      bash:     export WRF_ESMF_LIBS_MPI="-L/apps/esmf/3.1.0rp3-intel/lib/lib0/Linux.intel.64.openmpi.default -lesmf -lstdc++ -lrt"
      csh/tcsh: setenv WRF_ESMF_LIBS_MPI "-L/apps/esmf/3.1.0rp3-intel/lib/lib0/Linux.intel.64.openmpi.default -lesmf -lstdc++ -lrt"
On CCRC machines

The settings are the same as on raijin except for the ESMF path:

  • LIS architecture (this variable is for LIS; ifc stands for Intel Fortran Compiler):
      bash:     export LIS_ARCH=linux_ifc
      csh/tcsh: setenv LIS_ARCH "linux_ifc"
  • Stack size, for WRF:
      bash:     ulimit -s unlimited
      csh/tcsh: limit stacksize unlimited
  • Large netCDF file support (not necessary but highly recommended):
      bash:     export WRFIO_NCD_LARGE_FILE_SUPPORT=1
      csh/tcsh: setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1
  • ESMF libraries for WRF (note the path is machine dependent; everything on one line):
      bash:     export WRF_ESMF_LIBS_MPI="-L/share/apps/esmf/intel/3.1.0rp3/lib/lib0/Linux.intel.64.openmpi.default -lesmf -lstdc++ -lrt"
      csh/tcsh: setenv WRF_ESMF_LIBS_MPI "-L/share/apps/esmf/intel/3.1.0rp3/lib/lib0/Linux.intel.64.openmpi.default -lesmf -lstdc++ -lrt"