How to run WRF
Revision as of 00:44, 31 March 2020

NCAR provides a detailed tutorial on how to run WRF, but note:

  • Skip the configure and compile steps of the NCAR tutorial and go straight to the Basics tab; you should already have followed the NCI-specific installation instructions for building the model before attempting to run it
  • Skip the geogrid, ungrib, and metgrid steps, as these are included and compiled during the installation step
  • To run the January 2000 tutorial case, do not download the metgrid data; it is available at NCI (see below)
  • When the tutorial says to run the model, see below for local run scripts

Data for WRF

You need to ask for membership to the sx70 project here

Geographical data

The geographical data has been re-downloaded to include all the data available from NCAR except the NMM data, which is only available over the US.

The new complete datasets are under:

  • For V3: /g/data/sx70/data/WPS_GEOG_v3
  • For V4: /g/data/sx70/data/WPS_GEOG_v4

The previous, incomplete datasets are also kept so that you can reproduce existing inputs:

  • For V3: /g/data/sx70/data/WPS_GEOG_20180313
  • For V4: /g/data/sx70/data/WPS_GEOG_20190418
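
To point WPS at one of these directories, set geog_data_path in the &geogrid section of namelist.wps. A minimal sketch, using the V4 path as an example (the rest of your &geogrid settings are unchanged):

```fortran
&geogrid
 geog_data_path = '/g/data/sx70/data/WPS_GEOG_v4'
/
```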

Meteorological data for the tutorial

The data needed to run the January 2000 tutorial case is stored under:

  • For V3 of WRF: /g/data/sx70/data/JAN00_v3
  • For V4 of WRF: /g/data/sx70/data/JAN00_v4

Run scripts for WPS, real.exe and wrf.exe

Example scripts to submit WPS, real.exe and wrf.exe to the queues (do not try to run these on the login nodes) are provided under WPS and WRFV3/run. The files are named run_WPS.sh, run_real and run_mpi. These scripts allow you to run WRF from any filesystem, independently of the location of the source code. In particular, you can copy WRFV3/run to /scratch, together with the inputs needed for WPS, and then run from /scratch (which is recommended). The scripts can be used as-is for the tutorial; for other configurations, adapt them to your needs.
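
The staging workflow above can be sketched as follows. All paths here are placeholders: the temporary directories merely stand in for your own run directory and /scratch location, and the placeholder run_real and run_mpi files stand in for the provided scripts, so the copy step is checkable anywhere.

```shell
# Stand-ins for your compiled WRFV3/run directory and /scratch work area.
SRC=$(mktemp -d)     # in practice: the WRFV3/run directory of your build
WORK=$(mktemp -d)    # in practice: e.g. /scratch/<project>/<user>/wrf_run
touch "$SRC/run_real" "$SRC/run_mpi"   # placeholders for the submit scripts

# Copy the run directory (and your WPS inputs) onto /scratch, then work there.
cp -r "$SRC/." "$WORK/"
cd "$WORK"
ls run_real run_mpi                    # confirm the submit scripts were copied

# On Gadi you would then submit, in order, waiting for each to finish:
# qsub run_real   # real.exe
# qsub run_mpi    # wrf.exe
```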

Running with OpenMP

For some configurations it can be advantageous to run with OpenMP in addition to Open MPI, or with OpenMP alone for some idealised cases. It is impossible to know beforehand whether OpenMP will improve or degrade performance, so run a short test simulation to see the effect.

By default, the model is compiled with both Open MPI and OpenMP support, so you don't need to recompile. Below is an example of an mpirun command that also uses OpenMP:

  export OMP_NUM_THREADS=2
  NCORE=$(( PBS_NCPUS / OMP_NUM_THREADS ))
  mpirun -np $NCORE --map-by node:PE=$OMP_NUM_THREADS --rank-by core ./wrf.exe
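
As a quick check of the rank arithmetic, here is a worked example with hypothetical values: a 48-core request (a Gadi node is assumed to have 48 cores) and 2 OpenMP threads per MPI rank, giving 24 MPI ranks. Note that the division must use shell arithmetic expansion, otherwise NCORE would hold a literal string rather than a number:

```shell
# Hypothetical values: 48 cores requested, 2 OpenMP threads per MPI rank.
PBS_NCPUS=48
OMP_NUM_THREADS=2
NCORE=$(( PBS_NCPUS / OMP_NUM_THREADS ))
echo "$NCORE"   # number of MPI ranks to pass to mpirun -np
```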