NetCDF Compression Tools

Why compress netCDF files?

Space. You are likely to store your netCDF files on a shared filesystem, either at NCI or at your university, so it is your responsibility to manage your space and avoid wasting storage. Compressing your netCDF data files can shrink them to one third of their original size, which is the equivalent of being given three times as much disk space.

NetCDF compression is lossless: the data you read back is exactly the data that was written. It can still be read using the same programming interface. As long as the program reading the data has been compiled with a version 4 netCDF library, decompression is handled by the library and, as far as your programs are concerned, there is no difference in the data. The usual tools, such as ncdump, can be used to examine the variables contained within the netCDF file.

Compression with tools such as gzip is possible but is not recommended except for archival purposes. It has the disadvantage that the file must be decompressed to be read and then recompressed again when you have finished, which can be time consuming and degrade your productivity, not to mention the data in question will take up much more room while it is being analysed.

General guidelines

The netCDF library has several options for compressing data, and all compression programs use them, as they all rely on the underlying library to perform the compression. A more detailed explanation is available in the netCDF documentation if you wish to understand more, but briefly:

Deflate level

This is an integer value ranging from 0 to 9. A value of 0 means no compression, and 9 is the highest level of compression possible. The higher this value, the smaller your file will be once compressed. However, there is a trade-off: the higher the deflate level, the longer it takes to compress, particularly at very high deflate levels. At deflate level 9 it can take six times longer to compress the data, with only a few percent improvement in compression. The recommended deflate level is 5, which combines good compression with a small increase in compression time.

Shuffle

Turn shuffle on. Simple. It usually results in a smaller compressed file with little performance overhead.

Chunking

The netCDF library writes the data to disk in "chunks". There is a very good description of chunking and how it works in the netCDF documentation. All you really need to know is that in order to use netCDF compression your data must be chunked. The question then is, do I care how the program I use chooses the size of my data chunks? The answer is almost certainly yes, but maybe not a lot. An optimal chunking strategy is largely determined by the structure of your data and how you will access it.

Specific details about chunking strategies are largely dependent on the tool used to compress your data, and will be covered in more detail in the next section. However, all tools still utilise the underlying netCDF4 library, and so can implement the default chunking strategy, which has changed over time. For many versions the default strategy has been to create chunks that are simply the same size as the dimensions of the variable, which can be a disastrous choice in terms of performance if the data is also compressed. The entire variable must be read into memory to be uncompressed even if only a single slice is required.
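
As a minimal sketch of these settings (using the nccopy tool described in the next section, with placeholder filenames), deflate level 5 with shuffle can be requested like so:

# -d 5 sets deflate level 5, -s turns on shuffle; input.nc and output.nc are placeholder names
nccopy -d 5 -s input.nc output.nc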

Compression tools

There are some software packages available on Gadi that can be used to compress netCDF data:

nccompress (recommended)

The nccompress package is available on Gadi in the CMS conda environment. At present it consists of four Python programs, ncfind, nc2nc, nccompress and ncvarinfo, written and supported by Aidan. nccompress can copy netCDF files with compression and an optimised chunking strategy that has reasonable performance for many datasets. It has two main limitations: it is slower than the other programs, and it can only compress netCDF3 or netCDF4 classic format files. There is more detail in the following sections.

The convenience utility ncvarinfo is also included, and though it has no direct relevance to compression, it is a convenient way to get a summary of the contents of a netCDF file.

nco

The netCDF Operator (NCO) program suite can compress netCDF files and has recently included some ability to choose different chunking strategies. It may be that for some cases this is a reasonable solution based on the available options, but a weakness is the inability to use their optimised chunking strategy for variables with four dimensions or more.
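
For example, a minimal ncks invocation to write a compressed netCDF4 copy might look like the following (the filenames are placeholders; see the NCO documentation for the chunking options):

# -4 selects netCDF4 output, -L 5 sets deflate level 5
ncks -4 -L 5 input.nc output.nc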

cdo

Climate Data Operators (cdo) can also compress netCDF and offers limited chunking options: auto, grid or lines.
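
As a sketch, a compressed netCDF4 copy can be produced with something like the following (filenames are placeholders):

# -f nc4 selects netCDF4 output, -z zip_5 requests deflate level 5
cdo -f nc4 -z zip_5 copy input.nc output.nc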

netcdf

One of the standard tools included in a netCDF installation is nccopy. nccopy can compress files and define the chunking using a command line argument (-c). nccopy is a good option if your data file structure changes little, so a chunking scheme can be decided upon and hard coded into scripts. It is not so useful if the dimensions and variables change. Another major limitation is that the chunking is defined by dimensions, not variables. If your data file has variables that share dimensions, but have different combinations or numbers of dimensions it is not possible to determine an optimal chunking strategy for each variable.
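
For example, to compress with deflate level 5, shuffle, and an explicit chunking scheme (the dimension names and chunk sizes below are hypothetical and must match the dimensions in your own file):

# -d 5 deflate level, -s shuffle, -c chunk sizes specified per dimension
nccopy -d 5 -s -c time/1,lat/270,lon/360 input.nc output.nc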

Identifying files to be compressed

The Unix utility find is installed by default on any Unix operating system. It is used to find files on a filesystem. For example, to find all files in the directory "directoryname" which end in ".nc":

find directoryname -iname "*.nc"

However, if your netCDF files do not use the convention of ending in ".nc" or cannot be systematically found based on filename, you might not be able to use the find utility easily.

In order to find any netCDF file, no matter what it is named, Aidan has developed the ncfind utility, part of the nccompress package. It will find netCDF files and can discriminate between compressed and uncompressed files:

$ ncfind -h
usage: ncfind [-h] [-r] [-u | -c] [inputs [inputs ...]]

Find netCDF files. Can discriminate by compression

positional arguments:
  inputs              netCDF files or directories (-r must be specified to
                      recursively descend directories). Can accept piped
                      arguments.

optional arguments:
  -h, --help          show this help message and exit
  -r, --recursive     Recursively descend directories to find netCDF
                      files (default False)
  -u, --uncompressed  Find only uncompressed netCDF files (default False)
  -c, --compressed    Find only compressed netCDF files (default False)

You can use the ncfind utility to recursively descend into a directory structure looking for netCDF files:

ncfind -r directoryname

You can refine the search further by requesting to return only those files that are uncompressed:

ncfind -r -u directoryname

If you want to find out how much space these uncompressed files occupy, you can combine this command with other Unix utilities such as xargs and du:

ncfind -r -u directoryname | xargs du -h

du is the disk usage utility. The output looks something like this:

67M     output212/ice__212_223.nc
1003M   output212/ocean__212_223.nc
1.1G    total

It is even possible to combine the system find utility with ncfind, using a Unix pipe (|). This can be faster than using ncfind on its own when a filename pattern easily identifies the netCDF files. This command will find all files ending in ".nc", pipe the results to ncfind, and only those that are uncompressed will be printed to the screen:

find directoryname -iname "*.nc" | ncfind -u


Batch compressing files: nccompress

Once you have identified where the netCDF files you wish to compress are located, the convenience program nccompress can be used to step through and compress each file in turn:

$ nccompress -h
usage: nccompress [-h] [-d {1-9}] [-n] [-b BUFFERSIZE] [-t TMPDIR] [-v] [-r]
                   [-o] [-m MAXCOMPRESS] [-p] [-f] [-c] [-pa] [-np NUMPROC]
                   [--nccopy]
                   inputs [inputs ...]

Run nc2nc (or nccopy) on a number of netCDF files

positional arguments:
  inputs                netCDF files or directories (-r must be specified to
                        recursively descend directories)

optional arguments:
  -h, --help            show this help message and exit
  -d {1-9}, --dlevel {1-9}
                        Set deflate level. Valid values 0-9 (default=5)
  -n, --noshuffle       Don't shuffle on deflation (default is to shuffle)
  -b BUFFERSIZE, --buffersize BUFFERSIZE
                        Set size of copy buffer in MB (default=50)
  -t TMPDIR, --tmpdir TMPDIR
                        Specify temporary directory to save compressed files
  -v, --verbose         Verbose output
  -r, --recursive       Recursively descend directories compressing all netCDF
                        files (default False)
  -o, --overwrite       Overwrite original files with compressed versions
                        (default is to not overwrite)
  -m MAXCOMPRESS, --maxcompress MAXCOMPRESS
                        Set a maximum compression as a paranoid check on
                        success of nccopy (default is 10, set to zero for no
                        check)
  -p, --paranoid        Paranoid check : run nco ndiff on the resulting file
                        ensure no data has been altered
  -f, --force           Force compression, even if input file is already
                        compressed (default False)
  -c, --clean           Clean tmpdir by removing existing compressed files
                        before starting (default False)
  -pa, --parallel       Compress files in parallel
  -np NUMPROC, --numproc NUMPROC
                        Specify the number of processes to use in parallel
                        operation
  --nccopy              Use nccopy instead of nc2nc (default False)

Simple workflow

The simplest way to invoke the program would be with a single file:

nccompress ice_daily_0001.nc

or using a wildcard expression:

nccompress ice*.nc

You can also specify one or more directory names in combination with the recursive flag (-r) and the program will recursively descend into those directories and compress all uncompressed netCDF files contained therein. For example, a directory listing might look like so:

$ ls data/
output001  output003  output005  output007  output009  restart001  restart003  restart005  restart007  restart009
output002  output004  output006  output008  output010  restart002  restart004  restart006  restart008  restart010

with a number of sub-directories, all containing netCDF files. It is a good idea to do a trial run and make sure it functions properly. For example, this will compress the netCDF files in just one of the directories, if they are not already compressed:

nccompress -p -r data/output001

Once completed there will be a new subdirectory called tmp.nc_compress inside the directory output001. It will contain compressed copies of all the netCDF files from the directory above. You can check the compressed copies to make sure they are correct. The paranoid option (-p) calls an nco command to check that the variables contained in the two files are the same. You can use the paranoid option routinely, though it will make the process more time consuming. It is a good idea to use it in the testing phase. You should also check the compressed copies manually to make sure they look ok, and if so, re-run the command with the -o option (overwrite):

nccompress -r -o data/output001

and it will find the already compressed files, copy them over the originals and delete the temporary directory tmp.nc_compress. It won't compress already compressed files, so, for example, if you were happy that the compression was working well you could compress the entire data directory, and the already compressed files in output001 would not be re-compressed.

So, by default, nccompress does not overwrite the original files. If you invoke it without the '-o' option it will create compressed copies in the tmp.nc_compress subdirectory and leave them there, which will consume more disk space! This is a feature, not a bug, but you need to be aware that this is how it functions.

With large variables, which usually means large files (> 1 GB), it is a good idea to specify a larger buffer size with the '-b' option, as it will run faster. On Gadi this may mean you need to submit an interactive job on the compute nodes with more memory (~10 GB), or submit it as a copyq job. A typical buffer size might be 1000 to 5000 (1 to 5 GB).
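
As a sketch, an interactive copyq job followed by a run with a 2 GB copy buffer might look like the following (the resource requests are examples only, and your project may require additional directives such as -P and storage):

# request an interactive copyq job with extra memory (values are examples)
qsub -I -q copyq -l ncpus=1,mem=10GB,walltime=02:00:00

# compress recursively, overwriting originals, with a 2000 MB (2 GB) copy buffer
nccompress -r -o -b 2000 data/output001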

Other options

It is also possible to use wildcard-type operations, e.g.

nccompress -r -o output*

nccompress -r -o output00[1-5]

nccompress -r -o run[1-5]/output*/ocean*.nc random.nc ice*.nc

 

The nccompress program just sorts out finding files and directories; it calls nc2nc to do the actual compression. Using the '--nccopy' option forces nccompress to use the nccopy program in place of nc2nc, though the netcdf package must already be loaded for this to work.

 

You can tell nccompress to work on multiple files simultaneously with the '-pa' option. By default this will use all the physical processors on the machine, or you can specify how many simultaneous processes you want with '-np', e.g.

nccompress -r -o -np 16 run[1-5]/output*/ocean*.nc random.nc ice*.nc

will compress 16 netCDF files at a time (the -np option implies the parallel option). Because each directory is processed completely before beginning on a new one, there will be little reduction in execution time if there are only a few netCDF files in each directory.

nc2nc

The nc2nc program was written because no existing tool had a generalised per-variable chunking algorithm. The total chunk size is defined to be the file system block size (4096 KB). The dimensions of the chunk are sized to be as close as possible to the same ratio as the dimensions of the data, with the limit that no dimension can be less than 1. This chunking scheme performs well for a wide range of data, but there will always be cases, for certain access patterns or variable shapes, where it is not optimal. In those cases a different approach may be required.

Be aware that nc2nc takes at least twice as long to compress an equivalent file as nccopy. In some cases with large files containing many variables it can be up to five times slower.

You can use nc2nc “stand alone”. It has a couple of extra features that can only be accessed by calling it directly:

$ nc2nc -h
usage: nc2nc [-h] [-d {1-9}] [-m MINDIM] [-b BUFFERSIZE] [-n] [-v] [-c] [-f]
             [-va VARS] [-q QUANTIZE] [-o]
             origin destination

Make a copy of a netCDF file with automatic chunk sizing

positional arguments:
  origin                netCDF file to be compressed
  destination           netCDF output file

optional arguments:
  -h, --help            show this help message and exit
  -d {1-9}, --dlevel {1-9}
                        Set deflate level. Valid values 0-9 (default=5)
  -m MINDIM, --mindim MINDIM
                        Minimum dimension of chunk. Valid values 1-dimsize
  -b BUFFERSIZE, --buffersize BUFFERSIZE
                        Set size of copy buffer in MB (default=50)
  -n, --noshuffle       Don't shuffle on deflation (default is to shuffle)
  -v, --verbose         Verbose output
  -c, --classic         use NETCDF4_CLASSIC output instead of NETCDF4 (default
                        true)
  -f, --fletcher32      Activate Fletcher32 checksum
  -va VARS, --vars VARS
                        Specify variables to copy (default is to copy all)
  -q QUANTIZE, --quantize QUANTIZE
                        Truncate data in variable to a given decimal
                        precision, e.g. -q speed=2 -q temp=0 causes variable
                        speed to be truncated to a precision of 0.01 and temp
                        to a precision of 1
  -o, --overwrite       Write output file even if already it exists (default
                        is to not overwrite)

With the vars option (-va) it is possible to select only a subset of variables to be copied to the destination file. By default the output file is netCDF4 classic, but this can be changed to netCDF4 using the '-c' option. It is also possible to specify a minimum dimension size for the chunks (-m). This may be desirable for a dataset that has one particularly long dimension, as the chunk dimensions would mirror this and be very large in that direction. If fast access is required for slices orthogonal to this direction, performance might be improved by setting this option to a number greater than 1.
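
A sketch of calling nc2nc directly, copying only two variables with a minimum chunk dimension of 10 (the filenames and variable names are placeholders, and repeating -va is assumed to work as it does for ncvarinfo below):

# copy only tau_x and tau_y, with deflate level 5 and a minimum chunk dimension of 10
nc2nc -d 5 -m 10 -va tau_x -va tau_y input.nc output.nc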

ncvarinfo

ncvarinfo is a convenient way to get a summary of the contents of a netCDF file.

./ncvarinfo -h
usage: ncvarinfo [-h] [-v] [-t] [-d] [-a] [-va VARS] inputs [inputs ...]

Output summary information about a netCDF file

positional arguments:
  inputs                netCDF files

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         Verbose output
  -t, --time            Show time variables
  -d, --dims            Show dimensions
  -a, --aggregate       Aggregate multiple netCDF files into one dataset
  -va VARS, --vars VARS
                        Show info for only specify variables

By default it prints out a simple summary of the variables in a netCDF file, omitting dimensions and time-related variables, e.g.

ncvarinfo output096/ocean_daily.nc

output096/ocean_daily.nc
Time steps:  365  x  1.0 days
tau_x    :: (365, 1080, 1440) :: i-directed wind stress forcing u-velocity
tau_y    :: (365, 1080, 1440) :: j-directed wind stress forcing v-velocity
geolon_t :: (1080, 1440)      :: tracer longitude
geolat_t :: (1080, 1440)      :: tracer latitude
geolon_c :: (1080, 1440)      :: uv longitude
geolat_c :: (1080, 1440)      :: uv latitude

If you specify more than one file it will print the information for each file in turn:

ncvarinfo output09?/ocean_daily.nc

output096/ocean_daily.nc
Time steps:  365  x  1.0 days
tau_x    :: (365, 1080, 1440) :: i-directed wind stress forcing u-velocity
tau_y    :: (365, 1080, 1440) :: j-directed wind stress forcing v-velocity
geolon_t :: (1080, 1440)      :: tracer longitude
geolat_t :: (1080, 1440)      :: tracer latitude
geolon_c :: (1080, 1440)      :: uv longitude
geolat_c :: (1080, 1440)      :: uv latitude

output097/ocean_daily.nc
Time steps:  365  x  1.0 days
tau_x    :: (365, 1080, 1440) :: i-directed wind stress forcing u-velocity
tau_y    :: (365, 1080, 1440) :: j-directed wind stress forcing v-velocity
geolon_t :: (1080, 1440)      :: tracer longitude
geolat_t :: (1080, 1440)      :: tracer latitude
geolon_c :: (1080, 1440)      :: uv longitude
geolat_c :: (1080, 1440)      :: uv latitude

output098/ocean_daily.nc
Time steps:  365  x  1.0 days
tau_x    :: (365, 1080, 1440) :: i-directed wind stress forcing u-velocity
tau_y    :: (365, 1080, 1440) :: j-directed wind stress forcing v-velocity
geolon_t :: (1080, 1440)      :: tracer longitude
geolat_t :: (1080, 1440)      :: tracer latitude
geolon_c :: (1080, 1440)      :: uv longitude
geolat_c :: (1080, 1440)      :: uv latitude

output099/ocean_daily.nc
Time steps:  365  x  1.0 days
tau_x    :: (365, 1080, 1440) :: i-directed wind stress forcing u-velocity
tau_y    :: (365, 1080, 1440) :: j-directed wind stress forcing v-velocity
geolon_t :: (1080, 1440)      :: tracer longitude
geolat_t :: (1080, 1440)      :: tracer latitude
geolon_c :: (1080, 1440)      :: uv longitude
geolat_c :: (1080, 1440)      :: uv latitude

If the files have the same structure it is possible to aggregate the data and display it as if it were contained in a single dataset:

ncvarinfo -a output09?/ocean_daily.nc

Time steps:  1460  x  1.0 days
tau_x    :: (1460, 1080, 1440) :: i-directed wind stress forcing u-velocity
tau_y    :: (1460, 1080, 1440) :: j-directed wind stress forcing v-velocity
geolon_t :: (1080, 1440)       :: tracer longitude
geolat_t :: (1080, 1440)       :: tracer latitude
geolon_c :: (1080, 1440)       :: uv longitude
geolat_c :: (1080, 1440)       :: uv latitude

You can also request that only the variables you are interested in be output:

ncvarinfo -va tau_x -va tau_y output09?/ocean_daily.nc

output096/ocean_daily.nc
Time steps:  365  x  1.0 days
tau_x :: (365, 1080, 1440) :: i-directed wind stress forcing u-velocity
tau_y :: (365, 1080, 1440) :: j-directed wind stress forcing v-velocity

output097/ocean_daily.nc
Time steps:  365  x  1.0 days
tau_x :: (365, 1080, 1440) :: i-directed wind stress forcing u-velocity
tau_y :: (365, 1080, 1440) :: j-directed wind stress forcing v-velocity

output098/ocean_daily.nc
Time steps:  365  x  1.0 days
tau_x :: (365, 1080, 1440) :: i-directed wind stress forcing u-velocity
tau_y :: (365, 1080, 1440) :: j-directed wind stress forcing v-velocity

output099/ocean_daily.nc
Time steps:  365  x  1.0 days
tau_x :: (365, 1080, 1440) :: i-directed wind stress forcing u-velocity
tau_y :: (365, 1080, 1440) :: j-directed wind stress forcing v-velocity