PIO 2.5.4
CMake Testing Information

Building PIO Tests

To build both the unit and performance tests for PIO, follow the general instructions for building PIO on either the Installation page or the [Machine Walk-Through](CMake Install Walk-through) page. During the build step, after (or instead of) the make command, type make tests.
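
For example, a minimal out-of-source build that includes the tests might look like the following sketch (all paths here are placeholders):

  # Configure and build PIO, then build its test suite.
  # /scratch/user/PIO_build and /path/to/ParallelIO are placeholder paths.
  cd /scratch/user/PIO_build
  cmake /path/to/ParallelIO
  make
  make tests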

PIO Unit Tests

The Parallel IO library comes with more than 20 built-in unit tests to verify that the library is installed and working correctly. These tests utilize the CMake and CTest automation framework. Because the Parallel IO library is built for parallel applications, the unit tests should be run in a parallel environment. The simplest way to do this is to submit a PBS job to run the ctest command.

For a library built in the example directory /scratch/user/PIO_build/, a PBS script could look like:

  #!/bin/bash

  #PBS -q normal
  #PBS -l nodes=1:ppn=4
  #PBS -N piotests
  #PBS -e piotests.e$PBS_JOBID
  #PBS -o piotests.o$PBS_JOBID

  cd /scratch/user/PIO_build
  ctest
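
Save the script (for example, as piotests.pbs; the file name is arbitrary) and submit it with:

  qsub piotests.pbs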

The output from the unit tests will be reported in the piotests.o$PBS_JOBID file. It should look something like this:

  Test project /scratch/cluster/katec/PIO_build
        Start  1: test_names
   1/24 Test  #1: test_names .......................   Passed    0.60 sec
        Start  2: test_nc4
   2/24 Test  #2: test_nc4 .........................   Passed    0.53 sec
        Start  3: pio_unit_test
   3/24 Test  #3: pio_unit_test ....................   Passed    0.45 sec
        Start  4: init_finialize_1_proc
   4/24 Test  #4: init_finialize_1_proc ............   Passed    0.54 sec
        Start  5: init_finialize_2_proc
   5/24 Test  #5: init_finialize_2_proc ............   Passed    0.53 sec
        Start  6: init_finalize_2_proc_with_args
   6/24 Test  #6: init_finalize_2_proc_with_args ...   Passed    0.55 sec
        Start  7: pio_file_simple_tests
   7/24 Test  #7: pio_file_simple_tests ............   Passed    0.58 sec
        Start  8: pio_file_fail
   8/24 Test  #8: pio_file_fail ....................   Passed    0.62 sec
        Start  9: ncdf_simple_tests
   9/24 Test  #9: ncdf_simple_tests ................   Passed    0.60 sec
        Start 10: ncdf_get_put_1proc
  10/24 Test #10: ncdf_get_put_1proc ...............   Passed    0.65 sec
        Start 11: ncdf_get_put_2proc
  11/24 Test #11: ncdf_get_put_2proc ...............   Passed    0.63 sec
        Start 12: ncdf_fail
  12/24 Test #12: ncdf_fail ........................   Passed    0.52 sec
        Start 13: pio_decomp_tests_1p
  13/24 Test #13: pio_decomp_tests_1p ..............   Passed    1.54 sec
        Start 14: pio_decomp_tests_2p
  14/24 Test #14: pio_decomp_tests_2p ..............   Passed    1.99 sec
        Start 15: pio_decomp_tests_3p
  15/24 Test #15: pio_decomp_tests_3p ..............   Passed    2.11 sec
        Start 16: pio_decomp_tests_4p_1agg
  16/24 Test #16: pio_decomp_tests_4p_1agg .........   Passed    2.12 sec
        Start 17: pio_decomp_tests_4p_2agg
  17/24 Test #17: pio_decomp_tests_4p_2agg .........   Passed    2.08 sec
        Start 18: pio_decomp_tests_4p_3agg
  18/24 Test #18: pio_decomp_tests_4p_3agg .........   Passed    2.08 sec
        Start 19: pio_decomp_tests_4p_1iop
  19/24 Test #19: pio_decomp_tests_4p_1iop .........   Passed    1.91 sec
        Start 20: pio_decomp_tests_4p_2iop
  20/24 Test #20: pio_decomp_tests_4p_2iop .........   Passed    2.50 sec
        Start 21: pio_decomp_tests_4p_3iop
  21/24 Test #21: pio_decomp_tests_4p_3iop .........   Passed    2.20 sec
        Start 22: pio_decomp_tests_4p_2iop_2str
  22/24 Test #22: pio_decomp_tests_4p_2iop_2str ....   Passed    2.16 sec
        Start 23: pio_decomp_tests_4p_2iop_1agg
  23/24 Test #23: pio_decomp_tests_4p_2iop_1agg ....   Passed    2.20 sec
        Start 24: pio_decomp_fillval2
  24/24 Test #24: pio_decomp_fillval2 ..............   Passed    0.60 sec

  100% tests passed, 0 tests failed out of 24

  Total Test time (real) = 30.80 sec

Another option is to launch an interactive session, change into the build directory, and run the ctest command directly.
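
For example, with PBS, an interactive session could look like this (queue and resource settings are placeholders):

  > qsub -I -q normal -l nodes=1:ppn=4
  > cd /scratch/user/PIO_build
  > ctest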

On Yellowstone, the unit tests can be run using the execca or execgy commands:

  > setenv DAV_CORES 4
  > execca ctest

PIO Performance Test

To run the performance tests, you will need two additional files: a decomposition (decomp) file and a namelist file. You can download a decomp file from the CCSM PIO decompositions Subversion repository here: https://svn-ccsm-piodecomps.cgd.ucar.edu/trunk/ .

You can use any of the decomp files there; save one to your home or base work directory. Second, you will need the namelist file, named "pioperf.nl". Save it in the directory with your pioperf executable (found in the tests/performance subdirectory of the PIO build directory).
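
For example, a single decomp file could be fetched with svn; the exact location of the file within the repository is an assumption here:

  > svn export https://svn-ccsm-piodecomps.cgd.ucar.edu/trunk/piodecomp30tasks01dims06.dat ~/piodecomp30tasks01dims06.dat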

The contents of the namelist file should look like:

 &pioperf
 decompfile = "/u/home/user/piodecomp30tasks01dims06.dat"
 pio_typenames = 'pnetcdf'
 niotasks = 30
 rearrangers = 1
 nvars = 2
 /

Here, the decompfile entry points to the path of your decomp file (wherever you saved it). For the rest of the entries, each value added to a list adds another test to be run. For instance, to test all of the supported IO types, your pio_typenames would look like:

 pio_typenames = 'pnetcdf','netcdf','netcdf4p','netcdf4c'

Here, netcdf4p is HDF5-based parallel NetCDF-4, and pnetcdf is Parallel-NetCDF.

To test with different numbers of IO tasks, you could do:

 niotasks = 30,15,5

(The niotasks values are subsets of the run's MPI tasks that are designated as IO tasks.)

To test with both of the rearranger algorithms:

 rearrangers = 1,2

(Each rearranger is a different algorithm for converting data in memory to data in a file on disk. The first, BOX, is the older method from PIO1; the second, SUBSET, is a newer method that tends to be more efficient at large task counts.)

To test with different numbers of variables:

 nvars = 8,5,3,2

(Usually, the more variables you use, the higher the data throughput.)
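
Putting these options together, a namelist that exercises all of the example settings above would look like:

 &pioperf
 decompfile = "/u/home/user/piodecomp30tasks01dims06.dat"
 pio_typenames = 'pnetcdf','netcdf','netcdf4p','netcdf4c'
 niotasks = 30,15,5
 rearrangers = 1,2
 nvars = 8,5,3,2
 /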

To run, submit a job with pioperf as the executable and at least as many tasks as are specified in the decomposition file. On Yellowstone, a submit script could look like:

 #!/bin/tcsh

 #BSUB -P P00000000 # project code
 #BSUB -W 00:10 # wall-clock time (hrs:mins)
 #BSUB -n 30 # number of tasks in job
 #BSUB -R "span[ptile=16]" # run 16 MPI tasks per node
 #BSUB -J pio_perftest # job name
 #BSUB -o pio_perftest.%J.out # output file name in which %J is replaced by the job ID
 #BSUB -e pio_perftest.%J.err # error file name in which %J is replaced by the job ID
 #BSUB -q small # queue

 #run the executable
 mpirun.lsf /glade/p/work/katec/pio_work/pio_build/tests/performance/pioperf
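
Submit the script to LSF with bsub (the script file name here is a placeholder):

  > bsub < pio_perftest.lsf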

Each result will appear as a line in the output file such as:

 RESULT: write BOX 4 30 2 16.9905924688

You can decode this as:

  1. write or read describes the IO operation performed
  2. BOX/SUBSET is the rearranger algorithm used (as described above)
  3. 4 [1-4] is the IO library used for the operation. The options are [1] Parallel-NetCDF, [2] NetCDF3, [3] NetCDF4-Compressed, and [4] NetCDF4-Parallel
  4. 30 [any number] is the number of IO-specific tasks used in the operation. It must be no more than the number of MPI tasks used in the test.
  5. 2 [any number] is the number of variables read or written during the operation
  6. 16.9905924688 [any number] is the data rate of the operation in MB/s. This is the important value for judging the performance of the system: the higher this number is, the better the PIO2 library is performing for the given operation.
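
To collect all of the results from a run, filter the job output for these lines; with the output file name used in the script above, that could be:

  > grep "RESULT:" pio_perftest.*.out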

Last updated: 05-17-2016