PIO  2.5.4
Initialize the IO System

Initialize the IOSystem, including specifying the number of IO and computation tasks, in C. More...

Functions

int PIOc_Init_Intracomm (MPI_Comm comp_comm, int num_iotasks, int stride, int base, int rearr, int *iosysidp)
 Library initialization used when IO tasks are a subset of compute tasks. More...
 
int PIOc_Init_Intracomm_from_F90 (int f90_comp_comm, const int num_iotasks, const int stride, const int base, const int rearr, rearr_opt_t *rearr_opts, int *iosysidp)
 Interface called from pio_init in Fortran. More...
 
int PIOc_init_async_from_F90 (int f90_world_comm, int num_io_procs, int *io_proc_list, int component_count, int *procs_per_component, int *flat_proc_list, int *f90_io_comm, int *f90_comp_comm, int rearranger, int *iosysidp)
 Interface called from pio_init in Fortran. More...
 
int PIOc_init_async_comms_from_F90 (int f90_world_comm, int component_count, int *f90_comp_comms, int f90_io_comm, int rearranger, int *iosysidp)
 Interface called from pio_init in Fortran. More...
 
int PIOc_init_async (MPI_Comm world, int num_io_procs, int *io_proc_list, int component_count, int *num_procs_per_comp, int **proc_list, MPI_Comm *user_io_comm, MPI_Comm *user_comp_comm, int rearranger, int *iosysidp)
 Library initialization used when IO tasks are distinct from compute tasks. More...
 
int PIOc_init_async_from_comms (MPI_Comm world, int component_count, MPI_Comm *comp_comm, MPI_Comm io_comm, int rearranger, int *iosysidp)
 Library initialization used when IO tasks are distinct from compute tasks. More...
 

Detailed Description

Initialize the IOSystem, including specifying the number of IO and computation tasks, in C.

Function Documentation

◆ PIOc_init_async()

int PIOc_init_async ( MPI_Comm  world,
int  num_io_procs,
int *  io_proc_list,
int  component_count,
int *  num_procs_per_comp,
int **  proc_list,
MPI_Comm *  user_io_comm,
MPI_Comm *  user_comp_comm,
int  rearranger,
int *  iosysidp 
)

Library initialization used when IO tasks are distinct from compute tasks.

This is a collective call. Input parameters are read on comp_rank=0; values on other tasks are ignored. This variation of PIO_init sets up a distinct set of tasks to handle IO; these tasks do not return from this call. Instead they enter an internal loop and wait to receive further instructions from the computational tasks.

Sequence of Events for Async I/O

Here is the sequence of events that must occur when an IO operation is called from the collection of compute tasks. This walkthrough uses pio_put_var, because write_darray has some special characteristics that make it a bit more complicated.

Compute tasks call pio_put_var with an integer argument.

An MPI_Send sends a message from comp_rank=0 to io_rank=0 on union_comm (a comm defined as the union of the IO and compute tasks). The msg is an integer which indicates the function being called; in this case the msg is PIO_MSG_PUT_VAR_INT.

The iotasks now know what additional arguments they should expect to receive from the compute tasks; in this case a file handle, a variable id, the length of the array, and the array itself.

The iotasks now have the information they need to complete the operation, and they call the pio_put_var routine. (In pio1 this bit of code is in pio_get_put_callbacks.F90.in)

After the netCDF operation is completed, the result (in the case of an inq or get operation) is communicated back to the compute tasks.

Parameters
world: the communicator containing all the available tasks.
num_io_procs: the number of processes for the IO component.
io_proc_list: an array of length num_io_procs with the processor number for each IO processor. If NULL, the IO processes are assigned starting at process 0.
component_count: number of computational components.
num_procs_per_comp: an array of int, of length component_count, with the number of processors in each computation component.
proc_list: an array of arrays containing the processor numbers for each computation component. If NULL, the computation components are assigned processors sequentially, starting with processor num_io_procs.
user_io_comm: pointer to an MPI_Comm. If not NULL, it will get an MPI duplicate of the IO communicator. (It is a full duplicate and must later be freed with MPI_Comm_free() by the caller.)
user_comp_comm: pointer to an array of MPI_Comm of length component_count. If not NULL, it will get an MPI duplicate of each computation communicator. (These are full duplicates and each must later be freed with MPI_Comm_free() by the caller.)
rearranger: the default rearranger to use for decompositions in this IO system. Only PIO_REARR_BOX is supported for async. Support for PIO_REARR_SUBSET will be provided in a future version.
iosysidp: pointer to an array of length component_count that gets the iosysid for each component.
Returns
PIO_NOERR on success, error code otherwise.
Author
Ed Hartnett

◆ PIOc_init_async_comms_from_F90()

int PIOc_init_async_comms_from_F90 ( int  f90_world_comm,
int  component_count,
int *  f90_comp_comms,
int  f90_io_comm,
int  rearranger,
int *  iosysidp 
)

Interface called from pio_init in Fortran.

Parameters
f90_world_comm: the incoming communicator which includes all tasks.
component_count: the number of computational components; an iosysid will be generated for each, and a comp_comm is expected for each.
f90_comp_comms: the comp_comm handles passed from Fortran.
f90_io_comm: the io_comm passed from Fortran.
rearranger: currently only PIO_REARR_BOX is supported.
iosysidp: pointer to an array of length component_count that gets the iosysid for each component.
Returns
0 for success, error code otherwise
Author
Jim Edwards

◆ PIOc_init_async_from_comms()

int PIOc_init_async_from_comms ( MPI_Comm  world,
int  component_count,
MPI_Comm *  comp_comm,
MPI_Comm  io_comm,
int  rearranger,
int *  iosysidp 
)

Library initialization used when IO tasks are distinct from compute tasks.

This is a collective call. Input parameters are read on each comp_rank=0 and on io_rank=0; values on other tasks are ignored. This variation of PIO_init uses the tasks in io_comm to handle IO; these tasks do not return from this call. Instead they enter an internal loop and wait to receive further instructions from the computational tasks.

Sequence of Events for Async I/O

Here is the sequence of events that must occur when an IO operation is called from the collection of compute tasks. This walkthrough uses pio_put_var, because write_darray has some special characteristics that make it a bit more complicated.

Compute tasks call pio_put_var with an integer argument.

An MPI_Send sends a message from comp_rank=0 to io_rank=0 on union_comm (a comm defined as the union of the IO and compute tasks). The msg is an integer which indicates the function being called; in this case the msg is PIO_MSG_PUT_VAR_INT.

The iotasks now know what additional arguments they should expect to receive from the compute tasks; in this case a file handle, a variable id, the length of the array, and the array itself.

The iotasks now have the information they need to complete the operation, and they call the pio_put_var routine. (In pio1 this bit of code is in pio_get_put_callbacks.F90.in)

After the netCDF operation is completed, the result (in the case of an inq or get operation) is communicated back to the compute tasks.

Parameters
world: the communicator containing all the available tasks.
component_count: number of computational components.
comp_comm: an array of size component_count containing the defined comm of each component; comp_comm should be MPI_COMM_NULL on tasks outside the tasks of each comm. These comms may overlap.
io_comm: a communicator for the IO group; tasks in this comm do not return from this call.
rearranger: the default rearranger to use for decompositions in this IO system. Only PIO_REARR_BOX is supported for async. Support for PIO_REARR_SUBSET will be provided in a future version.
iosysidp: pointer to an array of length component_count that gets the iosysid for each component.
Returns
PIO_NOERR on success, error code otherwise.
Author
Jim Edwards

◆ PIOc_init_async_from_F90()

int PIOc_init_async_from_F90 ( int  f90_world_comm,
int  num_io_procs,
int *  io_proc_list,
int  component_count,
int *  procs_per_component,
int *  flat_proc_list,
int *  f90_io_comm,
int *  f90_comp_comm,
int  rearranger,
int *  iosysidp 
)

Interface called from pio_init in Fortran.

Parameters
f90_world_comm: the incoming communicator which includes all tasks.
num_io_procs: the number of IO tasks.
io_proc_list: the ranks of the IO tasks in f90_world_comm.
component_count: the number of computational components; an iosysid will be generated for each.
procs_per_component: the number of procs in each computational component.
flat_proc_list: a 1D array of size component_count*maxprocs_per_component with ranks in f90_world_comm.
f90_io_comm: the io_comm handle to be returned to Fortran.
f90_comp_comm: the comp_comm handle to be returned to Fortran.
rearranger: currently only PIO_REARR_BOX is supported.
iosysidp: pointer to an array of length component_count that gets the iosysid for each component.
Returns
0 for success, error code otherwise
Author
Jim Edwards

◆ PIOc_Init_Intracomm()

int PIOc_Init_Intracomm ( MPI_Comm  comp_comm,
int  num_iotasks,
int  stride,
int  base,
int  rearr,
int *  iosysidp 
)

Library initialization used when IO tasks are a subset of compute tasks.

This function creates an MPI intracommunicator between a set of IO tasks and one or more sets of computational tasks.

The caller must create the comp_comm MPI communicator before calling this function; the IO communicator is created internally.

Internally, this function does the following:

  • Initializes the logging system (if PIO_ENABLE_LOGGING is set).
  • Allocates and initializes the iosystem_desc_t struct (ios).
  • MPI duplicates the user's comp_comm to ios->comp_comm and ios->union_comm.
  • Sets ios->my_comm to ios->comp_comm. (Not an MPI duplicate.)
  • Finds the MPI rank in comp_comm, determines the ranks of the IO tasks, and determines whether this task is one of the IO tasks.
  • Identifies the root IO tasks.
  • Creates MPI groups for the IO tasks and for the computation tasks.
  • On IO tasks, creates an IO communicator (ios->io_comm).
  • Assigns an iosystemid, and puts this iosystem_desc_t into the list of open iosystems.

When complete, there are three MPI communicators (ios->comp_comm, ios->union_comm, and ios->io_comm) that must eventually be freed via MPI.

Parameters
comp_comm: the MPI_Comm of the compute tasks.
num_iotasks: the number of IO tasks to use.
stride: the offset between IO tasks in comp_comm. The mod operator is used when computing the IO task ranks, with the formula: ios->ioranks[i] = (base + i * ustride) % ios->num_comptasks.
base: the comp_comm index of the first IO task.
rearr: the rearranger to use by default; this may be overridden in PIO_init_decomp(). The rearranger is not used until the decomposition is initialized.
iosysidp: index of the defined system descriptor.
Returns
0 on success, otherwise a PIO error code.
Author
Jim Edwards, Ed Hartnett

◆ PIOc_Init_Intracomm_from_F90()

int PIOc_Init_Intracomm_from_F90 ( int  f90_comp_comm,
const int  num_iotasks,
const int  stride,
const int  base,
const int  rearr,
rearr_opt_t *  rearr_opts,
int *  iosysidp 
)

Interface called from pio_init in Fortran.

Parameters
f90_comp_comm: the comp_comm handle passed from Fortran.
num_iotasks: the number of IO tasks.
stride: the stride to use when assigning tasks.
base: the starting point when assigning tasks.
rearr: the rearranger.
rearr_opts: the rearranger options.
iosysidp: a pointer that gets the IO system ID.
Returns
0 for success, error code otherwise
Author
Jim Edwards