MPI_Intercomm_create

Creates an intercommunicator from two intracommunicators

Synopsis

#include "mpi.h"
int MPI_Intercomm_create( MPI_Comm local_comm, int local_leader,
                          MPI_Comm peer_comm, int remote_leader,
                          int tag, MPI_Comm *comm_out )

Input Parameters

local_comm Local (intra)communicator
local_leader Rank in local_comm of leader (often 0)
peer_comm Peer communicator; significant only at the local_leader
remote_leader Rank in peer_comm of remote leader (often 0)
tag Message tag used in constructing the intercommunicator; if multiple MPI_Intercomm_create calls are being made, they should use different tags (more precisely, ensure that the local and remote leaders use different tags for each MPI_Intercomm_create).

Output Parameter

comm_out Created intercommunicator
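
A minimal usage sketch (the tag value 99 and the variable names are arbitrary choices): split MPI_COMM_WORLD into two disjoint halves and connect them with an intercommunicator, using MPI_COMM_WORLD itself as the peer communicator. Run with at least two processes.

    #include "mpi.h"

    int main( int argc, char *argv[] )
    {
        MPI_Comm local_comm, intercomm;
        int      world_rank, world_size, color, remote_leader;

        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &world_rank );
        MPI_Comm_size( MPI_COMM_WORLD, &world_size );

        /* Split the world into two disjoint halves */
        color = (world_rank < world_size / 2) ? 0 : 1;
        MPI_Comm_split( MPI_COMM_WORLD, color, world_rank, &local_comm );

        /* Each half's leader is its rank-0 process; the remote leader
           is named by its rank in peer_comm (here MPI_COMM_WORLD) */
        remote_leader = (color == 0) ? world_size / 2 : 0;
        MPI_Intercomm_create( local_comm, 0, MPI_COMM_WORLD,
                              remote_leader, 99, &intercomm );

        /* ... communicate through intercomm ... */

        MPI_Comm_free( &intercomm );
        MPI_Comm_free( &local_comm );
        MPI_Finalize();
        return 0;
    }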

Notes

The MPI 1.1 Standard contains two mutually exclusive comments on the input intracommunicators. One says that their respective groups must be disjoint; the other that the leaders can be the same process. After some discussion by the MPI Forum, it has been decided that the groups must be disjoint. Note that the reason given for this in the standard is not the reason for this choice; rather, the other operations on intercommunicators (like MPI_Intercomm_merge) do not make sense if the groups are not disjoint.
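
For example, continuing the sketch above (intercomm and color as defined there), MPI_Intercomm_merge combines the two disjoint groups into a single intracommunicator in which every process holds exactly one rank:

    MPI_Comm merged;

    /* high = color orders the upper half's ranks after the lower half's */
    MPI_Intercomm_merge( intercomm, color, &merged );
    /* ... use merged as an ordinary intracommunicator ... */
    MPI_Comm_free( &merged );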

Notes for Fortran

All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK) have an additional argument ierr at the end of the argument list. ierr is an integer and has the same meaning as the return value of the routine in C. In Fortran, MPI routines are subroutines, and are invoked with the call statement.

All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER in Fortran.

Algorithm

1) Allocate a send context, an inter-coll context, and an intra-coll context.
2) If I'm the local_leader, send "send_context" and the lrank_to_grank list from the local comm group.
3) If I'm the local leader, then wait on the posted sends and receives to complete. Post the receive for the remote group information and wait for it to complete.
4) Broadcast the information received from the remote leader (steps 2-4 are sketched below).
5) Create the inter_communicator from the information we now have.
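
The fragment below sketches the leader handshake of steps 2-4. It is illustrative only; the two-element info payload and the name exchange_group_info are hypothetical, not MPICH internals.

    #include "mpi.h"

    /* Hypothetical sketch of steps 2-4: the leaders exchange group
       information over peer_comm, then each leader broadcasts the
       remote group's information within its own intracommunicator */
    static void exchange_group_info( MPI_Comm local_comm, int local_leader,
                                     MPI_Comm peer_comm, int remote_leader,
                                     int tag, long local_info[2],
                                     long remote_info[2] )
    {
        int        rank;
        MPI_Status status;

        MPI_Comm_rank( local_comm, &rank );
        if (rank == local_leader) {
            /* Steps 2-3: exchange with the remote leader via peer_comm */
            MPI_Sendrecv( local_info, 2, MPI_LONG, remote_leader, tag,
                          remote_info, 2, MPI_LONG, remote_leader, tag,
                          peer_comm, &status );
        }
        /* Step 4: every member of the local group learns the result */
        MPI_Bcast( remote_info, 2, MPI_LONG, local_leader, local_comm );
    }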
An inter-communicator ends up with three levels of communicators: the inter-communicator returned to the user, a "collective" inter-communicator that can be used for safe communication between the local and remote groups, and a collective intra-communicator that can be used to allocate new contexts during the merge and dup operations.

For the resulting inter-communicator, comm_out

       comm_out                       = inter-communicator
       comm_out->comm_coll            = "collective" inter-communicator
       comm_out->comm_coll->comm_coll = safe collective intra-communicator

Errors

All MPI routines (except MPI_Wtime and MPI_Wtick) return an error value; C routines as the value of the function and Fortran routines in the last argument. Before the value is returned, the current MPI error handler is called. By default, this error handler aborts the MPI job. The error handler may be changed with MPI_Errhandler_set; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
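
For example, to have MPI_Intercomm_create return an error code rather than abort (local_comm, remote_leader, and intercomm as in the earlier sketch):

    int err;

    MPI_Errhandler_set( local_comm, MPI_ERRORS_RETURN );
    err = MPI_Intercomm_create( local_comm, 0, MPI_COMM_WORLD,
                                remote_leader, 99, &intercomm );
    if (err != MPI_SUCCESS) {
        fprintf( stderr, "MPI_Intercomm_create failed (code %d)\n", err );
    }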

MPI_SUCCESS
No error; MPI routine completed successfully.
MPI_ERR_COMM
Invalid communicator. A common error is to use a null communicator in a call (not even allowed in MPI_Comm_rank).
MPI_ERR_TAG
Invalid tag argument. Tags must be non-negative; tags in a receive (MPI_Recv, MPI_Irecv, MPI_Sendrecv, etc.) may also be MPI_ANY_TAG. The largest tag value is available through the attribute MPI_TAG_UB.
MPI_ERR_INTERN
This error is returned when some part of the MPICH implementation is unable to acquire memory.
MPI_ERR_RANK
Invalid source or destination rank. Ranks must be between zero and the size of the communicator minus one; ranks in a receive (MPI_Recv, MPI_Irecv, MPI_Sendrecv, etc.) may also be MPI_ANY_SOURCE.

See Also

MPI_Intercomm_merge, MPI_Comm_free, MPI_Comm_remote_group,
MPI_Comm_remote_size

Location: ic_create.c