Basic operations:
(Much of what follows is adapted from Bill Gropp’s material.)
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
MPI_COMM_WORLD is the default communicator, containing all processes.
To send or receive, need to specify:
- The message contents as (address, count, datatype). Built-in datatypes
  include MPI_INT and MPI_DOUBLE. (Complex derived datatypes may hurt
  performance.)
- An integer tag to label messages. MPI_ANY_TAG is a wildcard on receives.
Basic blocking point-to-point communication:
int MPI_Send(void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm);
int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
             int source, int tag, MPI_Comm comm,
             MPI_Status *status);
MPI_ANY_SOURCE and MPI_ANY_TAG are wildcards on receives.

Ping-pong timing loop, in pseudocode.
Process 0:
for i = 1:ntrials
send b bytes to 1
recv b bytes from 1
end
Process 1:
for i = 1:ntrials
recv b bytes from 0
send b bytes to 0
end
void ping(char* buf, int n, int ntrials, int p)
{
    for (int i = 0; i < ntrials; ++i) {
        MPI_Send(buf, n, MPI_CHAR, p, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, n, MPI_CHAR, p, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);  /* not NULL: use MPI_STATUS_IGNORE
                                         when the status is unwanted */
    }
}
(Pong is similar)
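Concretely, pong is the mirror image of ping: receive first, then send, matching the process-1 pseudocode above. A sketch:

```c
#include <mpi.h>

/* Pong: partner p sends first, so we receive, then echo back. */
void pong(char* buf, int n, int ntrials, int p)
{
    for (int i = 0; i < ntrials; ++i) {
        MPI_Recv(buf, n, MPI_CHAR, p, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(buf, n, MPI_CHAR, p, 0, MPI_COMM_WORLD);
    }
}
```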
for (int sz = 1; sz <= MAX_SZ; sz += 1000) {
    if (rank == 0) {
        clock_t t1, t2;
        t1 = clock();
        ping(buf, sz, NTRIALS, 1);
        t2 = clock();
        printf("%d %g\n", sz, (double) (t2-t1)/CLOCKS_PER_SEC);
    } else if (rank == 1) {
        pong(buf, sz, NTRIALS, 0);
    }
}
On my laptop (OpenMPI):
mpicc -std=c99 pingpong.c -o pingpong.x
mpirun -np 2 ./pingpong.x
Details vary, but this is pretty normal.
Approximate α-β parameters (MacBook with OpenMPI)
Can write a lot of MPI code with 6 operations we’ve seen:
MPI_Init
MPI_Finalize
MPI_Comm_size
MPI_Comm_rank
MPI_Send
MPI_Recv
... but there are sometimes better ways.
Next time: non-blocking and collective operations!