parallel processing - MPI_Send is blocking in ring communication with large data size
I am trying to form a ring communication using MPI, where each process sends its result to the next process and the last process sends its result back to the 0th process. For example, with 4 processes: the 0th process sends its result to the 1st, the 1st to the 2nd, the 2nd to the 3rd, and the 3rd to the 0th.
#include "mpi.h" #include <stdio.h> #include<stdlib.h> #define nelem 1000 int main (int argc, char *argv[]) { int numtasks, rank, rc, i, dest = 1, tag = 111, source = 0, size; double *data, result; void *buffer; data=(double*)malloc(sizeof(double)*nelem); if(data==null) { printf("unable allocate memory\n"); return; } mpi_status status; mpi_init (&argc, &argv); mpi_comm_size (mpi_comm_world, &numtasks); mpi_comm_rank (mpi_comm_world, &rank); (i = 0; < nelem; i++) data[i] = (double) random (); if (rank == 0) source=numtasks-1; else source=rank-1; if(rank==numtasks-1) dest=0; else dest=rank+1; printf("rank %d sending data rank %d\n",rank,dest); mpi_send(data, nelem, mpi_double, dest, tag,mpi_comm_world); printf("rank %d send complete\n",rank); printf("rank %d receiving data rank %d\n",rank,source); mpi_recv (data, nelem, mpi_double, source, tag, mpi_comm_world,&status); printf("rank %d received data rank %d\n",rank,source); mpi_finalize (); }
Here nelem is the number of elements to send or receive. If I send fewer than 100 elements with 4 processes, the above code works fine, but if I increase the number of elements it gets blocked. I don't understand why it blocks. Is there a restriction on the data size that can be sent?
Thanks,
Ajay
All of the processes are trying to send, and none of them can, because none of them is ready to listen.
For smaller element sizes, I expect the message fits inside MPI's internal buffer, so MPI_Send can return before the matching receive is posted; once the message is too large to buffer, MPI_Send waits for the receiver, and every rank is stuck in its send.
As Jonathan suggests, the answer is to use MPI_Sendrecv(), or non-blocking communications.
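A minimal sketch of the MPI_Sendrecv() version, assuming the same nelem, tag, and neighbour computation as the code in the question (the separate receive buffer recvbuf is a name introduced here, so the outgoing data is not overwritten). MPI_Sendrecv posts the send and the receive as a single operation, so the ring cannot deadlock no matter how large the message is.

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

#define nelem 1000

int main (int argc, char *argv[])
{
    int numtasks, rank, i, dest, source, tag = 111;
    double *data, *recvbuf;
    MPI_Status status;

    MPI_Init (&argc, &argv);
    MPI_Comm_size (MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);

    data    = (double *) malloc (sizeof(double) * nelem);
    recvbuf = (double *) malloc (sizeof(double) * nelem);

    for (i = 0; i < nelem; i++)
        data[i] = (double) random ();

    /* neighbours in the ring, same computation as the original code */
    source = (rank == 0) ? numtasks - 1 : rank - 1;
    dest   = (rank == numtasks - 1) ? 0 : rank + 1;

    /* send to dest and receive from source in one call; the MPI
       library pairs the two, so no rank sits in a blocking send
       waiting for a receive that is never posted */
    MPI_Sendrecv (data,    nelem, MPI_DOUBLE, dest,   tag,
                  recvbuf, nelem, MPI_DOUBLE, source, tag,
                  MPI_COMM_WORLD, &status);

    printf ("rank %d sent to %d and received from %d\n", rank, dest, source);

    free (data);
    free (recvbuf);
    MPI_Finalize ();
    return 0;
}

The non-blocking variant would replace the MPI_Sendrecv() call above with a receive that is posted before the send and completed after it:

    MPI_Request req;
    MPI_Irecv (recvbuf, nelem, MPI_DOUBLE, source, tag, MPI_COMM_WORLD, &req);
    MPI_Send  (data,    nelem, MPI_DOUBLE, dest,   tag, MPI_COMM_WORLD);
    MPI_Wait  (&req, &status);

Because every rank has already posted its receive before it starts sending, the blocking MPI_Send always finds a matching receive waiting at the destination.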