Sending data to randomly selected hosts by using MPI

https://www.devze.com · 2023-04-03 09:12 · Source: web
I have 41 computers on the same local area network that use MPI. MPI works well on these machines without any problem. I want to use one of them to send a float number to the other 40 computers, selected at random. That is, the main distributor computer will randomly select a host and send a float number to it, and this process will be repeated. These 40 hosts will use these float numbers in their calculations. The random selection is needed for "heuristic optimization" reasons, so some hosts may be selected frequently, while others may be selected rarely (perhaps never).

I tried to understand blocking and nonblocking communication by reading the documentation and working through examples. As a result, I saw that I cannot use MPI_Send and MPI_Recv for the random selection I described, because by the nature of their blocking model the receiving hosts would have to wait for the distributor computer's send without doing any useful calculation. MPI_Isend and MPI_Irecv may be useful, but I could not find a way to use them: the example programs I found mostly call MPI_Wait, so they too end up waiting for data from the distributor without doing anything. My hosts must check for a message, but if there is none, they must continue their own calculations with the initial float values or the values previously received.

How can I do it? At the least, which functions can be used for this purpose?

Thanks for reading


MPI_Test is what you're looking for. It polls a nonblocking receive initiated by MPI_Irecv and returns immediately even if the communication hasn't completed yet. You can check the flag output parameter after the call to see whether a new message has arrived, and branch accordingly.
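A minimal sketch of what the worker-side loop could look like, assuming rank 0 is the distributor; the tag, iteration count, and buffer names are illustrative, not from the question:

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float value = 1.0f; /* initial value, used until a message arrives */
    if (rank != 0) {
        MPI_Request req;
        float incoming;
        /* Post the nonblocking receive once, up front. */
        MPI_Irecv(&incoming, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &req);
        for (int step = 0; step < 1000; ++step) {
            int flag = 0;
            MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
            if (flag) {
                value = incoming; /* new value from the distributor */
                /* Re-post the receive for the next message. */
                MPI_Irecv(&incoming, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &req);
            }
            /* ... one unit of this rank's own calculation using `value` ... */
        }
        MPI_Cancel(&req); /* clean up the still-pending receive at shutdown */
        MPI_Request_free(&req);
    }
    MPI_Finalize();
    return 0;
}
```

The key point is that MPI_Test never blocks: if no message has arrived yet, `flag` stays 0 and the loop simply continues with the current `value`.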


As suszterpatt mentioned, MPI_Test is a good way to solve this problem.

Alternatively, you could use MPI_Iprobe without posting a receive ahead of time, if for some reason you can't determine the size/shape of the message until after it has been sent to you. In general, MPI_Irecv + MPI_Test is preferable to MPI_Iprobe. Also, if you are writing multithreaded code, MPI_Iprobe may be entirely unusable.
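A sketch of the probe-then-receive pattern, again assuming rank 0 is the sender; the tag and element type are assumptions:

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank != 0) {
        int flag = 0;
        MPI_Status status;
        /* Check whether a message from rank 0 is waiting, without receiving it. */
        MPI_Iprobe(0, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
        if (flag) {
            int count;
            MPI_Get_count(&status, MPI_FLOAT, &count); /* size known only now */
            float *buf = malloc(count * sizeof(float));
            MPI_Recv(buf, count, MPI_FLOAT, status.MPI_SOURCE, status.MPI_TAG,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* ... use buf ... */
            free(buf);
        }
        /* otherwise: carry on with the local computation */
    }
    MPI_Finalize();
    return 0;
}
```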


If the potential refresh of the random number will be done at "regular intervals" to all the ranks, then a collective operation might be a good fit.

MPI_Scatter in particular allows each rank to receive a different value. This can be used to distribute the random number, provided that some "control" number can be established (0, MAXFLOAT, or something similar). MPI_Scatter can also deliver more than one number per rank (see the recvcount argument), which can be used to send pairs of numbers to each rank. A pattern can then be established to transmit both a "flag" and a "value": for instance, if the first number is positive, use the second number as a new seed; if the first number is negative, continue to use the last seed.

Alternatively, MPI_Scatter can be used to distribute a flag that tells the one selected rank to post a matching blocking MPI_Recv for the random number. This helps to ease the cleanup of the job and avoids leaving all the other ranks with an unmatched MPI_Recv.
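The flag/value pair idea above could be sketched as follows; the send-buffer layout, the sentinel convention (positive flag = "take the new value"), and the sample value are all assumptions for illustration:

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    float *sendbuf = NULL;
    if (rank == 0) {
        sendbuf = malloc(2 * size * sizeof(float));
        int chosen = rand() % size; /* the randomly selected rank */
        for (int r = 0; r < size; ++r) {
            sendbuf[2 * r]     = (r == chosen) ? 1.0f : -1.0f; /* flag */
            sendbuf[2 * r + 1] = 3.14f;                        /* value */
        }
    }
    float pair[2];
    /* Every rank receives its own (flag, value) pair from rank 0. */
    MPI_Scatter(sendbuf, 2, MPI_FLOAT, pair, 2, MPI_FLOAT, 0, MPI_COMM_WORLD);
    if (pair[0] > 0.0f) {
        /* this rank was selected: adopt pair[1] as its new value */
    }
    free(sendbuf);
    MPI_Finalize();
    return 0;
}
```

Note that a collective requires all ranks to reach the MPI_Scatter call, which is why this approach fits best when the refresh happens at regular, agreed-upon intervals.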
