DAY10: Reading about Streams in CUDA Asynchronous Concurrent Execution

  • 3 replies
  • 3962 views

sisiy

DAY10: Reading about Streams in CUDA Asynchronous Concurrent Execution
« on: May 13, 2018, 12:01:11 pm »
This post is about 263 words; reading time 10 minutes.
3.2.5.5.6. Callbacks

The runtime provides a way to insert a callback at any point into a stream via cudaStreamAddCallback(). A callback is a function that is executed on the host once all commands issued to the stream before the callback have completed. Callbacks in stream 0 are executed once all preceding tasks and commands issued in all streams before the callback have completed.

The following code sample (sketched at the end of this post) adds the callback function MyCallback to each of two streams after issuing a host-to-device memory copy, a kernel launch and a device-to-host memory copy into each stream. The callback will begin execution on the host after each of the device-to-host memory copies completes.

The commands that are issued in a stream (or all commands issued to any stream if the callback is issued to stream 0) after a callback do not start executing before the callback has completed. The last parameter of cudaStreamAddCallback() is reserved for future use.

A callback must not make CUDA API calls (directly or indirectly), as it might end up waiting on itself if it makes such a call, leading to a deadlock.

3.2.5.5.7. Stream Priorities

The relative priorities of streams can be specified at creation using cudaStreamCreateWithPriority(). The range of allowable priorities, ordered as [ highest priority, lowest priority ], can be obtained using the cudaDeviceGetStreamPriorityRange() function. At runtime, as blocks in low-priority streams finish, waiting blocks in higher-priority streams are scheduled in their place.

The following code sample (also sketched at the end of this post) obtains the allowable range of priorities for the current device, and creates streams with the highest and lowest available priorities.
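The two code samples referred to above are not reproduced in the post. A minimal sketch of the callbacks pattern follows; MyCallback is the name used in the guide text, while the placeholder kernel, the buffers, the launch configuration and the wrapper function launchWithCallbacks are assumptions added here only to make the snippet self-contained.

#include <cstdio>
#include <cuda_runtime.h>

// Trivial placeholder kernel so the sketch compiles on its own.
__global__ void MyKernel(float *out, const float *in, size_t n)
{
    size_t idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) out[idx] = in[idx] * 2.0f;
}

// Host function that the runtime calls back once all commands issued to the
// stream before the callback have completed.
void CUDART_CB MyCallback(cudaStream_t stream, cudaError_t status, void *data)
{
    printf("Inside callback %zu\n", (size_t)data);
}

// Issues H2D copy -> kernel -> D2H copy -> callback into each of two streams.
// hostPtr should point to page-locked (pinned) host memory for the copies
// to be truly asynchronous.
void launchWithCallbacks(cudaStream_t stream[2],
                         float *hostPtr[2], float *devIn[2], float *devOut[2],
                         size_t n)
{
    size_t bytes = n * sizeof(float);
    for (size_t i = 0; i < 2; ++i) {
        cudaMemcpyAsync(devIn[i], hostPtr[i], bytes, cudaMemcpyHostToDevice, stream[i]);
        MyKernel<<<(unsigned)((n + 255) / 256), 256, 0, stream[i]>>>(devOut[i], devIn[i], n);
        cudaMemcpyAsync(hostPtr[i], devOut[i], bytes, cudaMemcpyDeviceToHost, stream[i]);
        // Runs on the host after the device-to-host copy above has completed.
        // The last parameter is reserved for future use and must be 0.
        cudaStreamAddCallback(stream[i], MyCallback, (void *)i, 0);
    }
}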
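And a sketch of the stream-priorities sample under the same caveat (error checking omitted; the stream names st_high/st_low are only illustrative):

#include <cuda_runtime.h>

int main()
{
    // Query the allowable priority range for the current device.
    // Numerically lower values mean higher priority, so the range is
    // returned as (leastPriority, greatestPriority).
    int priority_low, priority_high;
    cudaDeviceGetStreamPriorityRange(&priority_low, &priority_high);

    // Create one stream with the highest and one with the lowest priority.
    cudaStream_t st_high, st_low;
    cudaStreamCreateWithPriority(&st_high, cudaStreamNonBlocking, priority_high);
    cudaStreamCreateWithPriority(&st_low,  cudaStreamNonBlocking, priority_low);

    // ... issue work to st_high / st_low here ...

    cudaStreamDestroy(st_low);
    cudaStreamDestroy(st_high);
    return 0;
}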


sisiy

(No subject)
« Reply #1 on: May 13, 2018, 05:54:02 pm »
Notes / experience sharing for this post:
A callback must not make CUDA API calls (directly or indirectly), as it might end up waiting on itself if it makes such a call leading to a deadlock.
A callback function must not call any CUDA API function, whether directly or indirectly, because if it does, the callback will end up waiting on itself and deadlock. This is actually quite intuitive: a CUDA stream executes in order, so the next task in a stream has to wait for the earlier tasks in that stream to finish. If a callback in a stream then issues another task to that stream and waits for it, the callback will very likely never see that task finish: the task cannot start until the callback has returned, while the callback is waiting for the task to complete... and so it deadlocks.
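To make the self-wait concrete, here is a minimal sketch of the forbidden pattern (invented here for illustration, not taken from the guide): a callback that itself calls a CUDA API.

#include <cuda_runtime.h>

// Anti-pattern -- do NOT do this. A stream callback must not call CUDA APIs.
void CUDART_CB BadCallback(cudaStream_t stream, cudaError_t status, void *userData)
{
    // Commands issued to the stream after this callback cannot start until the
    // callback returns, but this call also waits for those later commands.
    // The callback therefore ends up waiting on itself: deadlock.
    cudaStreamSynchronize(stream);
}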

(No subject)
« Reply #2 on: May 14, 2018, 10:13:38 am »
What is a callback function?


sisiy

(No subject)
« Reply #3 on: May 14, 2018, 12:15:06 pm »
What is a callback function?

A callback function, as the name suggests, is a user function that is called back by the CUDA Runtime, the very thing the user normally calls. The user hands their function to the Runtime, and the Runtime then invokes it at the appropriate time.
Because this reverses the usual user ---> Runtime direction into Runtime ---> user,
it is called a callback.