
caching streaming-batch join calcs in spark structured streaming

Source: https://www.devze.com · 2022-12-07 21:59
I'm curious whether there is a way (or if it is optimal) to cache the result of a calculation carried out on a stream-batch join in spark Structured Streaming.

For example, I want to join a stream with a batch DataFrame, then do a window calculation on the joined data, and then output the result to two sinks (see the image below). If the initial data were two batch DataFrames, I would consider df.cache()'ing the result to make sure that the calculation doesn't run twice (once for each sink).
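A minimal sketch of the pipeline described above. The source, join key, schema, and sink choices here are illustrative assumptions, not details from the original post:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-batch-join").getOrCreate()

# Batch side of the join (hypothetical path; assume columns: id, category).
batch_df = spark.read.parquet("/data/lookup")

# Streaming side: the built-in rate source emits (timestamp, value) rows.
stream_df = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Stream-batch join on a hypothetical key.
joined = stream_df.join(batch_df, stream_df.value % 100 == batch_df.id)

# Window calculation on the joined data.
windowed = (joined
            .withWatermark("timestamp", "10 minutes")
            .groupBy(F.window("timestamp", "5 minutes"), "category")
            .count())

# Writing to two sinks the naive way starts two independent queries;
# each query re-executes the join and the aggregation from the source.
q1 = windowed.writeStream.outputMode("update").format("console").start()
q2 = (windowed.writeStream.outputMode("complete")
      .format("memory").queryName("windowed_counts").start())
```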

But in the streaming-batch join context, can/should you cache the result of the window calculation? Is there an advantage? A disadvantage? Or is there a better way to approach this?
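For context on what is even possible here: cache()/persist() cannot be applied directly to a streaming DataFrame (Spark raises an AnalysisException, since streaming queries must be started with writeStream), and the usual pattern for fanning one computed result out to several sinks without recomputing it is foreachBatch, persisting the micro-batch DataFrame inside the callback. A minimal sketch, with hypothetical sink paths and assuming a `windowed` streaming DataFrame like the one described above:

```python
# Assumes `windowed` is the streaming DataFrame produced by the
# stream-batch join + window calculation described in the question.
def write_to_two_sinks(batch_df, batch_id):
    # Inside foreachBatch, batch_df is a *static* DataFrame for this
    # micro-batch, so it can be persisted like any batch result.
    batch_df.persist()
    batch_df.write.mode("append").parquet("/sinks/first")   # hypothetical sink 1
    batch_df.write.mode("append").parquet("/sinks/second")  # hypothetical sink 2
    batch_df.unpersist()

# A single streaming query drives both writes, so the join and the
# window calculation run once per micro-batch instead of once per sink.
query = (windowed.writeStream
         .outputMode("update")
         .foreachBatch(write_to_two_sinks)
         .start())
```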

[Image: caching streaming-batch join calcs in spark structured streaming]


Comments

No comments yet...