I'm curious whether there is a way (or if it is optimal) to cache the result of a calculation carried out on a stream-batch join in Spark Structured Streaming.
For example, I want to join a stream with a batch DataFrame, then do a window calculation on the joined data, then output the result to two sinks (see the image below). If the initial data were two batch DataFrames, I would consider df.cache()-ing the result to make sure that the calculation doesn't run twice (once for each sink).
But in the streaming-batch join context, can/should you cache
the result of the window calculation? Is there an advantage? A disadvantage? Or is there a better way to approach this?
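For reference, here is a minimal sketch of the pipeline I mean, with hypothetical sources and sinks (a Kafka topic "events", a static Parquet lookup table, and console/Parquet outputs) just to make the question concrete:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-batch-join-window").getOrCreate()

# Static (batch) DataFrame -- in a pure batch job, the joined/windowed result
# below is what I would normally cache before writing to two sinks.
static_df = spark.read.parquet("/data/lookup")  # hypothetical path

# Streaming DataFrame
stream_df = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "host:9092")  # hypothetical broker
    .option("subscribe", "events")                   # hypothetical topic
    .load()
    .selectExpr("CAST(value AS STRING) AS key", "timestamp")
)

# Stream-batch join followed by a windowed aggregation
joined = stream_df.join(static_df, on="key", how="inner")
windowed = (
    joined
    .withWatermark("timestamp", "10 minutes")
    .groupBy(F.window("timestamp", "5 minutes"), "key")
    .count()
)

# The same windowed result feeds two sinks -- this is the recomputation
# that caching would avoid in a batch job.
q1 = (windowed.writeStream.outputMode("update")
      .format("console")
      .option("checkpointLocation", "/chk/console")  # hypothetical
      .start())
q2 = (windowed.writeStream.outputMode("append")
      .format("parquet")
      .option("path", "/out/windowed")               # hypothetical
      .option("checkpointLocation", "/chk/parquet")  # hypothetical
      .start())

spark.streams.awaitAnyTermination()
```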