Bulk ingest into Redis

I'm trying to load a large piece of data into Redis as fast as possible.

My data looks like:

771240491921 SOME;STRING;ABOUT;THIS;LENGTH
345928354912 SOME;STRING;ABOUT;THIS;LENGTH

There is a ~12-digit number on the left and a variable-length string on the right. The number on the left will be the key and the string on the right will be the value.

With an out-of-the-box Redis instance I just installed and an uncompressed plain-text file of this data, I can load about a million records per minute. I need to load about 45 million, which would take about 45 minutes, and 45 minutes is too long.

Are there standard performance tweaks for this kind of bulk load? Would I get better performance by sharding across separate instances?


The fastest way to do this is the following: generate the Redis protocol out of this data. The documentation for the Redis protocol is on the redis.io site; it is a trivial protocol. Once you have that, just call the file appendonly.log and start Redis in append-only mode.
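For example, here is a minimal sketch of such a generator in Python; the input and output file names are assumptions, and each line becomes a SET command encoded as a RESP array of bulk strings:

# A minimal sketch: convert "key value" lines into Redis protocol (RESP)
# SET commands. File names data.txt and appendonly.log are assumptions.
def resp_command(*args: bytes) -> bytes:
    # Each command is an array: *<argc>, then $<len> and the bytes per argument.
    parts = [b"*%d\r\n" % len(args)]
    for arg in args:
        parts.append(b"$%d\r\n%s\r\n" % (len(arg), arg))
    return b"".join(parts)

with open("data.txt", "rb") as src, open("appendonly.log", "wb") as dst:
    for raw in src:
        key, _, value = raw.rstrip(b"\r\n").partition(b" ")
        if key and value:
            dst.write(resp_command(b"SET", key, value))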

You can even issue a FLUSHALL command first and then push the data into your server with netcat, redirecting the output to /dev/null.

This will be super fast: there is no RTT to wait on, it's just a bulk load of data.
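Concretely, assuming the file generated above and a server on localhost, that push is just: cat appendonly.log | nc localhost 6379 > /dev/null. The redirect throws away the stream of +OK replies.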

A less hackish way: just insert things 1000 at a time using pipelining. It's almost as fast as generating the protocol, but much cleaner :)
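As a sketch of that approach, assuming the redis-py client and the same data.txt input as above:

# A minimal sketch of pipelined inserts, 1000 commands per round trip,
# assuming the redis-py client (pip install redis).
import redis

r = redis.Redis(host="localhost", port=6379)
BATCH = 1000

with open("data.txt") as src:
    pipe = r.pipeline(transaction=False)  # plain batching, no MULTI/EXEC
    for i, line in enumerate(src, 1):
        key, _, value = line.rstrip("\n").partition(" ")
        pipe.set(key, value)
        if i % BATCH == 0:
            pipe.execute()  # one round trip flushes the whole batch
    pipe.execute()  # flush any final partial batch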


I like what Salvatore proposed, but here is one more very clear way: generate a feed for the CLI, e.g.

SET xxx yyy
SET xxx yyy
SET xxx yyy

and pipe it into redis-cli on a server close to you, as sketched below. Then do a SAVE, shut the server down, and move the data file to the destination server.
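As a sketch, under the same file-name assumptions as above (and noting that this plain-command feed only works because the values contain no spaces or quotes):

# A minimal sketch: turn "key value" lines into a SET-command feed
# for redis-cli. File names data.txt and commands.txt are assumptions.
with open("data.txt") as src, open("commands.txt", "w") as dst:
    for line in src:
        key, _, value = line.rstrip("\n").partition(" ")
        if key and value:
            dst.write(f"SET {key} {value}\n")

Then pipe it in with: cat commands.txt | redis-cli > /dev/null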
