
Processing 2 million records with Perl

Developer https://www.devze.com 2023-04-13 04:45 Source: Web

I have 2 million records in a database. Is it possible to fetch them all and store them in a Perl hash reference without running out of memory?


What is your reason for reading them all into memory? Speed, or ease of coding (i.e. treating the whole thing as a hashref)?

If it's the former, then sure, I think; you just need a ton of RAM.

If it's the latter, then there are interesting options. For example, there are tied interfaces for databases that look like native Perl hashes but in reality query and return data as needed. A quick search of CPAN turns up Tie::DBI, Tie::Hash::DBD, and several tied interfaces for specific databases, flat-file DBs, and CSV files, including my own Tie::Array::CSV.
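As a rough sketch of the tied-hash approach, here is what Tie::DBI usage looks like; the DSN, table name, key column, and credentials are all placeholders you would replace with your own:

```perl
use strict;
use warnings;
use Tie::DBI;

# Hypothetical connection details; adjust DSN, table, key, and credentials.
tie my %record, 'Tie::DBI', {
    db       => 'mysql:mydb',   # DBI DSN fragment
    table    => 'records',
    key      => 'id',
    user     => 'dbuser',
    password => 'secret',
};

# %record looks like an ordinary hash, but each lookup issues a query,
# so only the rows you actually touch are held in memory.
my $row = $record{42};          # fetches the row with id = 42 on demand
print $row->{name}, "\n";       # each row comes back as a hashref of columns
```

The trade-off is that hash-like convenience costs you a query per access, so this suits random lookups better than a full scan of all two million rows.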


On the one hand, processing two million elements in a hash isn't unheard of. On the other hand, we don't know how big your records are. In any case, this sounds like an XY problem: loading everything into memory may not be the best solution for the problem you're actually facing.

Why not use DBIx::Class, so that your tables can be treated like Perl classes (which are themselves glorified data structures)? There's a ton of documentation at DBIx::Class::Manual::DocMap. This is really what DBIx::Class is all about: letting you abstract away the SQL details of the database and treat it as a series of classes.
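For instance, instead of slurping all two million rows into a hashref, you can walk a DBIx::Class resultset row by row; the schema class, resultset name, column, and `process()` helper below are all hypothetical stand-ins for your own:

```perl
use strict;
use warnings;
use MySchema;   # hypothetical DBIx::Class schema class for your database

my $schema = MySchema->connect('dbi:mysql:mydb', 'dbuser', 'secret');

# Iterate with a cursor rather than loading everything at once:
# each call to next() fetches one row from the database.
my $rs = $schema->resultset('Record');
while ( my $record = $rs->next ) {
    process( $record->name );   # process() is a placeholder for your logic
}
```

This keeps memory use roughly constant regardless of table size, at the cost of holding a database cursor open for the duration of the loop.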


That depends entirely on how much data your records hold. Perl hashes and arrays take up more memory than you'd expect, though not absurdly more. Again, it comes down to what your data looks like and how much RAM you have; Perl itself won't have any problem with it if the RAM is there.
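One way to answer this empirically is to build a small sample and measure it with the Devel::Size module from CPAN, then extrapolate; the record shape below is a made-up example, so substitute fields that match your actual rows:

```perl
use strict;
use warnings;
use Devel::Size qw(total_size);

# Build 1,000 sample records shaped like your real data (fields are invented).
my %sample;
$sample{$_} = { id => $_, name => "name_$_", note => 'x' x 100 } for 1 .. 1000;

# total_size() walks the structure and reports its total memory footprint.
my $bytes = total_size( \%sample );
printf "sample: %d bytes; extrapolated for 2M records: ~%.1f GB\n",
    $bytes, $bytes / 1000 * 2_000_000 / 1024**3;
```

If the extrapolated figure comfortably fits your machine's RAM, loading everything is viable; if not, one of the streaming or tied approaches above is the safer bet.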
