realtime querying/aggregating millions of records - hadoop? hbase? cassandra?

I have a solution that can be parallelized, but I don't (yet) have experience with hadoop/nosql, and I'm not sure which solution is best for my needs. In theory, if I had unlimited CPUs, my results should come back instantaneously. So, any help would be appreciated. Thanks!

Here's what I have:

  • 1000s of datasets
  • dataset keys:
    • all datasets have the same keys
    • 1 million keys (this may later be 10 or 20 million)
  • dataset columns:
    • each dataset has the same columns
    • 10 to 20 columns
    • most columns are numerical values that we need to aggregate (avg, stddev, and use R to calculate statistics)
    • a few columns are "type_id" columns, since in a particular query we may want to only include certain type_ids
  • web application
    • user can choose which datasets they are interested in (anywhere from 15 to 1000)
    • application needs to present: key, and aggregated results (avg, stddev) of each column
  • updates of data:
    • an entire dataset can be added, dropped, or replaced/updated
    • would be cool to be able to add columns; but, if required, can just replace the entire dataset
    • never add rows/keys to a dataset - so don't need a system with lots of fast writes
  • infrastructure:
    • currently two machines with 24 cores each
    • eventually, want the ability to also run this on Amazon
I can't precompute my aggregated values, but since each key is independent, this should be easily scalable. Currently, I have this data in a postgres database, where each dataset is in its own partition.

  • partitions are nice, since I can easily add/drop/replace partitions
  • database is nice for filtering based on type_id
  • databases aren't easy for writing parallel queries
  • databases are good for structured data, and my data is not structured

As a proof of concept I tried out hadoop (see the sketch after this list):

  • created a tab-separated file per dataset for a particular type_id
  • uploaded to hdfs
  • map: retrieved a value/column for each key
  • reduce: computed average and standard deviation
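
For concreteness, those map and reduce steps might look roughly like the following as a Python Hadoop Streaming job. This is only a sketch: the input layout (key in the first field, the numeric value at a fixed column index) is an assumption, not something stated in the setup above.

    # mapper.py - emit key<TAB>value for one numeric column of a tab-separated dataset file
    import sys

    VALUE_COLUMN = 2  # hypothetical position of the column being aggregated

    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) > VALUE_COLUMN:
            print("%s\t%s" % (fields[0], fields[VALUE_COLUMN]))

and the matching reducer, which only needs running sums per key rather than buffering all values:

    # reducer.py - input arrives grouped/sorted by key; keep count, sum and
    # sum of squares per key, then derive mean and stddev from them
    import math
    import sys

    def emit(key, n, s, ss):
        mean = s / n
        variance = max(ss / n - mean * mean, 0.0)
        print("%s\t%f\t%f" % (key, mean, math.sqrt(variance)))

    current, n, s, ss = None, 0, 0.0, 0.0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        x = float(value)
        if key != current:
            if current is not None:
                emit(current, n, s, ss)
            current, n, s, ss = key, 0, 0.0, 0.0
        n += 1
        s += x
        ss += x * x
    if current is not None:
        emit(current, n, s, ss)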

From my crude proof of concept, I can see this will scale nicely, but I can also see that hadoop/hdfs has latency, and I've read that it's generally not used for real-time querying (even though I'm ok with returning results back to users within 5 seconds).

Any suggestions on how I should approach this? I was thinking of trying HBase next to get a feel for that. Should I instead look at Hive? Cassandra? Voldemort?

thanks!


Hive or Pig don't seem like they would help you. Essentially, each of them compiles down to one or more map/reduce jobs, so the response cannot be within 5 seconds.

HBase may work, although your infrastructure is a bit small for optimal performance. I don't understand why you can't pre-compute summary statistics for each column. You should look up computing running averages so that you don't have to do heavyweight reduces.

Check out http://en.wikipedia.org/wiki/Standard_deviation

stddev(X) = sqrt(E[X^2] - (E[X])^2)

This implies that you can get the stddev of the combined data AB by computing

sqrt(E[AB^2] - (E[AB])^2), where E[AB^2] = (sum(A^2) + sum(B^2)) / (|A| + |B|) and E[AB] = (sum(A) + sum(B)) / (|A| + |B|).
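
In other words, if you keep count, sum, and sum of squares per (key, column) for each dataset, the avg/stddev of any subset of datasets is just a merge of those triples - no raw values are needed at query time. A minimal sketch of that merge (the example triples are made-up numbers, not the poster's data):

    import math

    def combine(summaries):
        # summaries: iterable of (count, sum, sum_of_squares) triples, one per dataset
        n = sum(c for c, _, _ in summaries)
        total = sum(s for _, s, _ in summaries)
        total_sq = sum(q for _, _, q in summaries)
        mean = total / n
        variance = max(total_sq / n - mean * mean, 0.0)  # E[X^2] - (E[X])^2
        return mean, math.sqrt(variance)

    # dataset A: 3 values, sum 12, sum of squares 50
    # dataset B: 2 values, sum 9,  sum of squares 41
    print(combine([(3, 12.0, 50.0), (2, 9.0, 41.0)]))  # -> (4.2, ~0.748)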


Since your data seems to be pretty much homogeneous, I would definitely take a look at Google BigQuery - you can ingest and analyze the data without a MapReduce step (on your part), and the RESTful API will help you create a web application based on your queries. In fact, depending on how you want to design your application, you could create a fairly 'real time' application.
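
If you went that route, the web tier could be driven by a single aggregate query. The following is only a sketch using the google-cloud-bigquery Python client; the project, table, and column names are invented, and the actual schema would depend on how you load the datasets:

    from google.cloud import bigquery  # assumes the google-cloud-bigquery package

    client = bigquery.Client()

    # Hypothetical table 'mydata.datasets' with columns: dataset_id, key, type_id, col1, ...
    sql = """
        SELECT key, AVG(col1) AS col1_avg, STDDEV(col1) AS col1_stddev
        FROM `my-project.mydata.datasets`
        WHERE dataset_id IN UNNEST(@datasets) AND type_id = @type_id
        GROUP BY key
    """
    job = client.query(
        sql,
        job_config=bigquery.QueryJobConfig(
            query_parameters=[
                bigquery.ArrayQueryParameter("datasets", "STRING", ["ds001", "ds002"]),
                bigquery.ScalarQueryParameter("type_id", "INT64", 7),
            ]
        ),
    )
    for row in job.result():
        print(row.key, row.col1_avg, row.col1_stddev)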


It is a serious problem without an immediate good solution in the open source space. In the commercial space, MPP databases like Greenplum/Netezza should do. Ideally you would need Google's Dremel (the engine behind BigQuery). We are developing an open source clone, but it will take some time... Regardless of the engine used, I think the solution should include holding the whole dataset in memory - that should give an idea of what size of cluster you need.
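
As a rough back-of-the-envelope check on the in-memory requirement (my arithmetic, using the sizes from the question and assuming 8-byte numeric values, ignoring keys and overhead):

    # 1000 datasets x 1M keys x 20 numeric columns x 8 bytes per value
    datasets, keys, columns, bytes_per_value = 1000, 1000000, 20, 8
    total = datasets * keys * columns * bytes_per_value
    print(total / 2**30, "GiB")  # ~149 GiB; roughly 1.5 TiB if keys grow to 10 million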


If I understand you correctly, and you only need to aggregate on single columns at a time, you can store your data differently for better results in HBase. That would look something like:

  • a table per data column in today's setup, plus another single table for the filtering fields (type_ids)
  • a row for each key in today's setup - you may want to think about how to incorporate your filter fields into the key for efficient filtering, otherwise you'd have to do a two-phase read
  • a column for each table (dataset) in today's setup, i.e. a few thousand columns

HBase doesn't mind if you add new columns, and it is sparse in the sense that it doesn't store data for columns that don't exist. When you read a row you'd get all the relevant values, from which you can compute the avg etc. quite easily, as sketched below.
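
A sketch of reading such a per-column table with the happybase Python client. The host, table, column family, and dataset-naming scheme are all invented, and this assumes the row-per-key / column-per-dataset layout described above:

    import math
    import happybase  # Python client for HBase's Thrift gateway

    connection = happybase.Connection("hbase-host")  # hypothetical host
    table = connection.table("col1_values")          # hypothetical: one table per data column

    # The 15-1000 datasets the user selected, stored as columns in family 'd'.
    wanted = set(("d:ds%04d" % i).encode() for i in range(1, 16))

    # One row per key, one column per dataset; aggregate across the chosen columns.
    for key, columns in table.scan(columns=[b"d"]):
        values = [float(v) for c, v in columns.items() if c in wanted]
        if values:
            n = len(values)
            mean = sum(values) / n
            variance = max(sum(v * v for v in values) / n - mean * mean, 0.0)
            print(key, mean, math.sqrt(variance))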


You might want to use a plain old database for this. It doesn't sound like you have a transactional system, so you can probably get away with just one or two large tables. SQL has problems when you need to join over large data, but since your data set doesn't sound like it needs joins, you should be fine. You can set up indexes to find the data sets, and then do the math either in SQL or in the app.
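
For example, against the existing partition-per-dataset Postgres layout this could be a single aggregate query. The following is only a sketch with psycopg2; the connection string and all table/column names are invented:

    import psycopg2

    conn = psycopg2.connect("dbname=stats")  # connection string is illustrative
    cur = conn.cursor()

    # Hypothetical parent table 'measurements' partitioned by dataset_id,
    # with columns: dataset_id, key, type_id, col1 ... colN.
    cur.execute(
        """
        SELECT key, AVG(col1), STDDEV_POP(col1)
        FROM measurements
        WHERE dataset_id = ANY(%s) AND type_id = %s
        GROUP BY key
        """,
        (["ds001", "ds002", "ds003"], 7),
    )
    for key, avg, stddev in cur.fetchall():
        print(key, avg, stddev)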
