
How do I find strings and string patterns in a set of many files?

Source: https://www.devze.com, 2023-04-07 06:03

I have a collection of about two million text files, that total about 10GB uncompressed. I would like to find documents containing phrases in this collection, which look like "every time" or "bill clinton" (simple case-insensitive string matching). I would also like to find phrases with fuzzy contents; e.g. "for * weeks".

I've tried indexing with Lucene, but it is no good at finding phrases containing stop words, as these are removed at index time by default. xargs and grep are a slow solution. What's fast and appropriate for this amount of data?
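For the "fuzzy contents" part of the question, a wildcard phrase like "for * weeks" can be translated into a plain case-insensitive regex and scanned with any regex engine. A minimal Python sketch (the `phrase_to_regex` helper and its wildcard convention are my own, not from any of the tools mentioned):

```python
import re

def phrase_to_regex(phrase):
    # Translate a simple wildcard phrase like "for * weeks" into a
    # case-insensitive regex; "*" stands for one whitespace-separated word.
    parts = [r"\S+" if p == "*" else re.escape(p) for p in phrase.split()]
    return re.compile(r"\s+".join(parts), re.IGNORECASE)

pat = phrase_to_regex("for * weeks")
print(bool(pat.search("He stayed for three weeks.")))  # True
print(bool(pat.search("weeks and then for three")))    # False
```

The same compiled pattern could then be applied file by file; the per-file loop is straightforward, but for two million files the I/O strategy (parallelism, memory-mapping) matters more than the regex itself.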


You may want to check out the ugrep utility for fuzzy search, which is much faster than agrep:

ugrep -i -Z PATTERN ...

This runs multiple threads (typically 8 or more) to search files concurrently. Option -i is for case-insensitive search and -Z specifies fuzzy search. You can increase the fuzziness from 1 to 3 with -Z3 to allow up to 3 errors (max edit distance 3), or allow only up to 3 insertions (extra characters) with -Z+3, for example. Unicode regex matching is supported by default; for example, fur fuzzy-matches für (i.e. one substitution).


You could use a PostgreSQL database. It has a full-text search implementation, and by using dictionaries you can define your own stop words. I don't know if it helps much, but I would give it a try.
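The reason a custom stop-word list matters here is the failure mode from the question: if the analyzer drops stop words at index time, a phrase containing one can never be matched. A toy Python illustration (the stop-word list is illustrative, not any engine's actual defaults):

```python
STOP_WORDS = {"for", "the", "a", "of"}  # illustrative analyzer defaults

def tokenize(text, stop_words=frozenset()):
    # Lowercase word tokens, minus whatever the analyzer treats as stop words.
    return [w for w in text.lower().split() if w not in stop_words]

doc = "He stayed for three weeks"
# With default stop words, "for" never reaches the index, so the
# phrase "for three weeks" cannot be found:
print(tokenize(doc, STOP_WORDS))  # ['he', 'stayed', 'three', 'weeks']
# With an empty stop-word list the phrase survives intact:
print(tokenize(doc))              # ['he', 'stayed', 'for', 'three', 'weeks']
```

In PostgreSQL the equivalent fix is configuring the text-search dictionary with an empty (or custom) stop-word file, so phrase queries over common words keep working.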

