Papers on fast validation of UTF-8

Are there any papers on state-of-the-art UTF-8 validators/decoders? I've seen implementations "in the wild" that use clever loops that process up to 8 bytes per iteration in common cases (e.g. all 7-bit ASCII input).
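For reference, here is a minimal sketch (my own, not taken from any particular library) of the kind of fast path I mean, assuming the input is a byte[] and reading 8 bytes at a time through a ByteBuffer:

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public final class AsciiFastPath {

        /** Returns the length of the leading run of 7-bit ASCII bytes. */
        static int asciiPrefixLength(byte[] data) {
            ByteBuffer buf = ByteBuffer.wrap(data);
            int i = 0;
            // Fast path: read 8 bytes at a time; the mask is non-zero as soon
            // as any byte has its high bit set (byte order does not matter here).
            for (; i + 8 <= data.length; i += 8) {
                if ((buf.getLong(i) & 0x8080808080808080L) != 0) {
                    break;
                }
            }
            // Slow path: finish byte by byte.
            while (i < data.length && (data[i] & 0x80) == 0) {
                i++;
            }
            return i;
        }

        public static void main(String[] args) {
            byte[] ascii = "plain ASCII text".getBytes(StandardCharsets.UTF_8);
            byte[] mixed = "caf\u00e9 latte".getBytes(StandardCharsets.UTF_8);
            System.out.println(asciiPrefixLength(ascii)); // 16 (whole input)
            System.out.println(asciiPrefixLength(mixed)); // 3 (stops at the first non-ASCII byte)
        }
    }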


I don't know about papers; it's probably a bit too specific and narrow a subject for strictly scientific analysis, more of an engineering problem. You can start by looking at how this is handled in different libraries. Some solutions use language-specific tricks while others are very general. For Java, you can start with the code of UTF8ByteBufferReader, part of Javolution. I have found it to be much faster than the character set converters built into the language. I believe (but I'm not sure) that the latter use a common piece of code for many encodings plus encoding-specific data files, whereas Javolution has code designed specifically for UTF-8.

There are also techniques for specific tasks. For example, if you only need to know how many bytes a UTF-8 character occupies as you parse the text, you can use a 256-entry table indexed by the first byte of the encoded character; skipping over characters or calculating a string's length in characters this way is much faster than using bit operations and conditionals.
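A minimal sketch of such a length table, assuming the input is already known to be valid UTF-8 and counting code points (the class and method names are just for illustration):

    public final class Utf8LengthTable {

        // Index by the lead byte to get the sequence length in bytes;
        // 0 marks continuation bytes and invalid lead bytes.
        private static final int[] SEQ_LENGTH = new int[256];
        static {
            for (int b = 0x00; b <= 0x7F; b++) SEQ_LENGTH[b] = 1; // ASCII
            for (int b = 0xC2; b <= 0xDF; b++) SEQ_LENGTH[b] = 2; // 2-byte lead
            for (int b = 0xE0; b <= 0xEF; b++) SEQ_LENGTH[b] = 3; // 3-byte lead
            for (int b = 0xF0; b <= 0xF4; b++) SEQ_LENGTH[b] = 4; // 4-byte lead
            // 0x80..0xBF (continuations) and other invalid leads stay 0.
        }

        /** Counts code points by hopping from lead byte to lead byte. */
        static int charCount(byte[] utf8) {
            int count = 0;
            int i = 0;
            while (i < utf8.length) {
                int len = SEQ_LENGTH[utf8[i] & 0xFF];
                if (len == 0) throw new IllegalArgumentException("invalid lead byte at " + i);
                i += len;
                count++;
            }
            return count;
        }

        public static void main(String[] args) {
            byte[] s = "a\u00e9\u4e2d".getBytes(java.nio.charset.StandardCharsets.UTF_8);
            System.out.println(charCount(s)); // 3
        }
    }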

For some situations, e.g. if you can spare some memory and you know that most characters you encounter will be from the Basic Multilingual Plane, you could try even more aggressive lookup tables: first compute the length in bytes by the method described above, and if it's 1 or 2 bytes (maybe 3 makes sense too), look up the decoded char in a table. Remember, however, to benchmark this and any other algorithm you try, as it need not be faster at all (bit operations are quite fast, and with a big lookup table you lose locality of reference; the offset calculation isn't completely free either).
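A sketch of such a table for the 2-byte case, decoding via a 64K-entry table indexed by (lead << 8) | trail; again, the names are made up and this is only a starting point for your own benchmarking:

    public final class TwoByteDecodeTable {

        private static final char INVALID = '\uFFFD';
        private static final char[] TWO_BYTE = new char[256 * 256];
        static {
            java.util.Arrays.fill(TWO_BYTE, INVALID);
            // Valid 2-byte sequences: lead 0xC2..0xDF, trail 0x80..0xBF.
            for (int lead = 0xC2; lead <= 0xDF; lead++) {
                for (int trail = 0x80; trail <= 0xBF; trail++) {
                    TWO_BYTE[(lead << 8) | trail] =
                            (char) (((lead & 0x1F) << 6) | (trail & 0x3F));
                }
            }
        }

        /** Decodes the 2-byte sequence starting at offset i, or returns U+FFFD. */
        static char decode2(byte[] utf8, int i) {
            return TWO_BYTE[((utf8[i] & 0xFF) << 8) | (utf8[i + 1] & 0xFF)];
        }

        public static void main(String[] args) {
            byte[] e = "\u00e9".getBytes(java.nio.charset.StandardCharsets.UTF_8);
            System.out.println(decode2(e, 0)); // é
        }
    }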

Anyway, I suggest you start by looking at the Javolution code or another similar library.
