
How to reliably decode various encodings to the system default encoding

https://www.devze.com 2023-01-23 00:59 Source: web
I am trying to work with several documents that all have various encodings - some utf-8, some ISO-8859-2, some ascii etc. Is there a reliable way of decoding to a standard encoding for processing?

I have tried the following:

import sys
import chardet

encoding = chardet.detect(text)
text = unicode(text, encoding['encoding']).decode(sys.getdefaultencoding(), 'ignore')

With the above code I still get UnicodeEncodeError errors.


Use decode to convert bytes to unicode, and encode to convert unicode to bytes:

text.decode(encoding['encoding'], 'ignore').encode(sys.getdefaultencoding(), 'ignore')

Although I would recommend doing your processing on the unicode objects themselves, or UTF-8 encoded strings if you absolutely need to work with bytes. sys.getdefaultencoding() is 'ascii', which provides a very limited character set. See also: http://wiki.python.org/moin/DefaultEncoding
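For illustration, here is a minimal sketch of that normalization step in Python 3 syntax (where `str` is the unicode type, so `unicode()` is no longer needed). The encodings are assumed known here rather than detected, and the document list is invented for the example:

```python
# Each document is a (raw bytes, encoding) pair; in practice the
# encoding might come from chardet.detect() or document metadata.
docs = [
    (b"caf\xc3\xa9", "utf-8"),       # "café" encoded as UTF-8
    (b"\xe8esk\xfd", "iso-8859-2"),  # "český" encoded as ISO-8859-2
    (b"plain ascii", "ascii"),
]

normalized = []
for raw, enc in docs:
    text = raw.decode(enc, "ignore")         # bytes -> unicode string
    normalized.append(text.encode("utf-8"))  # unicode -> one standard byte form
```

After this loop every document is UTF-8, so downstream processing only has to deal with a single encoding instead of three.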


You probably mean encode:

u = unicode(text, encoding['encoding'], 'ignore')
text = u.encode(sys.getdefaultencoding(), 'ignore')

or equivalently and more commonly,

u = text.decode(encoding['encoding'], 'ignore')
text = u.encode(sys.getdefaultencoding(), 'ignore')

You may want ignore on both, as above: the incoming text may have invalid characters in it, causing it to fail to decode to Unicode, and it may have characters which can't be represented in the default encoding, causing it to fail to encode. (You may not actually want to ignore errors, though, since it looks like you were just trying to work around using the wrong function.)
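A small sketch of how the two 'ignore' handlers behave, again in Python 3 syntax (the input bytes here are made up for the example):

```python
# 'ignore' on decode drops bytes that are invalid in the source
# encoding; 'ignore' on encode drops characters that cannot be
# represented in the target encoding.
raw = b"caf\xc3\xa9 \xff"          # trailing 0xff is not valid UTF-8
u = raw.decode("utf-8", "ignore")  # the invalid 0xff byte is dropped
out = u.encode("ascii", "ignore")  # the non-ASCII "é" is dropped
```

Without 'ignore', the first line would raise UnicodeDecodeError and the second UnicodeEncodeError, which is why data silently disappears when you use it; 'replace' (which substitutes a marker character) can make the loss easier to spot.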

