I have a text file whose publisher (the US Securities and Exchange Commission) asserts is encoded in UTF-8 (https://www.sec.gov/files/aqfs.pdf, section 4). I am processing the lines with the following code:
def tags(filename):
    """Yield Tag instances from tag.txt."""
    with codecs.open(filename, 'r', encoding='utf-8', errors='strict') as f:
        fields = f.readline().strip().split('\t')
        for line in f.readlines():
            yield process_tag_record(fields, line)
I get the following error:
Traceback (most recent call last):
  File "/home/randm/Projects/finance/secxbrl.py", line 151, in <module>
    main()
  File "/home/randm/Projects/finance/secxbrl.py", line 143, in main
    all_tags = list(tags("tag.txt"))
  File "/home/randm/Projects/finance/secxbrl.py", line 109, in tags
    content = f.read()
  File "/home/randm/Libraries/anaconda3/lib/python3.6/codecs.py", line 698, in read
    return self.reader.read(size)
  File "/home/randm/Libraries/anaconda3/lib/python3.6/codecs.py", line 501, in read
    newchars, decodedbytes = self.decode(data, self.errors)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xad in position 3583587: invalid start byte
Given that I most likely cannot go back to the SEC and tell them that their files do not appear to be encoded in UTF-8, how should I debug and catch this error?
What I have tried

I did a hexdump of the file and found that the offending text reads "SUPPLEMENTAL DISCLOSURE OF NON�CASH INVESTING". If I interpret the offending byte as a Unicode code point (i.e. U+00AD), it makes sense in context, since that is a soft hyphen. But the following does not seem to work:
Python 3.5.2 (default, Nov 17 2016, 17:05:23)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> b"\x41".decode("utf-8")
'A'
>>> b"\xad".decode("utf-8")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xad in position 0: invalid start byte
>>> b"\xc2ad".decode("utf-8")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc2 in position 0: invalid continuation byte
I used errors='replace' and that seems to get past it. But I would like to understand what will happen if I try to insert this data into a database.
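For reference, what errors='replace' actually does is substitute U+FFFD REPLACEMENT CHARACTER for each undecodable byte, so that is what would reach the database. A minimal check, using a byte string modeled on the text in question:

```python
# errors='replace' swaps each undecodable byte for U+FFFD, so the soft
# hyphen position becomes a replacement character rather than an error.
raw = b"DISCLOSURE OF NON\xadCASH"
fixed = raw.decode("utf-8", errors="replace")
assert fixed == "DISCLOSURE OF NON\ufffdCASH"
```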
Edit: adding the hexdump:
0036ae40 31 09 09 09 09 53 55 50 50 4c 45 4d 45 4e 54 41 |1....SUPPLEMENTA|
0036ae50 4c 20 44 49 53 43 4c 4f 53 55 52 45 20 4f 46 20 |L DISCLOSURE OF |
0036ae60 4e 4f 4e ad 43 41 53 48 20 49 4e 56 45 53 54 49 |NON.CASH INVESTI|
0036ae70 4e 47 20 41 4e 44 20 46 49 4e 41 4e 43 49 4e 47 |NG AND FINANCING|
0036ae80 20 41 43 54 49 56 49 54 49 45 53 3a 09 0a 50 72 | ACTIVITIES:..Pr|
Answer (score: 6)
Your data file is corrupted. If that character really is meant to be a U+00AD SOFT HYPHEN, then you are missing a 0xC2 byte:
>>> '\u00ad'.encode('utf8')
b'\xc2\xad'
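Incidentally, the earlier REPL attempt failed because of an escape-sequence slip, not because the idea was wrong: `b"\xc2ad"` is the three bytes C2 61 64 (`\xc2` followed by the literal letters "a" and "d"). The correct two-byte sequence decodes fine:

```python
# b"\xc2ad" is C2 followed by the ASCII letters 'a' and 'd', which is why
# it raised "invalid continuation byte". The real pair decodes correctly.
assert b"\xc2ad" == bytes([0xC2, 0x61, 0x64])
assert b"\xc2\xad".decode("utf-8") == "\u00ad"  # soft hyphen
```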
Of all the possible UTF-8 encodings that end in 0xAD, a soft hyphen does make the most sense. However, it points to a data set that may be missing other bytes as well; you just happened to have hit one that matters.
I would go back to the source of this data set and verify that the file was not corrupted when it was downloaded. Otherwise, using errors='replace' is a viable workaround, provided no delimiters (tabs, newlines, etc.) are missing.
Another possibility is that the SEC is really using a different encoding for the file; for example, in Windows code page 1252 and in Latin-1, 0xAD is the correct encoding of a soft hyphen. And indeed, when I download the same dataset directly (warning, large ZIP file linked) and open tags.txt, I cannot decode the data as UTF-8:
>>> open('/tmp/2017q1/tag.txt', encoding='utf8').read()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/.../lib/python3.6/codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xad in position 3583587: invalid start byte
>>> from pprint import pprint
>>> f = open('/tmp/2017q1/tag.txt', 'rb')
>>> f.seek(3583550)
3583550
>>> pprint(f.read(100))
(b'1\t1\t\t\t\tSUPPLEMENTAL DISCLOSURE OF NON\xadCASH INVESTING AND FINANCING A'
b'CTIVITIES:\t\nProceedsFromSaleOfIn')
There are two non-ASCII characters in the file:
>>> f.seek(0)
0
>>> pprint([l for l in f if any(b > 127 for b in l)])
[b'SupplementalDisclosureOfNoncashInvestingAndFinancingActivitiesAbstract\t0'
b'001654954-17-000551\t1\t1\t\t\t\tSUPPLEMENTAL DISCLOSURE OF NON\xadCASH I'
b'NVESTING AND FINANCING ACTIVITIES:\t\n',
b'HotelKranichhheMember\t0001558370-17-001446\t1\t0\tmember\tD\t\tHotel Krani'
b'chhhe [Member]\tRepresents information pertaining to Hotel Kranichh\xf6h'
b'e.\n']
Decoded as Latin-1, Hotel Kranichh\xf6he becomes Hotel Kranichhöhe.
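That round trip is easy to verify: Latin-1 maps every byte directly to the Unicode code point with the same value, so 0xF6 decodes to U+00F6:

```python
# Latin-1 is a 1:1 byte-to-code-point mapping, so 0xF6 -> U+00F6 (ö).
assert b"Hotel Kranichh\xf6he".decode("latin-1") == "Hotel Kranichh\u00f6he"
```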
The file also contains several 0x1C / 0x1D pairs:
>>> f.seek(0)
0
>>> quotes = [l for l in f if any(b in {0x1C, 0x1D} for b in l)]
>>> quotes[0].split(b'\t')[-1][50:130]
b'Temporary Payroll Tax Cut Continuation Act of 2011 (\x1cTCCA\x1d) recognized during th'
>>> quotes[1].split(b'\t')[-1][50:130]
b'ributory defined benefit pension plan (the \x1cAetna Pension Plan\x1d) to allow certai'
I believe those are really meant to be U+201C LEFT DOUBLE QUOTATION MARK and U+201D RIGHT DOUBLE QUOTATION MARK characters; note the 1C and 1D parts. It almost feels as if their encoder took UTF-16 and stripped out all the high bytes, rather than encoding to UTF-8 properly!
There is no codec shipping with Python that would encode '\u201C\u201D' to b'\x1C\x1D', making it all the more likely that the SEC botched an encoding step somewhere. In fact, there are also 0x13 and 0x14 characters that are probably en and em dashes (U+2013 and U+2014), as well as 0x19 bytes that are almost certainly single quotes (U+2019). All that is missing to complete the picture is a 0x18 byte to represent U+2018.
If we assume that the encoding is broken, we can attempt a repair. The following code reads the file and fixes the quote issues, assuming that the rest of the data does not use characters outside of Latin-1 apart from the quotes:
_map = {
    # dashes
    0x13: '\u2013', 0x14: '\u2014',
    # single quotes
    0x18: '\u2018', 0x19: '\u2019',
    # double quotes
    0x1c: '\u201c', 0x1d: '\u201d',
}

def repair(line, _map=_map):
    """Repair mis-encoded SEC data. Assumes line was decoded as Latin-1"""
    return line.translate(_map)
and apply that to the lines you read:
with open(filename, 'r', encoding='latin-1') as f:
    repaired = map(repair, f)
    fields = next(repaired).strip().split('\t')
    for line in repaired:
        yield process_tag_record(fields, line)
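A quick sanity check of the str.translate approach on one of the byte strings shown earlier, decoded as Latin-1 first (the relevant part of the mapping is repeated here so the snippet is self-contained):

```python
# str.translate maps code points by ordinal: 0x1C -> U+201C, 0x1D -> U+201D.
_map = {0x1c: '\u201c', 0x1d: '\u201d'}
sample = b'(\x1cTCCA\x1d)'.decode('latin-1')
assert sample.translate(_map) == '(\u201cTCCA\u201d)'  # ("TCCA") with curly quotes
```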
Separately, addressing the code you posted: you are making Python work harder than it needs to. Don't use codecs.open(); that legacy module has known issues and is slower than the newer Python 3 I/O layer. Just use open(). And don't use f.readlines(); you don't need to read the whole file into a list here. Iterate over the file directly:
def tags(filename):
    """Yield Tag instances from tag.txt."""
    with open(filename, 'r', encoding='utf-8', errors='strict') as f:
        fields = next(f).strip().split('\t')
        for line in f:
            yield process_tag_record(fields, line)
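The next(f) / for-loop pattern can be demonstrated with an in-memory file; the sample data below is invented for illustration:

```python
import io

# next(f) consumes the header line; iteration then continues from the
# first data row, without materializing the whole file in memory.
f = io.StringIO("tag\tversion\nAccountsPayable\t1\n")
fields = next(f).strip().split('\t')
assert fields == ['tag', 'version']
assert [line.strip().split('\t') for line in f] == [['AccountsPayable', '1']]
```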
If process_tag_record splits on tabs as well, use a csv.reader() object and avoid splitting each row manually:
import csv

def tags(filename):
    """Yield Tag instances from tag.txt."""
    with open(filename, 'r', encoding='utf-8', errors='strict') as f:
        reader = csv.reader(f, delimiter='\t')
        fields = next(reader)
        for row in reader:
            yield process_tag_record(fields, row)
If process_tag_record combines the fields list with the values in each row to form a dictionary, just use csv.DictReader() instead:
def tags(filename):
    """Yield Tag instances from tag.txt."""
    with open(filename, 'r', encoding='utf-8', errors='strict') as f:
        reader = csv.DictReader(f, delimiter='\t')
        # first row is used as keys for the dictionary, no need to read fields manually.
        yield from reader
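To make the DictReader behavior concrete, here is what it yields for a small tab-separated sample (the column names and values are invented for illustration):

```python
import csv
import io

# The header row supplies the dictionary keys; each subsequent row
# becomes one dict.
sample = io.StringIO("tag\tversion\nAccountsPayable\tus-gaap/2016\n")
rows = list(csv.DictReader(sample, delimiter='\t'))
assert rows[0]['tag'] == 'AccountsPayable'
assert rows[0]['version'] == 'us-gaap/2016'
```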