Scrapy warning: URL exceeds Excel's limit

Asked: 2019-05-16 08:37:24

Tags: python scrapy xlsxwriter

After my spider finishes, I get the warning below. The program uses openpyxl to write the scraped data to an Excel file, but the warning mentions xlsxwriter (which I don't use), and it is a real problem because some data is not written and gets skipped. Here is the gist of the code:

import scrapy, csv, requests
import re, json
from openpyxl import Workbook
import numpy as np
import pandas as pd
from json.decoder import JSONDecodeError
from openpyxl.utils.dataframe import dataframe_to_rows

# spider code ...
df = pd.DataFrame(spider.list_of_items)
df.to_excel("{}.xlsx".format(file_name))

2019-05-16 10:50:07 [scrapy.core.engine] INFO: Spider closed (finished)
2019-05-16 10:50:15 [py.warnings] WARNING: C:\Users\test\AppData\Local\Programs\Python\Python37-32\lib\site-packages\xlsxwriter\worksheet.py:915:
UserWarning: Ignoring URL 'https://www.target.com/p/nfl-indianapolis-colts-northwest-draft-full-queen-comforter-set/-/A-53033602?ref=tgt_soc_0000059195_pd&afid=pin_ao&cpng=DR_PSA_Sports&fndsrc=bcm&campaignid=626738629371&adgroupid=2680061765888&product_partition_id=2954942580838&device=m&pp=1' 
with link or location/anchor > 255 characters since it exceeds Excel's limit for URLs
  force_unicode(url))

What I want is a workaround for this problem, or at least a way, when this warning occurs, to still write the rest of the row without the URL.

1 Answer:

Answer 0: (score: 1)

Your URL (266 characters): 'https://www.target.com/p/nfl-indianapolis-colts-northwest-draft-full-queen-comforter-set/-/A-53033602?ref=tgt_soc_0000059195_pd&afid=pin_ao&cpng=DR_PSA_Sports&fndsrc=bcm&campaignid=626738629371&adgroupid=2680061765888&product_partition_id=2954942580838&device=m&pp=1'

(As an aside, the warning names xlsxwriter even though you import openpyxl because pandas' `to_excel` uses the XlsxWriter engine by default when that package is installed.)

It consists of 2 parts: the base URL, and the query string (everything from the "?" onward).
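To see the two parts concretely, here is a quick standard-library illustration (not part of the original answer) using the URL from the warning:

```python
from urllib.parse import urlsplit

url = ("https://www.target.com/p/nfl-indianapolis-colts-northwest-"
       "draft-full-queen-comforter-set/-/A-53033602"
       "?ref=tgt_soc_0000059195_pd&afid=pin_ao&cpng=DR_PSA_Sports"
       "&fndsrc=bcm&campaignid=626738629371&adgroupid=2680061765888"
       "&product_partition_id=2954942580838&device=m&pp=1")

parts = urlsplit(url)
# Rebuild just the base URL: scheme, host, and path, without the query string.
base = "{}://{}{}".format(parts.scheme, parts.netloc, parts.path)

print(len(url))   # 266 -> over Excel's 255-character URL limit
print(len(base))  # 101 -> the base URL alone fits comfortably
```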

If the query-parameter data has no practical value to you, you can cut it off the original URL and stay under Excel's 255-character link limit:

....
# your spider code
for item in spider.list_of_items:
    # url = item[url_index]  # if each item is a list or tuple
    # url = item["url"]      # if each item is a dict
    if "?" in url:
        url = url.split("?", 1)[0]
    # write the shortened URL back onto the item, e.g.:
    # item[url_index] = url  # list case (tuples are immutable)
    # item["url"] = url      # dict case
df = pd.DataFrame(spider.list_of_items)
df.to_excel("{}.xlsx".format(file_name))
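The same trimming can also be done in one vectorized step after building the DataFrame. Below is a sketch with made-up items; the column name "url" is an assumption about your item structure:

```python
import pandas as pd

# Hypothetical scraped items; only the "url" key matters here.
items = [
    {"name": "comforter set", "url": "https://www.target.com/p/item-a/-/A-1?ref=x&afid=y"},
    {"name": "pillow",        "url": "https://www.target.com/p/item-b/-/A-2"},
]

df = pd.DataFrame(items)
# Keep everything before the first "?"; URLs without a query string pass through unchanged.
df["url"] = df["url"].str.split("?", n=1).str[0]

print(df["url"].tolist())
# ['https://www.target.com/p/item-a/-/A-1', 'https://www.target.com/p/item-b/-/A-2']
```

`df.to_excel(...)` can then run as before. Alternatively, if you would rather keep the full URLs, XlsxWriter has a `strings_to_urls` workbook option that writes them as plain text instead of hyperlinks, which avoids the 255-character limit entirely (how you pass it through `pd.ExcelWriter` varies by pandas version, so check your version's docs).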