Python script run from the command line is not creating a CSV

Asked: 2018-12-13 16:51:37

Tags: python csv export-to-csv

I'm new to Python and am currently scraping a website to collect inventory information. The inventory items are spread across six pages of the site. The scraping itself goes smoothly, and I'm able to parse out all of the HTML elements I want.

I'm now moving on to the next step and trying to export the data to a CSV file using csv.writer, which is included with Python 3. The script runs from my command line without raising any syntax errors, but the CSV file is never actually created. I'd like to know whether there is any obvious problem with my script, or anything I might be missing when trying to write the parsed HTML elements to a CSV.

Here is my code:

import requests
import csv
from bs4 import BeautifulSoup

main_used_page = 'https://www.necarconnection.com/used-vehicles/'
page = requests.get(main_used_page)
soup = BeautifulSoup(page.text,'html.parser')

def get_items(main_used_page,urls):
    main_site = 'https://www.necarconnection.com/'
    counter = 0
    for x in urls:
        site = requests.get(main_used_page + urls[counter])
        soup = BeautifulSoup(site.content,'html.parser')
        counter +=1
        for item in soup.find_all('li'):
            vehicle = item.find('div',class_='inventory-post')
            image = item.find('div',class_='vehicle-image')
            price = item.find('div',class_='price-top')
            vin = item.find_all('div',class_='vinstock')

            try:
                url = image.find('a')
                link = url.get('href')
                pic_link = url.img
                img_url = pic_link['src']
                if 'gif' in pic_link['src']:img_url = pic_link['data-src']

                landing = requests.get(main_site + link)
                souped = BeautifulSoup(landing.content,'html.parser')
                comment = ''




                for comments in souped.find_all('td',class_='results listview'):
                    com = comments.get_text()
                    comment += com



                with open('necc-december.csv','w',newline='') as csv_file:
                    fieldnames = ['CLASSIFICATION','TYPE','PRICE','VIN',
                          'INDEX','LINK','IMG','DESCRIPTION']
                    writer = csv.DictWriter(csv_file,fieldnames=fieldnames)
                    writer.writeheader()
                    writer.writerow({
                        'CLASSIFICATION':vehicle['data-make'],
                        'TYPE':vehicle['data-type'],
                        'PRICE':price,
                        'VIN':vin,
                        'INDEX':vehicle['data-location'],
                        'LINK':link,
                        'IMG':img_url,
                        'DESCRIPTION':comment})

            except TypeError: None
            except AttributeError: None
            except UnboundLocalError: None

urls = ['']
counter = 0
prev = 0

for x in range(100):

    site = requests.get(main_used_page + urls[counter])
    soup = BeautifulSoup(site.content,'html.parser')

    for button in soup.find_all('a',class_='pages'):
        if button['class'] == ['prev']:
            prev +=1

        if button['class'] == ['next']:
            next_url = button.get('href')

        if next_url not in urls:
            urls.append(next_url)
            counter +=1

        if prev - 1 > counter:break


get_items(main_used_page,urls)

Here is a screenshot of what happens after running the script through the command line:

[screenshot: command-line output]

The script takes a while to run, so I know it is reading and processing the pages. I'm just not sure where things go wrong between that and actually producing the CSV file.

I hope that gives enough context. Also, any tips or tricks on using Python 3's csv.writer would be appreciated; I've tried several different variations.
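On the csv.writer side, one thing worth double-checking in the loop above: the file is opened with 'w' inside the item loop, which truncates it on every iteration, so at best only the last row (under a repeated header) would survive. A minimal sketch of the usual pattern, opening the file once and writing all rows, with hypothetical rows standing in for the scraped vehicles:

```python
import csv

# Hypothetical rows standing in for the scraped vehicle data.
rows = [
    {'CLASSIFICATION': 'Buick', 'PRICE': '8000.00'},
    {'CLASSIFICATION': 'Honda', 'PRICE': '9500.00'},
]

# Open the file once, write the header once, then write every row.
# Re-opening with 'w' inside the loop would truncate the file each
# time, leaving only the last row plus a repeated header.
with open('necc-december.csv', 'w', newline='') as csv_file:
    writer = csv.DictWriter(csv_file, fieldnames=['CLASSIFICATION', 'PRICE'])
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
```
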

1 Answer:

Answer 0 (score: 1)

I found that your CSV-writing code works fine. Here it is in isolation:

import csv

vehicle = {'data-make': 'Buick',
           'data-type': 'Sedan',
           'data-location': 'Bronx',
           }
price = '8000.00'
vin = '11040VDOD330C0D0D003'
link = 'https://www.necarconnection.com/someplace'
img_url = 'https://www.necarconnection.com/image/someimage'
comment = 'Fine Car'

with open('necc-december.csv','w',newline='') as csv_file:
    fieldnames = ['CLASSIFICATION','TYPE','PRICE','VIN',
                  'INDEX','LINK','IMG','DESCRIPTION']
    writer = csv.DictWriter(csv_file,fieldnames=fieldnames)
    writer.writeheader()
    writer.writerow({
        'CLASSIFICATION':vehicle['data-make'],
        'TYPE':vehicle['data-type'],
        'PRICE':price,
        'VIN':vin,
        'INDEX':vehicle['data-location'],
        'LINK':link,
        'IMG':img_url,
        'DESCRIPTION':comment})

It creates necc-december.csv just fine:

CLASSIFICATION,TYPE,PRICE,VIN,INDEX,LINK,IMG,DESCRIPTION
Buick,Sedan,8000.00,11040VDOD330C0D0D003,Bronx,https://www.necarconnection.com/someplace,https://www.necarconnection.com/image/someimage,Fine Car

I think the problem is that your code never finds a button with class='next'.

To run your code at all, I had to initialize next_url first:

next_url = None

and then change your condition from

if next_url not in urls:

to

if next_url and next_url not in urls:
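Put together, the guarded loop behaves like this; a small sketch with a stub list of hrefs standing in for the buttons found on successive pages (None marks a page where no 'next' button was found):

```python
urls = ['']
next_url = None  # initialize so the guard below never hits an unbound name

# Hypothetical hrefs pulled from successive "next" buttons; None stands
# for a page where no such button exists.
found_hrefs = ['/used-vehicles/page/2/', None, '/used-vehicles/page/2/']

for href in found_hrefs:
    if href:
        next_url = href
    # Guard against next_url still being None, and against duplicates.
    if next_url and next_url not in urls:
        urls.append(next_url)
```

With the guard in place the loop neither crashes on the first page nor appends duplicate URLs.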

I added a debug print inside your for loop:

for button in soup.find_all('a',class_='pages'):
    print ('button:', button)

and got the following output:

button: <a class="pages current" data-page="1" href="javascript:void(0);">1</a>
button: <a class="pages" data-page="2" href="javascript:void(0);">2</a>
button: <a class="pages" data-page="3" href="javascript:void(0);">3</a>
button: <a class="pages" data-page="4" href="javascript:void(0);">4</a>
button: <a class="pages" data-page="5" href="javascript:void(0);">5</a>
button: <a class="pages" data-page="6" href="javascript:void(0);">6</a>
button: <a class="pages current" data-page="1" href="javascript:void(0);">1</a>
button: <a class="pages" data-page="2" href="javascript:void(0);">2</a>
button: <a class="pages" data-page="3" href="javascript:void(0);">3</a>
button: <a class="pages" data-page="4" href="javascript:void(0);">4</a>
button: <a class="pages" data-page="5" href="javascript:void(0);">5</a>
button: <a class="pages" data-page="6" href="javascript:void(0);">6</a>

So there is no button with class='next'.
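This can be confirmed from the markup itself; a standard-library-only sketch (no bs4 required, using a shortened copy of the pager markup from the output above) that walks the anchors and checks their classes:

```python
from html.parser import HTMLParser

# A shortened copy of the pager markup from the debug output above.
html = '''
<a class="pages current" data-page="1" href="javascript:void(0);">1</a>
<a class="pages" data-page="2" href="javascript:void(0);">2</a>
<a class="pages" data-page="3" href="javascript:void(0);">3</a>
'''

class PagerParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.classes = []  # one class list per <a> tag
        self.pages = []    # data-page values, in order

    def handle_starttag(self, tag, attrs):
        if tag != 'a':
            return
        attrs = dict(attrs)
        self.classes.append(attrs.get('class', '').split())
        self.pages.append(attrs.get('data-page'))

parser = PagerParser()
parser.feed(html)

# No anchor carries a 'next' class; paging is driven by data-page
# attributes (and presumably JavaScript), not next/prev links.
has_next = any('next' in c for c in parser.classes)
```
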