Python script run from the command line is not creating a CSV


I'm new to Python and am currently scraping a website to collect inventory information. The inventory items are spread across 6 pages of the site. The scraping itself goes smoothly, and I'm able to parse out all of the HTML elements I want.

I'm now moving on to the next step and trying to export the data to a CSV file using the csv.writer included with Python 3. The script runs from my command line without any syntax errors, but no CSV file is created. I'd like to know whether there is any obvious problem with my script, or something I may have missed when trying to write the parsed HTML elements to a CSV.

Here is my code:

import requests
import csv
from bs4 import BeautifulSoup

main_used_page = 'https://www.necarconnection.com/used-vehicles/'
page = requests.get(main_used_page)
soup = BeautifulSoup(page.text,'html.parser')

def get_items(main_used_page,urls):
    main_site = 'https://www.necarconnection.com/'
    counter = 0
    for x in urls:
        site = requests.get(main_used_page + urls[counter])
        soup = BeautifulSoup(site.content,'html.parser')
        counter +=1
        for item in soup.find_all('li'):
            vehicle = item.find('div',class_='inventory-post')
            image = item.find('div',class_='vehicle-image')
            price = item.find('div',class_='price-top')
            vin = item.find_all('div',class_='vinstock')

            try:
                url = image.find('a')
                link = url.get('href')
                pic_link = url.img
                img_url = pic_link['src']
                if 'gif' in pic_link['src']:img_url = pic_link['data-src']

                landing = requests.get(main_site + link)
                souped = BeautifulSoup(landing.content,'html.parser')
                comment = ''

                for comments in souped.find_all('td',class_='results listview'):
                    com = comments.get_text()
                    comment += com

                with open('necc-december.csv','w',newline='') as csv_file:
                    fieldnames = ['CLASSIFICATION','TYPE','PRICE','VIN',
                          'INDEX','LINK','IMG','DESCRIPTION']
                    writer = csv.DictWriter(csv_file,fieldnames=fieldnames)
                    writer.writeheader()
                    writer.writerow({
                        'CLASSIFICATION':vehicle['data-make'],
                        'TYPE':vehicle['data-type'],
                        'PRICE':price,
                        'VIN':vin,
                        'INDEX':vehicle['data-location'],
                        'LINK':link,
                        'IMG':img_url,
                        'DESCRIPTION':comment})

            except TypeError: None
            except AttributeError: None
            except UnboundLocalError: None

urls = ['']
counter = 0
prev = 0

for x in range(100):

    site = requests.get(main_used_page + urls[counter])
    soup = BeautifulSoup(site.content,'html.parser')

    for button in soup.find_all('a',class_='pages'):
        if button['class'] == ['prev']:
            prev +=1

        if button['class'] == ['next']:
            next_url = button.get('href')

        if next_url not in urls:
            urls.append(next_url)
            counter +=1

        if prev - 1 > counter:break


get_items(main_used_page,urls)

Here is a screenshot of what happens after running the script from the command line:

[screenshot: command-line output]

The script takes a while to run, so I know it is reading and processing something. I'm just not sure where things go wrong between that and actually producing the CSV file.

I hope that is enough to go on. Also, any tips or tricks for working with Python 3's csv.writer would be appreciated, as I've tried several different variations.


I found that your CSV-writing code works fine. Here it is in isolation:

import csv

vehicle = {'data-make': 'Buick',
           'data-type': 'Sedan',
           'data-location': 'Bronx',
           }
price = '8000.00'
vin = '11040VDOD330C0D0D003'
link = 'https://www.necarconnection.com/someplace'
img_url = 'https://www.necarconnection.com/image/someimage'
comment = 'Fine Car'

with open('necc-december.csv','w',newline='') as csv_file:
    fieldnames = ['CLASSIFICATION','TYPE','PRICE','VIN',
                  'INDEX','LINK','IMG','DESCRIPTION']
    writer = csv.DictWriter(csv_file,fieldnames=fieldnames)
    writer.writeheader()
    writer.writerow({
        'CLASSIFICATION':vehicle['data-make'],
        'TYPE':vehicle['data-type'],
        'PRICE':price,
        'VIN':vin,
        'INDEX':vehicle['data-location'],
        'LINK':link,
        'IMG':img_url,
        'DESCRIPTION':comment})

It creates necc-december.csv just fine:

CLASSIFICATION,TYPE,PRICE,VIN,INDEX,LINK,IMG,DESCRIPTION
Buick,Sedan,8000.00,11040VDOD330C0D0D003,Bronx,https://www.necarconnection.com/someplace,https://www.necarconnection.com/image/someimage,Fine Car
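One thing worth noting while we're on the CSV code: your loop reopens necc-december.csv with mode 'w' (and rewrites the header) for every vehicle, so each write truncates everything written before it. The usual shape is to open the file once, write the header once, and then call writerow per item. A minimal sketch, with made-up rows standing in for the scraped vehicles:

```python
import csv

# Hypothetical parsed rows standing in for the scraped vehicles.
rows = [
    {'CLASSIFICATION': 'Buick', 'TYPE': 'Sedan', 'PRICE': '8000.00'},
    {'CLASSIFICATION': 'Ford', 'TYPE': 'Truck', 'PRICE': '12500.00'},
]

fieldnames = ['CLASSIFICATION', 'TYPE', 'PRICE']

# Open once, write the header once, then one writerow per vehicle.
with open('necc-december.csv', 'w', newline='') as csv_file:
    writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
```

That way the file accumulates one line per vehicle instead of ending up with only the last one.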

I believe the actual problem is that the code never finds any button with class='next'.

To run your code at all, I had to initialize next_url:

next_url = None

and change

if next_url not in urls:

to

if next_url and next_url not in urls:
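Put together, the pagination loop with those two changes behaves like this. A self-contained sketch, using a couple of anchors copied from the debug output further down to stand in for the live page:

```python
from bs4 import BeautifulSoup

# Stand-in for one scraped page: pagination anchors as this site renders them
# (no anchor carries class="next", and every href is javascript:void(0);).
html = '''
<a class="pages current" data-page="1" href="javascript:void(0);">1</a>
<a class="pages" data-page="2" href="javascript:void(0);">2</a>
'''
soup = BeautifulSoup(html, 'html.parser')

urls = ['']
next_url = None  # initialize so the membership test below cannot raise NameError

for button in soup.find_all('a', class_='pages'):
    # bs4 returns the class attribute as a list, e.g. ['pages', 'current']
    if button['class'] == ['next']:
        next_url = button.get('href')
    if next_url and next_url not in urls:  # guard: skip while nothing was found
        urls.append(next_url)

# next_url stays None and urls never grows: no anchor with class="next" exists.
```

With the guard in place the script no longer crashes, but it also never discovers a second page, which is why get_items only ever sees the first page.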

I added some debug output inside the for loop:

for button in soup.find_all('a',class_='pages'):
    print ('button:', button)

and got the following output:

button: <a class="pages current" data-page="1" href="javascript:void(0);">1</a>
button: <a class="pages" data-page="2" href="javascript:void(0);">2</a>
button: <a class="pages" data-page="3" href="javascript:void(0);">3</a>
button: <a class="pages" data-page="4" href="javascript:void(0);">4</a>
button: <a class="pages" data-page="5" href="javascript:void(0);">5</a>
button: <a class="pages" data-page="6" href="javascript:void(0);">6</a>
button: <a class="pages current" data-page="1" href="javascript:void(0);">1</a>
button: <a class="pages" data-page="2" href="javascript:void(0);">2</a>
button: <a class="pages" data-page="3" href="javascript:void(0);">3</a>
button: <a class="pages" data-page="4" href="javascript:void(0);">4</a>
button: <a class="pages" data-page="5" href="javascript:void(0);">5</a>
button: <a class="pages" data-page="6" href="javascript:void(0);">6</a>

So there are no buttons with class='next'. Note also that every href is javascript:void(0); — the pagination is driven by JavaScript, so following the hrefs would not reach the other pages anyway.
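Since the hrefs are dead, the page list has to come from somewhere else. One hedged idea (I have not checked how this particular site actually serves its pages): the data-page attributes on those anchors at least tell you how many pages exist, and from there you could work out the real per-page request (e.g. a query parameter or an AJAX endpoint) by watching the browser's network tab. Reading the page numbers off the paginator looks like this:

```python
from bs4 import BeautifulSoup

# The anchors from the debug output above; data-page is the only useful signal.
html = '''
<a class="pages current" data-page="1" href="javascript:void(0);">1</a>
<a class="pages" data-page="2" href="javascript:void(0);">2</a>
<a class="pages" data-page="3" href="javascript:void(0);">3</a>
'''
soup = BeautifulSoup(html, 'html.parser')

# Collect the distinct page numbers advertised by the paginator.
pages = sorted({int(a['data-page']) for a in soup.find_all('a', class_='pages')})
print(pages)  # prints [1, 2, 3]
```

How to turn a page number into a fetchable URL depends on the site, so that part has to come from inspecting the actual requests it makes.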
