Web scraping with Beautiful Soup (sports data)

Date: 2019-04-16 11:06:21

Tags: python web-scraping beautifulsoup

I get two errors when I try to run this code. 1: The first is that I cannot scrape the name_text data correctly.

2: I get an indentation error on team = name_text.div.text. I know this is probably easy to fix, but I have tried different indentation and nothing seems to work.

I want to scrape the team names and the odds from the site.

<div class="size14_f7opyze Endeavour_fhudrb0 medium_f1wf24vo participantText_fivg86r" data-automation-id="participant-one">Orlando Magic</div>
<div class="priceText_f71sibe"><span class="size14_f7opyze medium_f1wf24vo priceTextSize_frw9zm9" data-automation-id="price-text">5.85</span></div>

The HTML above was copied from the website.

from bs4 import BeautifulSoup
from urllib.request import urlopen as uReq
my_url = 'https://www.sportsbet.com.au/betting/basketball-us'
uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()

soup = BeautifulSoup(page_html, "html.parser")

price_text = soup.findAll("div",{"class":"priceText_f71sibe"})
name_text = soup.findAll("div",{"class":"size14_f7opyze Endeavour_fhudrb0 medium_f1wf24vo participantText_fivg86r"})
filename = "odds.csv"
f = open(filename,"w")
headers = "Team, odds_team\n"
print(name_text)
f.write(headers)

for price_text in price_texts:
team = name_text.div.text
odds = price_text.span.text

print(odds)
print(team + odds)
f.write(team + "," + odds + "\n")
f.close()

Any help would be great. Cheers.

3 Answers:

Answer 0 (score: 1)

Your for loop is not indented correctly. The correct indentation is:

for price_text in price_texts:
    team = name_text.div.text
    odds = price_text.span.text

    print(odds)
    print(team + odds)
    f.write(team + "," + odds + "\n")
f.close()

There are 4 spaces before team and odds. Please read the Python for loop documentation.

There is also no price_texts variable. You forgot the 's' when you called findAll; you need to assign it:

price_texts = soup.findAll("div",{"class":"priceText_f71sibe"})

Finally, consider using with instead of open() and .close() to write the file.
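
A minimal sketch of the same write-out using with (it pairs the two findAll result lists with zip, which assumes the teams and prices line up one-to-one on the page; name_div and price_div are just loop variable names chosen here):

with open("odds.csv", "w") as f:
    f.write("Team, odds_team\n")
    # zip pairs each team <div> with its price <div>; the file is closed
    # automatically when the block ends, even if an error is raised
    for name_div, price_div in zip(name_text, price_texts):
        f.write(name_div.text + "," + price_div.span.text + "\n")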

Answer 1 (score: 0)

I'm thinking you could just iterate over them, store the values in lists, and then write them to a file. Unfortunately I cannot access the site from work, so I cannot test the code, but I believe this should give the output you need:

from bs4 import BeautifulSoup
from urllib.request import urlopen as uReq
import csv
from itertools import zip_longest

my_url = 'https://www.sportsbet.com.au/betting/basketball-us'
uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()

soup = BeautifulSoup(page_html, "html.parser")

# Select by the data-automation-id attributes instead of the long class names
price_text = soup.findAll("span",{"data-automation-id":"price-text"})
name_text = soup.findAll("div",{"data-automation-id":"participant-one"})

team_list = [ name.text.strip() for name in name_text ]
odds_list = [ price.text.strip() for price in price_text ]

# zip_longest pads the shorter list with '' so no row is dropped
d = [team_list, odds_list]
export_data = zip_longest(*d, fillvalue = '')
with open('odds.csv', 'w', encoding="ISO-8859-1", newline='') as myfile:
    wr = csv.writer(myfile)
    wr.writerow(("Team", "odds_team"))
    wr.writerows(export_data)
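
For reference, zip_longest pairs the two lists positionally and pads the shorter one with the fill value, so no team is dropped if an odds value is missing. A small illustration with made-up values:

from itertools import zip_longest

teams = ["Orlando Magic", "Team B", "Team C"]   # hypothetical example data
odds = ["5.85", "1.20"]                         # one odds value missing

# The third row gets '' for the missing odds instead of being dropped
rows = list(zip_longest(teams, odds, fillvalue=''))
# [('Orlando Magic', '5.85'), ('Team B', '1.20'), ('Team C', '')]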

Answer 2 (score: 0)

Can you try this?

from bs4 import BeautifulSoup
from urllib.request import urlopen as uReq
my_url = 'https://www.sportsbet.com.au/betting/basketball-us'
uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()

soup = BeautifulSoup(page_html, "html.parser")

price_texts = soup.findAll("div",{"class":"priceText_f71sibe"})
name_texts = soup.findAll("div",{"class":"size14_f7opyze Endeavour_fhudrb0 medium_f1wf24vo participantText_fivg86r"})
filename = "odds.csv"
f = open(filename,"w")
headers = "Team, odds_team\n"
print(name_texts)
f.write(headers)

# Pair each team with its odds and write one row per match-up
for name_text, price_text in zip(name_texts, price_texts):
    team = name_text.text
    odds = price_text.text
    print(team + "," + odds)
    f.write(team + "," + odds + "\n")
f.close()