Using Beautiful Soup to scrape multiple URLs

Asked: 2016-11-16 10:15:48

Tags: python beautifulsoup

I'm trying to extract a specific class from multiple URLs. The tag and class stay the same across pages, but I need my Python program to scrape all of them as I simply feed it the links.

Here is my working sample:

from bs4 import BeautifulSoup
import requests
import pprint
import re
import pyperclip

url = input('insert URL here: ')
#scrape elements
response = requests.get(url)
soup = BeautifulSoup(response.content, "html.parser")

#print titles only
h1 = soup.find("h1", class_= "class-headline")
print(h1.get_text())

This works for a single URL, but not for a batch. Thanks for your help; I've learned a lot from this community.

2 Answers:

Answer 0 (score: 5)

Keep a list of URLs and iterate over it.

from bs4 import BeautifulSoup
import requests
import pprint
import re
import pyperclip

urls = ['www.website1.com', 'www.website2.com', 'www.website3.com', .....]
#scrape elements
for url in urls:
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html.parser")

    #print titles only
    h1 = soup.find("h1", class_= "class-headline")
    print(h1.get_text())
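One caveat worth noting: soup.find() returns None when a page has no matching tag, so the loop above would raise an AttributeError on such a page. A minimal defensive variant (placeholder URLs, same class name as in the question) might look like this:

from bs4 import BeautifulSoup
import requests

# placeholder URLs; requests needs the scheme (http:// or https://) included
urls = ['https://example.com/page-1', 'https://example.com/page-2']

for url in urls:
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html.parser")
    h1 = soup.find("h1", class_="class-headline")
    if h1 is None:
        # skip pages that do not contain the headline tag
        print("no headline found on", url)
        continue
    print(h1.get_text())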

If you want to prompt the user for each site, you can do it like this:

from bs4 import BeautifulSoup
import requests
import pprint
import re
import pyperclip

#scrape elements
msg = 'Enter Url, to exit type q and hit enter.'
url = input(msg)
while url != 'q':
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html.parser")

    #print titles only
    h1 = soup.find("h1", class_= "class-headline")
    print(h1.get_text())
    url = input(msg)  # re-prompt for the next URL; without this assignment the loop would repeat the same URL forever
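If typing every URL by hand gets tedious, the prompt loop can also be replaced by reading the links from a plain text file, one per line. A small sketch of that idea (the filename urls.txt is just an assumption):

from bs4 import BeautifulSoup
import requests

# read one URL per line from a text file; urls.txt is a hypothetical filename
with open('urls.txt') as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    soup = BeautifulSoup(requests.get(url).content, "html.parser")
    h1 = soup.find("h1", class_="class-headline")
    print(h1.get_text() if h1 else "no headline found on " + url)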

Answer 1 (score: 2)

If you want to scrape the links in batches, specify a batch size and iterate over the chunks.

from bs4 import BeautifulSoup
import requests
import pprint
import re
import pyperclip

batch_size = 5
urllist = ["url1", "url2", "url3", .....]
# split the URL list into consecutive chunks of batch_size
url_chunks = [urllist[x:x+batch_size] for x in range(0, len(urllist), batch_size)]

def scrape_url(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html.parser")
    h1 = soup.find("h1", class_= "class-headline")
    return (h1.get_text())

def scrape_batch(url_chunk):
    chunk_resp = []
    for url in url_chunk:
        chunk_resp.append(scrape_url(url))
    return chunk_resp

for url_chunk in url_chunks:
    print(scrape_batch(url_chunk))
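Not part of the answer above, but since the URLs are already chunked, each batch could also be fetched concurrently with a small thread pool from the standard library. A sketch reusing the scrape_url() helper, batch_size, and url_chunks defined above:

from concurrent.futures import ThreadPoolExecutor

def scrape_batch_concurrent(url_chunk):
    # fetch every URL in the chunk on its own worker thread,
    # reusing the scrape_url() helper from the answer above
    with ThreadPoolExecutor(max_workers=batch_size) as pool:
        return list(pool.map(scrape_url, url_chunk))

for url_chunk in url_chunks:
    print(scrape_batch_concurrent(url_chunk))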