Beautiful Soup - unable to scrape links from paginated pages

Date: 2017-07-26 07:24:55

Tags: python-2.7 loops web-scraping pagination beautifulsoup

I am unable to scrape the links to the articles that appear on the paginated web pages. In addition, I sometimes get a blank screen as output. I cannot find the problem in my loop. Also, the csv file is not being created.

from pprint import pprint
import requests
from bs4 import BeautifulSoup
import lxml
import csv
import urllib2

def get_url_for_search_key(search_key):
    for i in range(1,100):
        base_url = 'http://www.thedrum.com/'
        response = requests.get(base_url + 'search?page=%s&query=' + search_key +'&sorted=')%i
        soup = BeautifulSoup(response.content, "lxml")
        results = soup.findAll('a')
        return [url['href'] for url in soup.findAll('a')]
        pprint(get_url_for_search_key('artificial intelligence'))

with open('StoreUrl.csv', 'w+') as f:
    f.seek(0)
    f.write('\n'.join(get_url_for_search_key('artificial intelligence')))

1 answer:

Answer 0: (score: 1)

Are you sure you need only the first 100 pages? Maybe there are more...

My take on the task below; it collects the links from all pages by precisely catching the "Next page" button link:

import requests
from bs4 import BeautifulSoup


base_url = 'http://www.thedrum.com/search?sort=date&query=artificial%20intelligence'
response = requests.get(base_url)
soup = BeautifulSoup(response.content, "lxml")

res = []

while 1:
    results = soup.findAll('a')
    res.append([url['href'] for url in results])

    next_button = soup.find('a', text='Next page')
    if not next_button:
        break
    response = requests.get(next_button['href'])
    soup = BeautifulSoup(response.content, "lxml")
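One caveat I would add here (an assumption, since I have not inspected the site's markup): if the "Next page" button carries a relative href such as /search?page=2&..., requests.get needs an absolute URL, so joining the href onto the domain first is safer:

from urlparse import urljoin  # Python 2.7; use urllib.parse.urljoin on Python 3

# inside the while loop, instead of requests.get(next_button['href']):
next_url = urljoin(base_url, next_button['href'])  # resolves a relative href against the search URL
response = requests.get(next_url)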

EDIT: an alternative approach for collecting only the article links:

import requests
from bs4 import BeautifulSoup


base_url = 'http://www.thedrum.com/search?sort=date&query=artificial%20intelligence'
response = requests.get(base_url)
soup = BeautifulSoup(response.content, "lxml")

res = []

while 1:
    search_results = soup.find('div', class_='search-results') # narrow the search window to the div that holds the article links
    article_link_tags = search_results.findAll('a') # then proceed as before on this subtree
    res.append([url['href'] for url in article_link_tags])

    next_button = soup.find('a', text='Next page')
    if not next_button:
        break
    response = requests.get(next_button['href'])
    soup = BeautifulSoup(response.content, "lxml")
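Note that soup.find returns None when no div with class search-results exists on a page (for example if the site serves an error page), and calling findAll on None raises an AttributeError. A minimal guard, assuming you simply want to stop in that case, would be:

    search_results = soup.find('div', class_='search-results')
    if search_results is None:
        break  # no results container on this page, stop instead of crashing
    article_link_tags = search_results.findAll('a')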

To print the links, use:

for i in res:
    for j in i:
        print(j)
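Since the question also mentions that no csv file gets created: once res is filled, a minimal sketch for writing every collected link into the StoreUrl.csv file from the question (Python 2.7, hence the 'wb' mode) could look like this:

import csv

with open('StoreUrl.csv', 'wb') as f:
    writer = csv.writer(f)
    for page_links in res:       # res holds one list of hrefs per page
        for link in page_links:
            writer.writerow([link])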