Extracting the count of particular links from a webpage.

Asked: 2018-03-06 08:47:07

Tags: python web-scraping beautifulsoup

I am writing a Python script using BeautifulSoup. I need to crawl a website and count the unique links, ignoring links that start with "#".

For example, if the following links exist on the webpage:

https://www.stackoverflow.com/questions

https://www.stackoverflow.com/foo

https://www.cnn.com/

For this example, there would only be two unique links (the link information after the main domain is removed):

https://stackoverflow.com/    Count 2
https://cnn.com/              Count 1

Note: This is my first time using Python and a web scraping tool.

Thanks in advance for any help.

This is what I have tried so far:

from bs4 import BeautifulSoup
import requests


url = 'https://en.wikipedia.org/wiki/Beautiful_Soup_(HTML_parser)'

r = requests.get(url)

soup = BeautifulSoup(r.text, "html.parser")


count = 0

for link in soup.find_all('a'):
    print(link.get('href'))
    count += 1
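
For reference, a minimal sketch of how the snippet above could be extended to skip "#" links and count links per domain with urllib.parse (the Wikipedia URL is just the example from the attempt above):

from collections import Counter
from urllib.parse import urlparse
from bs4 import BeautifulSoup
import requests

url = 'https://en.wikipedia.org/wiki/Beautiful_Soup_(HTML_parser)'
soup = BeautifulSoup(requests.get(url).text, "html.parser")

# keep only hrefs that exist and do not start with '#'
hrefs = [link.get('href') for link in soup.find_all('a')]
hrefs = [h for h in hrefs if h and not h.startswith('#')]

# count occurrences per domain (netloc); relative links have an empty netloc and are skipped
domain_counts = Counter(urlparse(h).netloc for h in hrefs if urlparse(h).netloc)
for domain, count in domain_counts.items():
    print(domain, count)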

3 Answers:

Answer 0 (score: 2)

There is a function called urlparse in urllib.parse that you can use to get the netloc of a URL. There is also an awesome new HTTP library called requests_html that can help you get all the links in the source.

from requests_html import HTMLSession
from collections import Counter
from urllib.parse import urlparse

session = HTMLSession()
r = session.get("the link you want to crawl")
# count how many of the page's absolute links share each netloc (domain)
unique_netlocs = Counter(urlparse(link).netloc for link in r.html.absolute_links)
for link in unique_netlocs:
    print(link, unique_netlocs[link])

Answer 1 (score: 0)

You could also do it like this:

from bs4 import BeautifulSoup
from collections import Counter
import requests

soup = BeautifulSoup(requests.get("https://en.wikipedia.org/wiki/Beautiful_Soup_(HTML_parser)").text, "html.parser")

foundUrls = Counter([link["href"] for link in soup.find_all("a", href=lambda href: href and not href.startswith("#"))])
foundUrls = foundUrls.most_common()

for item in foundUrls:
    print("%s: %d" % (item[0], item[1]))

The soup.find_all line checks whether each a tag has an href set and whether it does not start with a # character. Counter counts the occurrences of each list entry, and most_common orders them by count.

The for loop just prints the results.
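
Note that this counts each distinct href. If you want counts per domain as in the question, one possible variant is to reduce each href to its netloc with urlparse before counting:

from bs4 import BeautifulSoup
from collections import Counter
from urllib.parse import urlparse
import requests

soup = BeautifulSoup(requests.get("https://en.wikipedia.org/wiki/Beautiful_Soup_(HTML_parser)").text, "html.parser")

# same filter as above, but keep only the domain part of each link
foundHrefs = [link["href"] for link in soup.find_all("a", href=lambda href: href and not href.startswith("#"))]
foundDomains = Counter(urlparse(href).netloc for href in foundHrefs if urlparse(href).netloc)

for domain, count in foundDomains.most_common():
    print("%s: %d" % (domain, count))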

Answer 2 (score: 0)

My approach would be to find all the links with Beautiful Soup and then determine which link redirects to which location:

import requests
import tldextract
from bs4 import BeautifulSoup

def get_count_url(url):  # get the number of links having the same domain and suffix
    r = requests.get(url)
    soup = BeautifulSoup(r.text, "html.parser")
    count = 0
    urls = {}  # dictionary for the domains
    # input_domain = url.split('//')[1].split('/')[0]
    # library to extract the exact domain (e.g. blog.bbc.com and bbc.com have the same domain)
    input_domain = tldextract.extract(url).domain + "." + tldextract.extract(url).suffix
    for link in soup.find_all('a'):
        word = link.get('href')
        # print(word)
        if word:
            # same website or domain calls
            if "#" in word or word[0] == "/":  # div call or same domain call
                if input_domain not in urls:
                    # print(input_domain)
                    urls[input_domain] = 1  # first encounter with the domain
                else:
                    urls[input_domain] += 1  # multiple encounters
            elif "javascript" in word:
                # javascript function calls (for domains that use modern JS frameworks to display information)
                if "JavascriptRenderingFunctionCall" not in urls:
                    urls["JavascriptRenderingFunctionCall"] = 1
                else:
                    urls["JavascriptRenderingFunctionCall"] += 1
            else:
                # main_domain = word.split('//')[1].split('/')[0]
                main_domain = tldextract.extract(word).domain + "." + tldextract.extract(word).suffix
                # print(main_domain)
                if main_domain.split('.')[0] == 'www':
                    main_domain = main_domain.replace("www.", "")  # removing the www
                if main_domain not in urls:  # maintaining the dictionary
                    urls[main_domain] = 1
                else:
                    urls[main_domain] += 1
            count += 1

    for key, value in urls.items():  # printing the dictionary for better readability
        print(key, value)
    return count

tldextract finds the proper domain name, and soup.find_all('a') finds the a tags. The if statements check for same-domain calls, javascript calls, or redirects to other domains.
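
A possible way to call this function, assuming the imports shown above (the Wikipedia URL is just an example):

total = get_count_url('https://en.wikipedia.org/wiki/Beautiful_Soup_(HTML_parser)')
print("Total links counted:", total)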