BeautifulSoup loop not iterating over the other nodes

Asked: 2019-01-08 07:56:20

Tags: python beautifulsoup

A very similar scenario came up in Getting from Clustered Nodes and similar questions, and I've been comparing my code against those answers. I can't work out why the for loop only grabs the text from the first element of the node instead of iterating over the others.

from requests import get
from bs4 import BeautifulSoup

url = 'https://shopee.com.my/'
l = []

headers = {'User-Agent': 'Googlebot/2.1 (+http://www.google.com/bot.html)'}

response = get(url, headers=headers)
html_soup = BeautifulSoup(response.text, 'html.parser')


def findDiv():
    try:
        for container in html_soup.find_all('div', {'class': 'section-trending-search-list'}):
            topic = container.select_one('div._1waRmo')
            if topic:
                print(1)
                d = {'Titles': topic.text.replace("\n", "")}
                print(2)
                l.append(d)
        return d
    except:
        d = None

findDiv()
print(l)

The HTML elements I'm trying to access:

2 Answers:

Answer 0 (score: 1)

Try this: toplevel finds the root of the options, then we find all the divs under it. I hope this is what you wanted.

from requests import get
from bs4 import BeautifulSoup

url = 'https://shopee.com.my/'
l = []

headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0'}

response = get(url, headers=headers)
html_soup = BeautifulSoup(response.text, 'html.parser')


def findDiv():
    try:
        toplevel = html_soup.select_one('._25qBG5')
        for container in toplevel.find_all('div'):
            topic = container.select_one('._1waRmo')
            if topic:
                print(1)
                d = {'Titles': topic.text.replace("\n", "")}
                print(2)
                l.append(d)
        return d
    except:
        d = None

findDiv()
print(l)
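Worth noting when adapting this: BeautifulSoup's `find()` treats a plain string as a tag *name*, so `soup.find('._25qBG5')` would look for a literal `<._25qBG5>` tag and return `None`; CSS class selectors need `select_one()`. A small offline check, using class names borrowed from this answer in a stand-in snippet:

```python
from bs4 import BeautifulSoup

# Small inline snippet standing in for the live page.
snippet = """
<div class="_25qBG5">
  <div class="_1waRmo">school backpack</div>
  <div class="_1waRmo">oppo case</div>
</div>
"""

soup = BeautifulSoup(snippet, 'html.parser')

# find() matches tag names, so this looks for a <._25qBG5> tag.
print(soup.find('._25qBG5'))  # None

# select_one() interprets the string as a CSS selector.
toplevel = soup.select_one('._25qBG5')
print([d.text for d in toplevel.select('._1waRmo')])  # ['school backpack', 'oppo case']
```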

This enumerates a local file just fine. When I tried the given url, the site did not return the html you showed.

from requests import get
from bs4 import BeautifulSoup

url = 'path_in_here\\test.html'
l = []

headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0'}

with open(url, "r") as example:
    text = example.read()

#response = get(url, headers=headers)
#html_soup = BeautifulSoup(response.text, 'html.parser')
html_soup = BeautifulSoup(text, 'html.parser')

print (text)

def findDiv():
    #try:
        print("finding toplevel")
        toplevel = html_soup.find("div", {"class": "_25qBG5"})
        print("found toplevel")
        divs = toplevel.findChildren("div", recursive=True)
        print("found divs")

        for container in divs:
            print("loop")
            topic = container.select_one('._1waRmo')  # note the underscore: '.1waRmo' is not a valid class selector
            if topic:
                print(1)
                d = {'Titles': topic.text.replace("\n", "")}
                print(2)
                l.append(d)
        return d
    #except:
    #    d = None
    #    print("error")

findDiv()
print(l)
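In bs4, `findChildren` is just an alias for `find_all`; the `recursive` flag is what matters. With `recursive=True` (the default) every descendant is searched, while `recursive=False` stops at direct children. A quick illustration with a stand-in snippet:

```python
from bs4 import BeautifulSoup

snippet = "<div id='root'><div id='child'><div id='grandchild'></div></div></div>"
root = BeautifulSoup(snippet, 'html.parser').find('div', id='root')

# Default: all descendant <div>s, however deeply nested.
print(len(root.find_all('div')))  # 2 (child + grandchild)

# recursive=False: only direct children of root.
print(len(root.find_all('div', recursive=False)))  # 1 (child only)
```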

Answer 1 (score: 1)

from requests import get
from bs4 import BeautifulSoup

url = 'https://shopee.com.my/'
l = []

headers = {'User-Agent': 'Googlebot/2.1 (+http://www.google.com/bot.html)'}

response = get(url, headers=headers)
html_soup = BeautifulSoup(response.text, 'html.parser')


def findDiv():
    try:
        for container in html_soup.find_all('div', {'class': '_25qBG5'}):
            topic = container.select_one('div._1waRmo')
            if topic:
                d = {'Titles': topic.text.replace("\n", "")}
                l.append(d)
        return d
    except:
        d = None

findDiv()
print(l)

Output:

[{'Titles': 'school backpack'}, {'Titles': 'oppo case'}, {'Titles': 'baby chair'}, {'Titles': 'car holder'}, {'Titles': 'sling beg'}]

Again, I'd suggest you use selenium. If you run this again, you'll see a different set of 5 dictionaries in the list. The site serves 5 random trending items on every request, but there is also a "change" button: with selenium you could click it and keep scraping the rest of the trending items.
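Whatever drives the browser, the parsing step can be kept separate so it is testable offline. The sketch below assumes the class names from the answers (`_25qBG5`, `_1waRmo`) still match the live page; with selenium you would feed it `driver.page_source` after each click on the change button.

```python
from bs4 import BeautifulSoup

def extract_titles(html):
    """Collect every trending title under the section container.

    `html` can come from requests, a local file, or selenium's
    driver.page_source -- the parsing is the same either way.
    """
    soup = BeautifulSoup(html, 'html.parser')
    titles = []
    for container in soup.select('._25qBG5'):    # container class from the answers above
        for topic in container.select('._1waRmo'):  # one element per trending item
            titles.append({'Titles': topic.text.replace('\n', '')})
    return titles

# Offline check with a stand-in snippet shaped like the expected markup.
sample = ('<div class="_25qBG5">'
          '<div class="_1waRmo">school backpack</div>'
          '<div class="_1waRmo">oppo case</div>'
          '</div>')
print(extract_titles(sample))  # [{'Titles': 'school backpack'}, {'Titles': 'oppo case'}]
```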
