Python 3 code issue

Date: 2017-04-28 23:24:10

Tags: python python-3.x web-scraping

I'm lost with my problem. This is what shows up in the terminal, and I get a CSV with no information in it.

$ python3 test1.py
LIST -->
Traceback (most recent call last):
  File "test1.py", line 162, in <module>
    search_bing(i)
  File "test1.py", line 131, in search_bing
    driver.get("https://duckduckgo.com/?q=linkedin+" + n + "&t=hb&ia=web")
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 264, in get
    self.execute(Command.GET, {'url': url})
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 252, in execute
    self.error_handler.check_response(response)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py", line 194, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Runtime.executionContextCreated has invalid 'context': {"auxData":{"frameId":"40864.1","isDefault":true},"id":1,"name":"","origin":"://"}
  (Session info: chrome=58.0.3029.81)
  (Driver info: chromedriver=2.9.248307,platform=Mac OS X 10.12.4 x86_64)

The full script is below. You can ignore the group-code input, since the HTML from the site I'm scraping goes there and this post is long enough already.

# -*- coding: utf-8 -*-

from bs4 import BeautifulSoup
from selenium import webdriver
import time
import csv

c = csv.writer(open("linkedin-group-results.csv", "w"))
c.writerow(["Member","Profile"])
driver = webdriver.Chrome(executable_path=r'/usr/local/bin/chromedriver')


your_groups_code = """

#enter group code here
"""

users = []
ul = []
def search_bing(name):
    n = name.replace(" ", "+")
    driver.get("https://duckduckgo.com/?q=linkedin+" + n + "&t=hb&ia=web")
    time.sleep(3)
    s = BeautifulSoup(driver.page_source, 'lxml')
    fr = s.find("div", class_="result__body links_main links_deep")

    for a in fr.find_all('a'):
        try:
            if 'linkedin.com/in' in a['href']:
                print('found linkedin url', a['href'])
                if a['href'] in ul:
                    print('skipping dup')
                else:
                    ul.append(a['href'])
                    c.writerow([name, a['href']])
                    break
        except Exception as e:
            print(e, '..continue')


soup = BeautifulSoup(your_groups_code, 'lxml')
for a in soup.find_all('img'):
    name = a['alt']
    if name in users:
        print('skipping dup')
    else:
        users.append(name)

if len(users) > 1:
    print('LIST -->', users)
    for i in users:
        print("Scraping", i)
        search_bing(i)
else:
    print("Congrats! You're making progress.. Now please insert the code of "
          "the linkedin group you want to scrape (as seen in tutorial)")

2 Answers:

Answer 0 (score: 0):

You've cut out a lot of the code, which makes this hard to debug. From what I can tell, your code fails inside the search_bing() method when you call driver.get(). I tried a simplified version of this code and it works, so I'd suggest figuring out whether there is something wrong with the 'name' var you pass into search_bing().

#! /usr/bin/env python    
from bs4 import BeautifulSoup
from selenium import webdriver
import time
import csv

c = csv.writer(open("linkedin-group-results.csv", "w"))
c.writerow(["Member","Profile"])
driver = webdriver.Chrome(executable_path=r'/usr/local/bin/chromedriver')

name = 'John Smith'
n = name.replace(" ", "+")
driver.get("https://duckduckgo.com/?q=linkedin+" + n + "&t=hb&ia=web")
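One related thing worth checking (a sketch of my own, not something the original code does): name.replace(" ", "+") only handles spaces, so a name containing characters like & or ' would produce a malformed query string. The standard library can encode the whole value safely:

```python
from urllib.parse import quote_plus

def build_query_url(name):
    # quote_plus percent-encodes unsafe characters and turns spaces into '+',
    # which is the form a query string expects.
    return "https://duckduckgo.com/?q=linkedin+" + quote_plus(name) + "&t=hb&ia=web"

print(build_query_url("John Smith"))
# https://duckduckgo.com/?q=linkedin+John+Smith&t=hb&ia=web
print(build_query_url("O'Brien & Co"))
# https://duckduckgo.com/?q=linkedin+O%27Brien+%26+Co&t=hb&ia=web
```

If one of the scraped names contains such a character, the manual replace() could be the source of an unexpected driver.get() failure.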

Answer 1 (score: 0):

You seem to be using the old ChromeDriver 2.9, which is most likely not compatible with Chrome 58. Please download and try the latest version, 2.29. See the release notes: https://chromedriver.storage.googleapis.com/2.29/notes.txt
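To make this kind of mismatch fail fast in a script, one could compare the installed driver version against a minimum before starting the browser. A minimal sketch (my own helper, not part of the original answer; the 2.29 minimum comes from the advice above, and the installed version can be read from `chromedriver --version` on the command line):

```python
def chromedriver_is_new_enough(installed, minimum="2.29"):
    # Compare version strings numerically, component by component,
    # so that "2.9" correctly sorts below "2.29".
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(minimum)

print(chromedriver_is_new_enough("2.9.248307"))  # the asker's driver -> False
print(chromedriver_is_new_enough("2.29"))        # recommended version -> True
```

A plain string comparison would get this wrong ("2.9" > "2.29" lexicographically), which is easy to miss when eyeballing driver versions.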
