Get image src and save the images to a directory with a Python image crawler

Time: 2016-04-26 09:06:22

Tags: python image web-crawler

I want to create a Python image crawler.

This is what I have so far:

from bs4 import BeautifulSoup
from urllib.request import urlopen
url = 'http://blog.pouyacode.net/'
data = urlopen(url)
soup = BeautifulSoup(data, 'html.parser')
img = soup.findAll('img')
print (img)
print ('\n')
print ('****************************')
print ('\n')
for each in img:
    print(img.get('src'))
    print ('\n')

This part works:

print (img)
print ('\n')
print ('****************************')
print ('\n')

But after the ***************** in the output, the following error appears:

Traceback (most recent call last):
File "pull.py", line 15, in <module>
print(img.get('src'))
AttributeError: 'ResultSet' object has no attribute 'get'

So how do I get the src of all the images? And how do I save those images to a directory?

1 answer:

Answer 0 (score: 2)

Something like this? Written from memory and untested:

from bs4 import BeautifulSoup
from urllib.request import urlopen
import os

url = 'http://blog.pouyacode.net/'
download_folder = "downloads"

# Create the target directory if it does not exist yet
if not os.path.exists(download_folder):
    os.makedirs(download_folder)

data = urlopen(url)
soup = BeautifulSoup(data, 'html.parser')
img = soup.findAll('img')

for each in img:
    # Call get('src') on each individual tag, not on the whole ResultSet
    url = each.get('src')
    data = urlopen(url)
    # Save the image under the last path segment of its URL as the file name
    with open(os.path.join(download_folder, os.path.basename(url)), "wb") as f:
        f.write(data.read())
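One caveat with the answer above: an img tag's src may be a relative URL (e.g. wp-content/uploads/pic.png), which urlopen cannot fetch on its own. A small sketch using urllib.parse.urljoin to resolve such paths against the page URL before downloading (the example paths here are hypothetical, not taken from the blog):

```python
from urllib.parse import urljoin

page_url = 'http://blog.pouyacode.net/'

# urljoin resolves a relative src against the page URL;
# an already-absolute src passes through unchanged.
relative_src = 'wp-content/uploads/pic.png'
absolute_src = 'http://cdn.example.com/pic.png'

print(urljoin(page_url, relative_src))  # http://blog.pouyacode.net/wp-content/uploads/pic.png
print(urljoin(page_url, absolute_src))  # http://cdn.example.com/pic.png
```

In the download loop this would mean calling urlopen(urljoin(url, each.get('src'))) instead of urlopen(each.get('src')).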