Find all links in HTML parsed with BeautifulSoup

Date: 2018-05-24 11:54:58

Tags: python regex web-scraping beautifulsoup

I am using Python's BeautifulSoup. In the scraped page, the links are not contained in <a href> tags.

I want to use soup operations to get all the links that start with http/https. I have tried some regexes given here, but they gave me unexpected results, so I am wondering whether it is possible with soup alone.

A sample response from which I want to extract the links:

<html>\n<head>\n</head>\n<link href="https://fonts.googleapis.com/css?family=Open+Sans:600" rel="stylesheet"/>\n<style>\n    html, body {\n    height: 100%;\n    width: 100%;\n    }\n\n    body {\n    background: #F5F6F8;\n    font-size: 16px;\n    font-family: \'Open Sans\', sans-serif;\n    color: #2C3E51;\n    }\n    .main {\n    display: flex;\n    align-items: center;\n    justify-content: center;\n    height: 100vh;\n    }\n    .main > div > div,\n    .main > div > span {\n    text-align: center;\n    }\n    .main span {\n    display: block;\n    padding: 80px 0 170px;\n    font-size: 3rem;\n    }\n    .main .app img {\n    width: 400px;\n    }\n  </style>\n<script type="text/javascript">\n      var fallback_url = "null";\n      var store_link = "itms-apps://itunes.apple.com/GB/app/id1032680895?ls=1&mt=8";\n      var web_store_link = "https://itunes.apple.com/GB/app/id1032680895?mt=8";\n      var loc = window.location;\n      function redirect_to_web_store(loc) {\n        loc.href = web_store_link;\n      }\n      function redirect(loc) {\n        loc.href = store_link;\n        if (fallback_url.startsWith("http")) {\n          setTimeout(function() {\n            loc.href = fallback_url;\n          },5000);\n        }\n      }\n  </script>\n<body onload="redirect(loc)">\n<div class="main">\n<div class="workarea">\n<div class="logo">\n<img onclick="redirect_to_web_store(loc)" src="https://cdnappicons.appsflyer.com/app|id1032680895.png" style="width:200px;height:200px;border-radius:20px;"/>\n</div>\n<span>BetBull: Sports Betting &amp; Tips</span>\n<div class="app">\n<img onclick="redirect_to_web_store(loc)" src="https://cdn.appsflyer.com/af-statics/images/rta/app_store_badge.png"/>\n</div>\n</div>\n</div>\n</body>\n</html>

What I tried:

import re
from bs4 import BeautifulSoup

# match anything URL-like; the protocol part is optional
regex_pattern_to_find_all_links = r'(?:(?:https?|ftp):\/\/)?[\w/\-?=%.]+\.[\w/\-?=%.]+'
soup = BeautifulSoup(resp.read(), 'html.parser')
urls = re.findall(regex_pattern_to_find_all_links, str(soup))

Result:

['https://fonts.googleapis.com/css?family=Open', '//itunes.apple.com/GB/app/id1032680895?ls=1', 'https://itunes.apple.com/GB/app/id1032680895?mt=8', 'window.location', 'loc.href', 'loc.href', 'fallback_url.startsWith', 'loc.href', 'https://cdnappicons.appsflyer.com/app', 'id1032680895.png', 'https://cdn.appsflyer.com/af-statics/images/rta/app_store_badge.png']

As you can see above, I am not sure why the regex matches things that are not even URLs.

The most popular and accepted answer here does not detect the links at all! I am not sure what I am doing wrong.
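If the goal is to do it "with soup", one option is to walk every tag and collect attribute values that start with http/https, instead of regex-matching the serialized markup. Below is a minimal sketch; the html string is a trimmed stand-in for the response above. Note that this only finds URLs stored in tag attributes (href, src, ...); URLs embedded in <script> or <style> text would still need a regex over those text nodes.

```python
from bs4 import BeautifulSoup

# trimmed stand-in for the scraped response shown above
html = '''<html><head>
<link href="https://fonts.googleapis.com/css?family=Open+Sans:600" rel="stylesheet"/>
</head><body>
<img src="https://cdn.appsflyer.com/af-statics/images/rta/app_store_badge.png"/>
</body></html>'''

soup = BeautifulSoup(html, 'html.parser')

urls = []
for tag in soup.find_all(True):  # True matches every tag
    for value in tag.attrs.values():
        # attribute values can be lists (e.g. class), so keep strings only
        if isinstance(value, str) and value.startswith(('http://', 'https://')):
            urls.append(value)

print(urls)
```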

1 answer:

Answer 0 (score: 1)

The problem is that you made the protocol optional: the engine is not forced to match it as long as the rest of the pattern succeeds. Try this:

(?:(?:https?|ftp):\/\/|\bwww\.)[^\s"']+

Not bulletproof, but better. It matches strings that start with http(s) or ftp, or that have no protocol but start with www.

See a live demo here.
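Applied in Python with re.findall, the pattern looks like this; the sample text below is a hypothetical snippet modeled on the response in the question (in a Python raw string the forward slashes need no escaping):

```python
import re

# require a protocol or a leading "www.", then take everything
# up to whitespace or a quote character
pattern = r'(?:(?:https?|ftp)://|\bwww\.)[^\s"\']+'

text = '''var web_store_link = "https://itunes.apple.com/GB/app/id1032680895?mt=8";
<img src="https://cdn.appsflyer.com/af-statics/images/rta/app_store_badge.png"/>
Visit www.example.com for more.'''

urls = re.findall(pattern, text)
print(urls)
```

Because the character class stops at quotes, URLs inside attribute values or JavaScript string literals end cleanly at the closing quote.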