Issue
I have a page whose source I need to fetch and parse with BeautifulSoup, but the middle of the page takes about a second (maybe less) to load its content, and requests.get captures the page source before that section has loaded. How can I wait a moment before getting the data?
import requests
from bs4 import BeautifulSoup

r = requests.get(URL + self.search, headers=USER_AGENT, timeout=5)
soup = BeautifulSoup(r.content, 'html.parser')
a = soup.find_all('section', 'wrapper')
This is the element I am trying to capture:
<section class="wrapper" id="resultado_busca">
Solution
It doesn't look like a problem of waiting: the element is being created by JavaScript, and requests can't handle elements that are generated dynamically on the client side. A suggestion is to use selenium together with PhantomJS to get the rendered page source; then you can use BeautifulSoup for the parsing. The code below does exactly that:
from bs4 import BeautifulSoup
from selenium import webdriver

url = "http://legendas.tv/busca/walking%20dead%20s03e02"

# PhantomJS executes the page's JavaScript, so the dynamically generated
# section exists by the time the source is read.
browser = webdriver.PhantomJS()
browser.get(url)
html = browser.page_source

soup = BeautifulSoup(html, 'lxml')
a = soup.find('section', 'wrapper')
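If PhantomJS isn't available (newer Selenium releases dropped support for it), the same approach works with a headless Chrome or Firefox driver, and an explicit wait replaces any fixed sleep. Below is a minimal sketch, assuming Chrome and a matching chromedriver are installed; the resultado_busca id comes from the markup in the question:

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

url = "http://legendas.tv/busca/walking%20dead%20s03e02"

options = webdriver.ChromeOptions()
options.add_argument("--headless")   # run without opening a browser window

browser = webdriver.Chrome(options=options)
try:
    browser.get(url)
    # Wait (up to 10 s) until the JavaScript-generated section exists,
    # instead of sleeping for a fixed amount of time.
    WebDriverWait(browser, 10).until(
        EC.presence_of_element_located((By.ID, "resultado_busca"))
    )
    soup = BeautifulSoup(browser.page_source, "lxml")
    section = soup.find("section", "wrapper")
finally:
    browser.quit()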
Also, there's no need to use .find_all() if you are only looking for one element; .find() returns just the first match.
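As a quick illustration of the difference (a throwaway snippet, not the page from the question): .find() returns the first matching Tag or None, while .find_all() always returns a list that has to be indexed before you can read attributes from it.

from bs4 import BeautifulSoup

html = '<section class="wrapper" id="resultado_busca">hit</section>'
soup = BeautifulSoup(html, "html.parser")

first = soup.find("section", "wrapper")        # Tag or None
every = soup.find_all("section", "wrapper")    # always a list

print(first["id"])       # resultado_busca
print(every[0]["id"])    # same element, but needs indexing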
Answered By - Vinícius Figueiredo