Issue
I'm new to Scrapy and trying to scrape a page that contains several links. I want to follow those links and scrape the content of each linked page, and each of those pages in turn has another link I want to scrape.
I tried the XPath in the Scrapy shell and it worked, but I don't know what I am missing here. I want to be able to crawl through two pages by following the links. I tried reading through the tutorials, but I still don't understand what I am missing.
This is my items.py file.
import scrapy

# item class included here
class ScriptsItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()
And here is my scripts.py file.
import scrapy
import ScriptsItem


class ScriptsSpider(scrapy.Spider):
    name = 'scripts'
    allowed_domains = ['https://www.imsdb.com/TV/Futurama.html']
    start_urls = ['http://https://www.imsdb.com/TV/Futurama.html/']
    BASE_URL = 'https://www.imsdb.com/TV/Futurama.html'

    def parse(self, response):
        links = response.xpath('//table//td//p//a//@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        item = ScriptsItem()
        item["link"] = response.url
        item["attr"] = "".join(response.xpath("//table[@class = 'script-details']//tr[2]//td[2]//a//text()").extract())
        return item
Solution
Replace
import ScriptsItem
with
from your_project_name.items import ScriptsItem
where your_project_name is the name of your Scrapy project.
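Scrapy generates a regular Python package when you run scrapy startproject, so items.py has to be imported through that package path rather than as a top-level module. As a minimal sketch, assuming the project was created with the hypothetical name scripts_project (substitute your actual project name), the top of scripts.py would look like this:

import scrapy
# "scripts_project" is an assumed project name used for illustration;
# use the package directory that contains your items.py and settings.py
from scripts_project.items import ScriptsItem

With that import fixed, ScriptsItem() can be instantiated in parse_attr exactly as in your code above.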
Answered By - SenPyDev