Issue
I am facing an error with web crawling. I have been stuck on this for the past two days. Can anyone please guide me regarding this Scrapy error?
The error says: Spider error processing <GET http://books.toscrape.com/catalogue/category/books/historical-fiction_4/index.html> (referer: http://books.toscrape.com/) Traceback (most recent call last):
Here is the command prompt output with the error message:
2021-09-28 22:16:24 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024
2021-09-28 22:16:25 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/> (referer: None)
2021-09-28 22:16:25 [scrapy.core.scraper] DEBUG: Scraped from <200 http://books.toscrape.com/>
{'Category_Name': 'Historical Fiction', 'Kategorylink': 'http://books.toscrape.com/catalogue/category/books/historical-fiction_4/index.html'}
2021-09-28 22:16:25 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/category/books/historical-fiction_4/index.html> (referer: http://books.toscrape.com/)
2021-09-28 22:16:26 [scrapy.core.scraper] ERROR: Spider error processing <GET http://books.toscrape.com/catalogue/category/books/historical-fiction_4/index.html> (referer: http://books.toscrape.com/)
Traceback (most recent call last):
File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\utils\defer.py", line 120, in iter_errback
yield next(it)
File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\utils\python.py", line 353, in __next__
return next(self.data)
File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\utils\python.py", line 353, in __next__
return next(self.data)
File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\core\spidermw.py", line 56, in _evaluate_iterable
for r in iterable:
File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
for x in result:
File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\core\spidermw.py", line 56, in _evaluate_iterable
for r in iterable:
File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 342, in <genexpr>
return (_set_referer(r) for r in result or ())
File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\core\spidermw.py", line 56, in _evaluate_iterable
for r in iterable:
File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\spidermiddlewares\urllength.py",
line 40, in <genexpr>
return (r for r in result or () if _filter(r))
File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\core\spidermw.py", line 56, in _evaluate_iterable
for r in iterable:
File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
return (r for r in result or () if _filter(r))
File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\core\spidermw.py", line 56, in _evaluate_iterable
for r in iterable:
File "D:\tutorials\WEB scrapping\web scraping practice projects\scrapybooksspider\scrapybooksspider\spiders\selnext.py", line 23, in info_parse
Category_Name=response.request.meta('category_name')
TypeError: 'dict' object is not callable
2021-09-28 22:16:26 [scrapy.core.engine] INFO: Closing spider (finished)
2021-09-28 22:16:26 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 529,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 11464,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'elapsed_time_seconds': 1.373366,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2021, 9, 29, 5, 16, 26, 107603),
'httpcompression/response_bytes': 101403,
'httpcompression/response_count': 2,
'item_scraped_count': 1,
'log_count/DEBUG': 3,
'log_count/ERROR': 1,
'log_count/INFO': 10,
'request_depth_max': 1,
'response_received_count': 2,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'spider_exceptions/TypeError': 1,
'start_time': datetime.datetime(2021, 9, 29, 5, 16, 24, 734237)}
2021-09-28 22:16:26 [scrapy.core.engine] INFO: Spider closed (finished)
Here is my code:
import scrapy
from scrapy.http import HtmlResponse
import requests
from bs4 import BeautifulSoup

class ScrapSpider(scrapy.Spider):
    name = 'scrapp'
    allowed_domains = ['books.toscrape.com']
    start_urls = ['http://books.toscrape.com/']

    def parse(self, response):
        categ = response.xpath('//div[@class="side_categories"]/ul[@class="nav nav-list"]/li/ul/li')
        # for category in categ:
        Category_Name = categ.xpath('.//a[contains(text(),"Historical Fiction")]/text()').get().replace('\n', "").strip()
        Kategorylink = categ.xpath('.//a[contains(text(),"Historical Fiction")]/@href').get().replace('\n', "").strip()
        yield {
            'Category_Name': Category_Name,
            'Kategorylink': response.urljoin(Kategorylink)
        }
        yield scrapy.Request(url=response.urljoin(Kategorylink), callback=self.info_parse, meta={'category_name': Category_Name, 'category_link': Kategorylink})

    def info_parse(self, response):
        Category_Name = response.request.meta('category_name')
        Kategorylink = response.request.meta('category_link')
        Book_Frame = response.xpath('//section/div/ol/li/article[@class="product_pod"]/h3/a/@href')
        for books in Book_Frame:
            yield scrapy.Request(url=response.urljoin(books), callback=self.book_info)

    def book_info(self, response):
        Category_Name = response.request.meta('category_name')
        Kategorylink = response.request.meta('category_link')
        name = response.xpath('//*[@class="price_color"]/text()').get()
        yield {
            'Category_Name': Category_Name,
            'Categorylink': Kategorylink,
            'Books': name
        }
Waiting for your awesome support. Thanks!
Solution
You have three problems:
- response.request.meta is a dict, not a function, so response.request.meta('category_name') raises TypeError: 'dict' object is not callable. Change it to response.meta.get('category_name') (see the sketch after this list).
- In yield scrapy.Request(url=response.urljoin(books), callback=self.book_info), books is a Selector, not a string, so response.urljoin() cannot join it. Change it to response.follow(url=books, callback=self.book_info); response.follow accepts attribute Selectors and resolves relative URLs for you.
- You forgot to pass the meta data on to the book_info callback.
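To see why the first fix matters, here is a minimal sketch; the dict below is just a hypothetical stand-in for a request's meta:

meta = {'category_name': 'Historical Fiction'}

# Parentheses try to *call* the dict, which is what the spider did:
# meta('category_name')           -> TypeError: 'dict' object is not callable

# Square brackets index it (raises KeyError if the key is missing):
print(meta['category_name'])      # Historical Fiction

# .get() also looks up the key, but returns None if it is missing:
print(meta.get('category_link'))  # None

Here is the corrected spider with all three fixes applied: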
import scrapy

class ScrapSpider(scrapy.Spider):
    name = 'scrapp'
    allowed_domains = ['books.toscrape.com']
    start_urls = ['http://books.toscrape.com/']

    def parse(self, response):
        categ = response.xpath('//div[@class="side_categories"]/ul[@class="nav nav-list"]/li/ul/li')
        # for category in categ:
        Category_Name = categ.xpath('.//a[contains(text(),"Historical Fiction")]/text()').get().replace('\n', "").strip()
        Kategorylink = categ.xpath('.//a[contains(text(),"Historical Fiction")]/@href').get().replace('\n', "").strip()
        yield {
            'Category_Name': Category_Name,
            'Kategorylink': response.urljoin(Kategorylink)
        }
        yield scrapy.Request(url=response.urljoin(Kategorylink), callback=self.info_parse, meta={'category_name': Category_Name, 'category_link': Kategorylink})

    def info_parse(self, response):
        # Fix 1: meta is a dict, so look keys up with .get() instead of calling it
        Category_Name = response.meta.get('category_name')
        Kategorylink = response.meta.get('category_link')
        Book_Frame = response.xpath('//section/div/ol/li/article[@class="product_pod"]/h3/a/@href')
        for books in Book_Frame:
            # Fixes 2 and 3: response.follow handles the relative href Selector,
            # and the meta data is now passed on to book_info
            yield response.follow(url=books, callback=self.book_info, meta={'category_name': Category_Name, 'category_link': Kategorylink})

    def book_info(self, response):
        Category_Name = response.meta.get('category_name')
        Kategorylink = response.meta.get('category_link')
        name = response.xpath('//*[@class="price_color"]/text()').get()
        yield {
            'Category_Name': Category_Name,
            'Categorylink': Kategorylink,
            'Books': name
        }
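To check the output quickly, you can run the spider from the project directory and export the scraped items to a file; books.json is just an example file name here, and the -O (overwrite output) flag assumes Scrapy 2.1 or newer:

scrapy crawl scrapp -O books.json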
Answered By - SuperUser