Issue
I need help with a specific scenario.
Scenario
- Calling site
I can get this information from a <script> tag.
Using the key, I have to call this endpoint to retrieve the SessionID, which is stored inside the JavaScript response:
-- omitted
private._sessID='MYSESSIONID';
-- omitted
Finally, using this SessionID and performing the right POST actions, I can navigate to all the pages I need.
My stalemate
I'm able to simulate all the steps using scrapy shell with regular expressions (and everything works fine), but I don't know how to manage these steps inside a Scrapy spider before starting the data extraction.
Can someone help me out?
Solution
You need to start with the base URL http://www.example.com/index.php by requesting it in the start_requests method, extract the required information in that request's callback, pass the result along to the next callback, and only then start the actual scraping.
You can implement it in the following way:
class MySpider(scrapy.Spider):
    name = 'example'

    def start_requests(self):
        # fetch the base URL first; its callback extracts the token
        yield scrapy.Request(url, callback=self.parse_authentication_token)

    def parse_authentication_token(self, response):
        # extract the token (or whatever is required), then schedule
        # the requests that perform the actual data extraction
        yield scrapy.Request(data_url, callback=self.parse)
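The extraction inside parse_authentication_token can be a plain regular expression, mirroring what the question already does in scrapy shell. A minimal sketch, assuming the JavaScript response contains the private._sessID='...' assignment shown in the question (the function name and everything else here is illustrative):

```python
import re

# matches the assignment shown in the question's JavaScript response,
# e.g.  private._sessID='MYSESSIONID';
SESSID_RE = re.compile(r"private\._sessID\s*=\s*'([^']+)'")

def extract_session_id(js_text):
    """Return the SessionID embedded in the JavaScript response, or None."""
    match = SESSID_RE.search(js_text)
    return match.group(1) if match else None
```

Inside the callback you would call something like extract_session_id(response.text) and carry the result into the subsequent POST requests (via form data, headers, or cookies, depending on what the site expects).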
Answered By - Ahmed Buksh