Issue
I am trying to run my spider with multiprocessing. I know CrawlerProcess runs the spider in a single process, but I want to run the same spider multiple times concurrently with different arguments. I tried the code below, but it doesn't work. How do I do multiprocessing with Scrapy?
from scrapy.utils.project import get_project_settings
import multiprocessing
from scrapy.crawler import CrawlerProcess
process = CrawlerProcess(settings=get_project_settings())
process.crawl(Spider, data=all_batches[0])
process1 = CrawlerProcess(settings=get_project_settings())
process1.crawl(Spider, data=all_batches[1])
p1 = multiprocessing.Process(target=process.start())
p2 = multiprocessing.Process(target=process1.start())
p1.start()
p2.start()
Solution
You need to run each Scrapy crawler instance inside a separate process. This is because Scrapy is built on Twisted, whose reactor cannot be started more than once in the same process. You also need to disable the telnet console extension (TELNETCONSOLE_ENABLED), because otherwise each Scrapy process will try to bind to the same port.
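Note also that in the original attempt, multiprocessing.Process(target=process.start()) calls start() immediately in the parent process and hands Process its return value (None) as the target; the target argument must be the callable itself, with its arguments passed separately. A minimal sketch of the correct pattern, using a plain function and a Queue as a stand-in for the crawler:

```python
from multiprocessing import Process, Queue

def work(label, out):
    # Stand-in for crawler_process.start(); runs in the child process.
    out.put(f'done:{label}')

if __name__ == '__main__':
    results = Queue()
    # Pass the callable and its arguments separately.
    # target=work('batch_1', results) would run work() here in the
    # parent and give Process a target of None.
    p1 = Process(target=work, args=('batch_1', results))
    p2 = Process(target=work, args=('batch_2', results))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print(sorted(results.get() for _ in range(2)))
```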
Test code:
import scrapy
from multiprocessing import Process
from scrapy.crawler import CrawlerProcess


class TestSpider(scrapy.Spider):
    name = 'blogspider'
    start_urls = ['https://blog.scrapinghub.com']

    def parse(self, response):
        for title in response.css('.post-header>h2'):
            print('my_data -> ', self.settings['my_data'])
            yield {'title': title.css('a ::text').get()}


def start_spider(spider, settings: dict = None, data: dict = None):
    # Merge the caller's settings with the per-run data, and disable the
    # telnet console so concurrent processes don't fight over its port.
    all_settings = {**(settings or {}),
                    'my_data': data or {},
                    'TELNETCONSOLE_ENABLED': False}

    def crawler_func():
        # Each child process gets its own CrawlerProcess and Twisted reactor.
        crawler_process = CrawlerProcess(all_settings)
        crawler_process.crawl(spider)
        crawler_process.start()

    process = Process(target=crawler_func)
    process.start()
    return process


if __name__ == '__main__':
    processes = [
        start_spider(TestSpider, data={'data': 'test_1'}),
        start_spider(TestSpider, data={'data': 'test_2'}),
    ]
    # Join in a plain loop; a bare map() is lazy and would never run.
    for p in processes:
        p.join()
Answered By - Hernan Di Giorgi