Issue
I am trying to scrape data from different URLs and save the data in CSV files named after the top-level domain of the scraped URL.
For example, if I am scraping data from https://www.example.com/event/abc, the saved file should be named example.com.
The data itself is scraped correctly, but I have not been able to save the file with the proper filename.
Code
from urllib.parse import urlparse

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class myCrawler(CrawlSpider):
    name = 'testing'
    rotate_user_agent = True
    base_url = ''
    start_urls = []
    allowed_domains = ''
    handle_httpstatus_list = [404, 403]

    custom_settings = {
        # in order to reduce the risk of getting blocked
        'DOWNLOADER_MIDDLEWARES': {
            'sitescrapper.sitescrapper.middlewares.RotateUserAgentMiddleware': 400,
            'sitescrapper.sitescrapper.middlewares.ProjectDownloaderMiddleware': 543,
        },
        'COOKIES_ENABLED': False,
        'CONCURRENT_REQUESTS': 6,
        'DOWNLOAD_DELAY': 2,
        'DEPTH_LIMIT': 1,
        'CELERYD_MAX_TASKS_PER_CHILD': 1,
        # Duplicates pipeline
        'ITEM_PIPELINES': {'sitescrapper.sitescrapper.pipelines.DuplicatesPipeline': 300},
        # In order to create a CSV file (this is the line that does not work):
        'FEEDS': {'%(allowed_domains).csv': {'format': 'csv'}},
    }

    def __init__(self, category='', **kwargs):
        self.base_url = category
        # keep only the registered domain, e.g. example.com
        self.allowed_domains = ['.'.join(urlparse(self.base_url).netloc.split('.')[-2:])]
        self.start_urls.append(self.base_url)
        print(f"Base url is {self.base_url} and allowed domain is {self.allowed_domains}")
        # rules must be set before super().__init__, which compiles them
        self.rules = (
            Rule(
                LinkExtractor(allow_domains=self.allowed_domains),
                process_links=process_links,  # defined elsewhere in the project
                callback='parse_item',
                follow=True,
            ),
        )
        super().__init__(**kwargs)
Thanks in advance
Solution
We can specify the download location and set the filename dynamically by using

'FEEDS': {"./scraped_urls/%(file_name)s": {"format": "csv"}},

in custom_settings. In a feed URI, Scrapy replaces any %(param)s placeholder with the spider attribute of the same name, so the spider has to define a file_name attribute. The original setting fails because %(allowed_domains) is missing the trailing s conversion character and allowed_domains is a list, not a string.
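A minimal sketch of how the spider's __init__ could set that attribute, assuming the feed URI above and that the .csv extension belongs in file_name (the name file_name is just a convention here; any spider attribute referenced in the URI works):

from urllib.parse import urlparse

from scrapy.spiders import CrawlSpider


class myCrawler(CrawlSpider):
    name = 'testing'
    custom_settings = {
        # %(file_name)s is replaced with the spider's file_name attribute
        'FEEDS': {'./scraped_urls/%(file_name)s': {'format': 'csv'}},
    }

    def __init__(self, category='', **kwargs):
        self.base_url = category
        # e.g. https://www.example.com/event/abc -> example.com
        domain = '.'.join(urlparse(self.base_url).netloc.split('.')[-2:])
        self.allowed_domains = [domain]
        self.start_urls = [self.base_url]
        # Scrapy reads this attribute when it opens the feed,
        # producing ./scraped_urls/example.com.csv
        self.file_name = f"{domain}.csv"
        super().__init__(**kwargs)

Running scrapy crawl testing -a category=https://www.example.com/event/abc then writes the items to ./scraped_urls/example.com.csv.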
Answered By - imhans4305