Issue
I want to follow all the links on a website and get the HTTP status of every link (e.g., 404, 200). I tried this:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

class someSpider(CrawlSpider):
    name = 'linkscrawl'
    item = []
    allowed_domains = ['mysite.com']
    start_urls = ['//mysite.com/']
    rules = (
        Rule(LinkExtractor(), callback="parse_obj", follow=True),
    )

    def parse_obj(self, response):
        item = response.url
        print(item)
I can see the links printed on the console, but without their status codes:
mysite.com/navbar.html
mysite.com/home
mysite.com/aboutus.html
mysite.com/services1.html
mysite.com/services3.html
mysite.com/services5.html
But how do I save all the links to a text file together with their status codes?
Solution
I solved it as shown below. Hope this helps anyone who needs it.
import scrapy
# the old scrapy.contrib paths are deprecated and removed in newer Scrapy
# versions; spiders and link extractors now live at the top level
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class LinkscrawlItem(scrapy.Item):
    # define the fields for your item here, like:
    link = scrapy.Field()
    attr = scrapy.Field()

class someSpider(CrawlSpider):
    name = 'linkscrawl'
    allowed_domains = ['mysite.com']
    # the start URL needs a scheme, otherwise Scrapy raises
    # "Missing scheme in request url"
    start_urls = ['http://www.mysite.com/']
    rules = (
        Rule(LinkExtractor(), callback="parse_obj", follow=True),
    )

    def parse_obj(self, response):
        item = LinkscrawlItem()
        item["link"] = str(response.url) + ":" + str(response.status)
        filename = 'links.txt'
        # append each crawled URL and its HTTP status to the text file
        with open(filename, 'a') as f:
            f.write('\n' + str(response.url) + ":" + str(response.status) + '\n')
        self.log('Saved file %s' % filename)
        # yield the item so Scrapy's pipelines and feed exports can see it too
        yield item
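One caveat: by default Scrapy's HttpErrorMiddleware drops non-2xx responses before they reach the callback, so the spider above will mostly record 200s. To capture 404s as well, the spider can declare the extra status codes it wants to handle via the standard handle_httpstatus_list attribute. A minimal sketch of that tweak (same spider as above, yielding a plain dict to keep the example self-contained):

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class someSpider(CrawlSpider):
    name = 'linkscrawl'
    allowed_domains = ['mysite.com']
    start_urls = ['http://www.mysite.com/']
    # let 404 responses through to parse_obj instead of being dropped by
    # HttpErrorMiddleware; alternatively set HTTPERROR_ALLOW_ALL = True in
    # settings.py to let every status code through
    handle_httpstatus_list = [404]
    rules = (
        Rule(LinkExtractor(), callback="parse_obj", follow=True),
    )

    def parse_obj(self, response):
        # response.status is now whatever the server returned, 200 or 404
        yield {"link": "%s:%s" % (response.url, response.status)}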
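Since the callback yields an item, you can also drop the manual file handling entirely and let Scrapy's feed exports write the output for you: run scrapy crawl linkscrawl -o links.csv and each yielded item becomes a row in links.csv.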
Answered By - bhattraideb