Issue
I am trying to extract data from a table of active bids on this site. I am a Scrapy newbie and stuck on why no files get downloaded: I can output the file URLs, but Scrapy never downloads the files from them. I cannot figure out what I am missing or need to change. Any help would be highly appreciated, thanks!
Here is my spider so far:
import scrapy, urllib.parse
from government.items import GovernmentItem


class AlabamaSpider(scrapy.Spider):
    name = 'alabama'
    allowed_domains = ['purchasing.alabama.gov']

    def start_requests(self):
        url = 'https://purchasing.alabama.gov/active-statewide-contracts/'
        yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        for row in response.xpath('//*[@class="table table-bordered table-responsive-sm"]//tbody//tr'):
            yield {
                'Description': row.xpath('normalize-space(./td[@class="col-sm-5"])').extract_first(),
                'Bid File': row.xpath('td[@class="col-sm-1"]/a//@href').extract_first(),
                'Begin Date': row.xpath('normalize-space(./td[@class="col-sm-1"][2])').extract_first(),
                'End Date': row.xpath('normalize-space(./td[@class="col-sm-1"][3])').extract_first(),
                'Buyer Name': row.xpath('td[@class="col-sm-3"]/a//text()').extract_first(),
                'Vendor Websites': row.xpath('td[@class="col-sm-1"]/label/text()').extract_first(),
            }

    def parse_item(self, response):
        file_url = response.xpath('td[@class="col-sm-1"]/a//@href').get()
        #file_url = response.urljoin(file_url)
        item = GovernmentItem()
        item['file_urls'] = [file_url]
        yield item
Here is items.py:
from scrapy.item import Item, Field
import scrapy


class GovernmentItem(Item):
    file_urls = Field()
    files = Field()
Here is my settings.py:
BOT_NAME = 'government'
SPIDER_MODULES = ['government.spiders']
NEWSPIDER_MODULE = 'government.spiders'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure item pipelines
ITEM_PIPELINES = {
    'government.pipelines.GovernmentPipeline': 1,
    'scrapy.pipelines.files.FilesPipeline': 1,
}
FILES_STORE = '/home/ken/Desktop/Projects/scrapy/government'
FILES_URL_FIELD = 'field_urls'
FILES_RESULT_FIELD = 'files'
MEDIA_ALLOW_REDIRECTS = True
DOWNLOAD_DELAY = 1
Solution
There are a few problems with your code:
- You never call the parse_item function.
- The XPath in parse_item,
  response.xpath('td[@class="col-sm-1"]/a//@href').get()
  returns None: you forgot the leading '//'.
- You need to download each file separately, so collect the download links with getall() and yield one item per link.
- As a side note, FILES_URL_FIELD = 'field_urls' in your settings.py looks like a typo twice over: the real setting is named FILES_URLS_FIELD, and its default is already 'file_urls', which matches your item field. As written the line is ignored, so you can simply delete it.
The corrected code:
def parse_all_items(self, response):
    all_urls = response.xpath('//td[@class="col-sm-1"]/a//@href').getall()
    base_url = 'https://purchasing.alabama.gov'
    for url in all_urls:
        item = GovernmentItem()
        item['file_urls'] = [base_url + url]
        yield item
It will download all the files. Just make sure you actually route a request to this callback, e.g. callback=self.parse_all_items.
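One caveat with base_url + url: it only works while every href on the page is site-relative. The standard-library urljoin (which is also what Scrapy's response.urljoin delegates to) handles both relative and absolute hrefs; the URLs below are made-up examples:

```python
from urllib.parse import urljoin

base_url = 'https://purchasing.alabama.gov'

# A site-relative href joins onto the base:
print(urljoin(base_url, '/api/filedownload?file=bid1.pdf'))
# https://purchasing.alabama.gov/api/filedownload?file=bid1.pdf

# An absolute href passes through untouched, where naive
# concatenation would produce a broken double-scheme URL:
print(urljoin(base_url, 'https://example.com/doc.pdf'))
# https://example.com/doc.pdf
```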
Alternative solution: Use the parse function you already have:
def parse(self, response):
    base_url = 'https://purchasing.alabama.gov'
    for row in response.xpath('//*[@class="table table-bordered table-responsive-sm"]//tbody//tr'):
        url = row.xpath('td[@class="col-sm-1"]/a//@href').extract_first()
        yield {
            'Description': row.xpath('normalize-space(./td[@class="col-sm-5"])').extract_first(),
            'Bid File': row.xpath('td[@class="col-sm-1"]/a//@href').extract_first(),
            'Begin Date': row.xpath('normalize-space(./td[@class="col-sm-1"][2])').extract_first(),
            'End Date': row.xpath('normalize-space(./td[@class="col-sm-1"][3])').extract_first(),
            'Buyer Name': row.xpath('td[@class="col-sm-3"]/a//text()').extract_first(),
            'Vendor Websites': row.xpath('td[@class="col-sm-1"]/label/text()').extract_first(),
        }
        if url:
            item = GovernmentItem()
            item['file_urls'] = [base_url + url]
            yield item
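With either version, FilesPipeline saves each download under FILES_STORE/full/, with a filename derived from the SHA-1 hash of the file URL plus a guessed extension (Scrapy's default file_path behaviour). Assuming a hypothetical download URL, you can predict where a file will land:

```python
import hashlib

# Hypothetical file URL, only for illustration.
file_url = 'https://purchasing.alabama.gov/api/filedownload?file=bid1.pdf'

# Scrapy's default FilesPipeline hashes the request URL with SHA-1
# and stores the file under the 'full/' subdirectory of FILES_STORE.
digest = hashlib.sha1(file_url.encode('utf-8')).hexdigest()
stored_path = f'full/{digest}.pdf'
print(stored_path)  # full/<40-hex-char digest>.pdf
```

The same path is also reported back on each item in the 'files' result field, so you can map downloaded files to their source URLs from the crawl output.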
Answered By - SuperUser