Issue
I have code that posts an HTTP request sequentially for each tuple in a list and appends the response to another list.
import requests

url = 'https://www....'
input_list = [(1, 2), (6, 4), (7, 8)]
output_list = []
for i in input_list:
    resp = (requests.post(url, data=i[1]).text, i[0])
    output_list.append(resp)
print(output_list)
Can someone please point me in the right direction to make these HTTP requests in parallel?
Solution
Since the requests library doesn't support asyncio natively, I'd use multiprocessing.pool.ThreadPool, assuming most of the time is spent waiting for IO. Otherwise it might be beneficial to use multiprocessing.Pool instead.
from multiprocessing.pool import ThreadPool
import requests

url = 'https://www....'
input_list = [(1, 2), (6, 4), (7, 8)]

def get_url(i):
    return (requests.post(url, data=i[1]).text, i[0])

with ThreadPool(10) as pool:  # up to ten requests run in parallel
    output_list = list(pool.map(get_url, input_list))
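To see why this helps, here is a minimal runnable sketch where a hypothetical fake_request function stands in for the HTTP call (it just sleeps to mimic IO latency). It shows two properties you rely on: pool.map returns results in input order, and the waits overlap, so three 0.1 s "requests" finish in roughly 0.1 s rather than 0.3 s.

```python
from multiprocessing.pool import ThreadPool
import time

input_list = [(1, 2), (6, 4), (7, 8)]

def fake_request(i):
    # Hypothetical stand-in for requests.post(...): sleep simulates IO wait.
    time.sleep(0.1)
    return (str(i[1] * 2), i[0])  # fake "response text" paired with i[0]

start = time.time()
with ThreadPool(10) as pool:
    output_list = list(pool.map(fake_request, input_list))
elapsed = time.time() - start

# pool.map preserves input order even though the calls ran concurrently.
print(output_list)        # [('4', 1), ('8', 6), ('16', 7)]
print(elapsed < 0.3)      # True: the three sleeps overlapped
```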
Answered By - Kryštof Vosyka