Issue
I need to send requests in parallel using asyncio. The requests are sent via the function server_time from a library that I can't change. It's a regular function, not a coroutine, so I can't await it, which is the challenge here. I've been trying with the code below, but it obviously doesn't work, for the same reason you can parallelize await asyncio.sleep(1) but not time.sleep(1).
How do I use asyncio to parallelize this? I.e., how can I parallelize something like time.sleep(1) using asyncio?
from pybit import HTTP
import time
import asyncio

session = HTTP('https://api-testnet.bybit.com')

async def latency():
    time_1 = time.perf_counter()
    session.server_time()
    return time.perf_counter() - time_1

async def avg_latency(n_requests):
    total_time = 0
    tasks = []
    for _ in range(n_requests):
        tasks.append(asyncio.create_task(latency()))
    for task in tasks:
        total_time += await task
    return total_time / n_requests

# First one to establish the connection. The latency improves after.
session.server_time()

latency = asyncio.run(avg_latency(10))
print(f'{1000 * latency:.2f} ms')
Solution
You can use run_in_executor to run the synchronous, I/O-bound session.server_time function in parallel:
import asyncio
from pybit import HTTP
import time

session = HTTP('https://api-testnet.bybit.com')

async def latency():
    time_1 = time.perf_counter()
    loop = asyncio.get_running_loop()
    # Run the blocking call in the default thread pool so the event loop stays free
    await loop.run_in_executor(None, session.server_time)
    return time.perf_counter() - time_1

async def avg_latency(n_requests):
    return sum(await asyncio.gather(*[latency() for _ in range(n_requests)])) / n_requests

print(asyncio.run(avg_latency(10)))
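If you are on Python 3.9 or later, asyncio.to_thread is a shorthand for the same pattern: it submits the blocking call to the default thread pool, just like loop.run_in_executor(None, ...). A minimal sketch, assuming the same pybit HTTP session as above:

import asyncio
import time
from pybit import HTTP

session = HTTP('https://api-testnet.bybit.com')

async def latency():
    time_1 = time.perf_counter()
    # asyncio.to_thread (Python 3.9+) runs the blocking call in a worker thread
    # so the event loop can keep scheduling the other requests concurrently.
    await asyncio.to_thread(session.server_time)
    return time.perf_counter() - time_1

async def avg_latency(n_requests):
    times = await asyncio.gather(*[latency() for _ in range(n_requests)])
    return sum(times) / n_requests

print(asyncio.run(avg_latency(10)))

Either way, the calls run in threads rather than being truly asynchronous, but for I/O-bound requests like this the threads spend their time waiting on the network, so they overlap and the average measured latency drops accordingly.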
Answered By - Ajax1234