Issue
It's normal to develop a simple implementation first: for example, we might start with a non-concurrent program and then add concurrency. I'd like to be able to switch smoothly back and forth between the two versions.
For example, single-threaded (pseudocode):
results = []
for url in urls:
    # This then calls other functions that call yet others
    # in a call hierarchy, down to a requests.request() call.
    get_result_and_store_in_database(url)
Async (pseudocode):
# The following calls other functions that call yet others
# in a call hierarchy, down to an asyncio ClientSession().get() call.
# It runs the HTTP requests and stores the results in a database.
# The multiple URLs are processed concurrently.
asyncio.run(get_results_in_parallel_and_store_in_db(urls))
With Python async/await, you usually wrap the run with asyncio.run() (compared to the plain loop you use in an ordinary program); then, at the bottom of the call hierarchy, you use an I/O action like aiohttp.ClientSession().get(url) (compared to the ordinary requests.request()).
But in the async version, every function in the call hierarchy between those two must be written with async/await. So I need to write two copies of what is basically the same call hierarchy, differing mostly in whether they carry the async/await keywords. That is a lot of code duplication.
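To make the shape concrete, here is a minimal sketch of the async hierarchy described above (names like fetch_page, handle_url and process_all are illustrative, not from the question):

import asyncio
import aiohttp

async def fetch_page(session, url):
    # Bottom of the call hierarchy: the actual I/O action.
    async with session.get(url) as response:
        return await response.text()

async def handle_url(session, url):
    # Every intermediate function must also be async, because it awaits.
    body = await fetch_page(session, url)
    return len(body)  # stand-in for "store result in database"

async def process_all(urls):
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(handle_url(session, u) for u in urls))

# Top of the hierarchy: the single asyncio.run() entry point.
results = asyncio.run(process_all(["https://example.com"]))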
How can I write a program that can be switched between a non-concurrent and an async version?
Solution
It's really a big topic, and there is no general answer. I personally have a private WebDAV project that implements both a sync and an async version.
First, my WebDAV client accepts an argument named client, which can be either a requests.Session or an aiohttp.ClientSession, to perform sync or async requests respectively.
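A minimal sketch of what accepting such a client argument might look like (the class and attribute names here are my own illustration, not the project's actual code):

class BaseDAV:
    def __init__(self, base_url, dav_url="/dav/", auth_tuple=None, client=None):
        self._base_url = base_url
        self._dav_url = dav_url
        self._auth_tuple = auth_tuple
        # client is either a requests.Session (sync) or an
        # aiohttp.ClientSession (async); the base class never cares which.
        self._client = client

    def _get_client(self, client=None):
        # Prefer an explicitly passed client, fall back to the stored one.
        return client or self._client

    def _get_auth_tuple(self, auth_tuple=None):
        return auth_tuple or self._auth_tuple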
Second, I have a base class that implements all the common logic, like:
# At module level: from urllib.parse import quote, urljoin
def _perform_dav_request(self, method, auth_tuple=None, client=None, **kwargs):
    auth_tuple = self._get_auth_tuple(auth_tuple)
    client = self._get_client(client)
    data = kwargs.get("data")
    headers = None
    url = None

    path = kwargs.get("path")
    if path:
        root_url = urljoin(self._base_url, self._dav_url)
        url = root_url + path

    from_path = kwargs.get("from_path")
    to_path = kwargs.get("to_path")
    if from_path and to_path:
        # WebDAV COPY/MOVE requests carry the target in a Destination header.
        root_url = urljoin(self._base_url, self._dav_url)
        url = root_url + from_path
        destination = root_url + quote(to_path)
        headers = {
            "Destination": destination
        }

    # requests.Session and aiohttp.ClientSession both accept this call shape.
    return client.request(method, url, data=data, headers=headers, auth=auth_tuple)
The fact is that both requests.Session and aiohttp.ClientSession support almost the same API, so here I can make a single polymorphic call, client.request(...), that works for both.
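For example, the same call shape works on both session types (a sketch, assuming requests and aiohttp are installed; example.com stands in for a real server):

import asyncio
import aiohttp
import requests

def sync_fetch():
    client = requests.Session()
    # requests: the call blocks and returns a Response object.
    return client.request("GET", "https://example.com").status_code

async def async_fetch():
    async with aiohttp.ClientSession() as client:
        # aiohttp: same method name and arguments, but the call must be awaited.
        async with client.request("GET", "https://example.com") as response:
            return response.status

print(sync_fetch())
print(asyncio.run(async_fetch()))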
Third, I have to export different APIs:
# In the async client
async def ls(self, path, auth_tuple=None, client=None):
    response = await self._perform_dav_request("PROPFIND", auth_tuple, client, path=path)
    if response.status == 207:
        return parse_ls(await response.read())
    raise WebDavHTTPError(response.status, await response.read())

# In the sync client
def ls(self, path, auth_tuple=None, client=None):
    response = self._perform_dav_request("PROPFIND", auth_tuple, client, path=path)
    if response.status_code == 207:
        return parse_ls(response.content)
    raise WebDavHTTPError(response.status_code, response.content)
So finally my users can use it as dav = DAV(...) or dav = AsyncDAV(...).
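Usage might then look roughly like this (a hedged sketch; the real DAV and AsyncDAV constructors may take different arguments):

import asyncio
import aiohttp
import requests

# Sync flavour: hand the client a requests.Session.
dav = DAV("https://dav.example.com", client=requests.Session())
print(dav.ls("/photos/"))

# Async flavour: hand the client an aiohttp.ClientSession inside a coroutine.
async def main():
    async with aiohttp.ClientSession() as session:
        adav = AsyncDAV("https://dav.example.com", client=session)
        print(await adav.ls("/photos/"))

asyncio.run(main())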
This is how I deal with the two different versions. The idea is that you can pass the coroutines up through the function calls and only await them at the outermost level, so you only need to write different code in that thin outer layer while all the other levels share the same logic.
Answered By - Sraw