Issue
For instance, I run 4 scripts sequentially:
%run -i script1.py
%run -i script2.py
%run -i script3.py
%run -i script4.py
The execution time of each one is quite long. Is there any way in the IPython notebook to run the scripts in parallel and get back the local variables from all of them (only 2 or 3 variables are important)? Sequential execution works fine but takes too long. Thank you in advance.
I've tried to apply the code from this topic but got stuck on the first part:
def my_func(my_file):
!python pgm.py my_file
or in my case:
def my_func(my_file):
%run -i $my_file
I can see that the code executes, but afterwards I cannot see the local variables from these scripts.
Solution
Let's assume you have started 4 engines and will send one of your 4 scripts to each.
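If the engines are not running yet, one common way to get 4 of them (assuming a local cluster) is to start them from a terminal before opening the notebook:
ipcluster start -n 4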
After doing
from IPython import parallel  # on newer setups: import ipyparallel as parallel
rc = parallel.Client()
view = rc.load_balanced_view()
r = view.map_async(my_func, ["script1.py", "script2.py", "script3.py", "script4.py"])
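For pull to find the variables afterwards, my_func has to execute each script inside the engine's own namespace rather than in a separate process. Here is a minimal sketch of what such a my_func could look like; the exec-based body is an assumption of mine, not part of the original answer:
def my_func(my_file):
    # Run the script in the engine's global namespace so that its
    # variables (e.g. a and b) remain available to pull later.
    with open(my_file) as f:
        exec(f.read(), globals())
    return my_file  # just to signal which file this engine ran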
Once it's finished, you can access, let's say, variables a and b with pull:
var = rc[:].pull(["a","b"])
var.result # or var.r or var.result_dict
[[10, 12020.2], [11, 14], [1, 0], [1, 14425]]
These correspond to the values of a and b after the run of each script.
So in script1.py, at the end, you have a == 10 and b == 12020.2.
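One detail glossed over above: make sure all four runs have actually finished before pulling. A small sketch, assuming the r returned by map_async earlier:
# Block until all four scripts have finished, then pull from every engine,
# pairing each engine id with its [a, b] values.
r.wait()
results = dict(zip(rc.ids, rc[:].pull(["a", "b"], block=True)))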
Hope this helps.
By the way, I edited the link you refer to a bit; there was a small mistake:
def my_func(my_file):
!python pgm.py my_file
should be:
def my_func(my_file):
!python pgm.py {my_file}
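The curly braces matter because IPython only substitutes Python variables into ! shell commands when they are wrapped in {} (or prefixed with $); a bare name is passed to the shell literally. A quick illustration:
# In an IPython session:
my_file = "script1.py"
!echo {my_file}   # runs: echo script1.py
!echo my_file     # runs: echo my_file  (the literal text)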
Answered By - jrjc