Issue
I'm trying to simulate a system of ODEs. While integrating with odeint, I need to increase the current value of certain variables by some factor at specific time points.
I tried the following, but noticed that the time values passed to my function are floating-point. That makes it difficult to write an if-condition for adding a value to the state being integrated further in the process.
Below is the problem case. Please help me out with this.
def myfunc(s, t):
    # whenever the time is an even day, increase the derivative by 2
    if t % 2 == 0:
        addition = 2
    else:
        addition = 0
    dsdt = (2*s + 8) + addition
    return dsdt
Problem: The incoming time (t) in the function is a floating-point number, which prevents me from applying an if-condition at specific discrete even values of 't'.
Detailed description:
(a) I define a time-span vector, Tspan = np.linspace(1,100,100), and an initial condition s0 = [3].
(b) When I run odeint(myfunc, s0, Tspan), I need to update the incoming 's' variable by some factor, but only at certain time points (say, t = 25, 50, 75).
(c) But if I place print(t) inside myfunc(s,t), I can see that the incoming 't' is a float.
(d) One important note: myfunc is called more times than there are time steps in the Tspan vector, which is why the runtime 't' takes floating-point values.
(e) Because of this, if I try something like "if ceil(t) % 25 == 0" (or with round), the same integer is returned for the next 4 to 5 function calls (since several function calls happen between two subsequent time points). As a result, an update of 's' conditioned on the ceiled 't' is performed for 4 to 5 subsequent calls instead of once at a specific time point, and this should be avoided.
I hope my problem is clear. Please help me out if you can. Thanks, folks!
Solution
All "professional" solvers use an internal adaptive step size. The step size for the output has no or minimal influence on this. Each method step uses multiple evaluations of the ODE function. Depending on the output sampling frequency, you can have multiple internal steps per output step, or multiple output steps get interpolated from the same internal step.
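This internal behavior is easy to observe. The following small sketch (assuming a simple linear ODE, not the asker's actual system) records every time value at which odeint evaluates the right-hand side; the solver calls it at many more, irregularly spaced floating-point times than the requested output grid contains:

```python
import numpy as np
from scipy.integrate import odeint

calls = []  # every time value at which the solver evaluates the RHS

def rhs(s, t):
    calls.append(t)
    return 2*s + 8

t_out = np.linspace(0, 10, 11)   # 11 requested output times
sol = odeint(rhs, [3.0], t_out)

# The solver makes more RHS evaluations than there are output points,
# at internal float times that need not coincide with t_out.
print(len(calls), len(t_out))
```

This is why conditioning on exact values of t inside the ODE function is unreliable: the function is not called once per output time, and the internal times are not under your control.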
The mechanism you describe as desired is different from your example code: the example code changes a parameter of the ODE, while the description amounts to a state change. The first can be done with deleterious but recoverable effects on the step size controller; the second requires an event-action mechanism with a state-changing action, and such a mechanism is not implemented in any of the scipy solvers.
The best way is to solve for the segments between the changes, perform the state change at the end of each segment and restart the integration with the new state. Use array concatenation on the time and value segments to get the large solution function table.
t1 = np.linspace(0, 25, 25+1)
u10 = u0
u1 = odeint(fun, u10, t1)

t2 = t1 + 25  # or a more specific construction for non-equal interval lengths
u20 = 3*u1[-1]  # or some other change of the last state u1[-1]
u2 = odeint(fun, u20, t2)

t3 = t2 + 25
u30 = u2[-1].copy()
u30[0] -= 5  # or whatever the actual state change was supposed to be
u3 = odeint(fun, u30, t3)

# combine the segments for plotting; gives vertical lines at the event locations
t = np.concatenate([t1, t2, t3])
u = np.concatenate([u1, u2, u3])
For more segments it is better to organize this in a loop, especially if the state change at the event locations follows a uniform procedure depending on a few parameters.
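As a sketch of such a loop (the ODE `fun`, the event times, and the `apply_change` rule below are placeholders; substitute the actual right-hand side and state change):

```python
import numpy as np
from scipy.integrate import odeint

def fun(u, t):
    return 2*u + 8  # placeholder ODE; replace with the actual right-hand side

def apply_change(u_last):
    return 3*u_last  # hypothetical state change at each event; adapt as needed

u0 = np.array([3.0])
events = [25.0, 50.0, 75.0, 100.0]  # end point of each segment

ts, us = [], []
t_start, u_seg0 = 0.0, u0
for t_end in events:
    t_seg = np.linspace(t_start, t_end, int(t_end - t_start) + 1)
    u_seg = odeint(fun, u_seg0, t_seg)
    ts.append(t_seg)
    us.append(u_seg)
    u_seg0 = apply_change(u_seg[-1])  # restart the next segment from the modified state
    t_start = t_end

t = np.concatenate(ts)
u = np.concatenate(us)
```

Note that each event time appears twice in the concatenated arrays (once as the end of a segment, once as the start of the next), which is what produces the vertical jump lines when plotting.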
Answered By - Lutz Lehmann