Issue
Consider the example:
>>> from sys import maxint
>>> type(maxint)
<type 'int'>
>>> print maxint
9223372036854775807
>>> type(maxint+2)
<type 'long'>
>>> print maxint+2
9223372036854775809
>>> type((maxint+2)+maxint)
<type 'long'>
>>> print ((maxint+2)+maxint)
18446744073709551616
Python will auto-promote from an int, which in this case is a 64-bit integer value (OS X, Python 2.6.1), to a Python long, which has arbitrary precision. Even though the types are not the same, they are similar, and Python allows the usual numeric operators to be used between them. Usually this is helpful, for example when running code that expects 64-bit values on a 32-bit machine.
However, the arbitrary-precision operations are vastly slower than the native int operations. For example:
>>> print maxint**maxint # takes so long to execute that it is essentially a hang
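To get a feel for the gap on operations that do finish, you can time small-int versus long arithmetic yourself. A minimal sketch with timeit (the operand sizes are arbitrary illustrative choices, and absolute numbers vary by machine, so none are shown):
>>> import timeit
>>> timeit.timeit('x*x', setup='x = 2**30')   # fits in a native int on a 64-bit build
>>> timeit.timeit('x*x', setup='x = 2**100')  # a long; takes the arbitrary-precision path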
Is there a way to defeat or disallow the auto-promotion of a Python int to a Python long?
Solution
So you want to throw out the One True Way and go retro on overflows. Silly you.
There is no good upside to the C / C++ / C# / Java style of overflow. It does not reliably raise an error condition: signed overflow is undefined behavior in C and C++ (only unsigned arithmetic is defined to wrap modulo 2^n), Java silently wraps, and C# only throws in a checked context. It is also a known security risk. Why do you want this?
The Python method of seamlessly overflowing to a long is the better way. I believe Perl 6 is adopting the same behavior.
You can use the decimal module to get stricter, configurable overflow behavior:
>>> from decimal import *
>>> from sys import maxint
>>> getcontext()
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999999, Emax=999999999, capitals=1,
flags=[], traps=[DivisionByZero, Overflow, InvalidOperation])
>>> d=Decimal(maxint)
>>> d
Decimal('9223372036854775807')
>>> e=Decimal(maxint)
>>> f=d**e
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/decimal.py", line 2225, in __pow__
ans = ans._fix(context)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/decimal.py", line 1589, in _fix
return context._raise_error(Overflow, 'above Emax', self._sign)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/decimal.py", line 3680, in _raise_error
raise error(explanation)
decimal.Overflow: above Emax
You can set your precision and boundary conditions with Decimal contexts, and the overflow is raised almost immediately. You can choose which conditions to trap, and you can set your own max and min. Really -- how does it get better than this? (I don't know about relative speed, to be honest, but I suspect it is faster than numpy yet obviously slower than native ints.)
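For instance, a context with a tiny Emax trips the Overflow trap right away (a sketch; the bounds here are arbitrary illustrative values, not anything special):
>>> from decimal import Decimal, Context, Overflow
>>> ctx = Context(prec=28, Emax=18, Emin=-18, traps=[Overflow])
>>> ctx.multiply(Decimal('1e17'), Decimal(100))  # result would be 1E+19, past Emax
Traceback (most recent call last):
  ...
decimal.Overflow: above Emax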
For your specific issue of image processing, some form of saturation arithmetic sounds like a natural fit. If you are hitting overflows in 32-bit arithmetic, you might also check operands along the way in the obvious cases: pow, **, and *. Or you could use overloaded operators and check for the conditions you don't want, as in the sketch below.
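A minimal sketch of that last idea, clamping to 32-bit signed bounds instead of promoting (the class name and the clamp-on-overflow policy are my own choices, not anything standard):
INT_MIN, INT_MAX = -2**31, 2**31 - 1

class SatInt(object):
    """A 32-bit signed integer with saturating + and *."""
    def __init__(self, value):
        # clamp into the 32-bit range on construction and after every operation
        self.value = max(INT_MIN, min(INT_MAX, int(value)))
    def __int__(self):
        return self.value
    def __add__(self, other):
        return SatInt(self.value + int(other))
    __radd__ = __add__
    def __mul__(self, other):
        return SatInt(self.value * int(other))
    __rmul__ = __mul__
    def __repr__(self):
        return 'SatInt(%d)' % self.value

>>> SatInt(2000000000) + 2000000000
SatInt(2147483647)
Note that the intermediate sum still promotes to a long; the point is that the promotion is what lets you detect the overflow and clamp (or raise) before it escapes.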
If Decimal, saturation, or overloaded operators don't work, you can write a C extension. Heaven help you if you want to throw out the Python way of overflow to go retro...
Answered By - the wolf