By declaring a large enough integer range, all 32-bit signed integers appear to be representable except for the most negative integer, -2147483648 (i.e., -2^31). If I declare int[-2147483648,2147483647] x; then:

(a) the assignment x = -2147483648 actually results in x == -2147483647 in the simulator;
(b) the assignment x = -2147483647 followed by an edge with x = x-1 makes the simulator complain that the successors are not well-defined;
(c) the assignment x = -2147483647 followed by an edge with x = x-2 does wrap around correctly according to 32-bit arithmetic, leaving x == 2147483647 in the simulator.

For faithful modeling it would be nice to have the entire 32-bit integer range available, even if any particular simulation only uses a small subset of it. If this is not a bug but an implementation decision, it would be nice if the allowable integer range were documented. (I looked, but only found the default 16-bit range mentioned in the help file.)
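For reference, a minimal C sketch (not the UPPAAL model itself; it only assumes that 32-bit two's-complement arithmetic is the intended semantics) of the behaviour described above: -2^31 is a representable 32-bit value, and the wrap-around in case (c) is exactly what 32-bit two's-complement subtraction yields.

    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        /* -2^31 is a representable 32-bit two's-complement value,
           so a faithful int[-2147483648,2147483647] should admit it. */
        int32_t min = INT32_MIN;                      /* -2147483648 */
        printf("%" PRId32 "\n", min);

        /* Case (c): -2147483647 - 2 wraps around to 2147483647 under
           32-bit two's-complement arithmetic (computed via unsigned
           arithmetic here so the C itself stays well-defined). */
        uint32_t wrapped = (uint32_t)(-2147483647) - 2u;
        printf("%" PRId32 "\n", (int32_t)wrapped);    /* prints 2147483647 */
        return 0;
    }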
This is caused by the lexer, which only recognizes (non-negative) natural numbers: -2147483648 is therefore parsed as unary minus applied to the constant 2147483648, which does not fit in a 32-bit signed integer.
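The same parsing pitfall exists in C, which may make the cause clearer: the minus sign is an operator rather than part of the numeric token, so the constant is read before the negation is applied. This is why <limits.h> implementations typically define INT_MIN as an expression rather than a decimal literal. A small illustrative sketch (C, for analogy only):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* "-2147483648" is not a single token: the constant 2147483648 is
           read first, and it does not fit in a 32-bit int, so it gets a
           wider (or, in C90, unsigned) type before the unary minus is
           applied.  Hence <limits.h> typically defines INT_MIN as an
           expression rather than a literal: */
        int min = -2147483647 - 1;   /* equals INT_MIN where int is 32 bits */
        printf("%d %d\n", min, INT_MIN);
        return 0;
    }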
Fixed in rev. 4537 for 4.0 and rev. 4536 in the trunk. Constant overflows are now also detected.
*** Bug 471 has been marked as a duplicate of this bug. ***