
Bug 488

Summary: A missing 32-bit signed integer
Product: UPPAAL
Reporter: Christopher A. Stone <stone>
Component: libutap
Assignee: Alexandre David <adavid>
Status: RESOLVED FIXED
Severity: minor
CC: marius
Priority: P2
Version: unspecified
Hardware: Macintosh
OS: Mac OS
Architecture:
Bug Depends on:
Bug Blocks: 471

Description Christopher A. Stone 2010-04-30 19:05:58 CEST
With a large enough integer range declared, all 32-bit signed integers appear to be representable except the most negative one, -2147483648 (i.e., -2^31).

If I declare
  int[-2147483648,2147483647] x;

then

(a) the assignment
         x = -2147483648
      actually causes
         x == -2147483647
      in the simulator.

(b) the assignment 
         x = -2147483647
      followed by an edge
         x = x-1
      causes the simulator to complain about the successors
      not being well-defined.

(c) the assignment
         x = -2147483647
      followed by an edge
         x = x-2
      does wrap around properly according to 32-bit arithmetic, 
      leading to  
         x == 2147483647
      in the simulator.
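
For reference, the wraparound in (c) is ordinary two's-complement arithmetic: -2147483647 - 2 = -2147483649, and -2147483649 + 2^32 = 2147483647. A minimal C sketch (illustrative only; the report does not show the simulator's actual arithmetic) reproduces the same reduction:

  #include <stdint.h>
  #include <stdio.h>

  /* Reduce a 64-bit value to a 32-bit two's-complement result,
     i.e. take it modulo 2^32 and reinterpret as signed. */
  static int32_t wrap32(int64_t v)
  {
      return (int32_t)(uint32_t)v;
  }

  int main(void)
  {
      int64_t x = -2147483647;
      printf("%d\n", (int)wrap32(x - 2));  /* prints 2147483647 */
      return 0;
  }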

For faithful modeling it would be nice to have the entire 32-bit integer range available, even if any particular simulation only uses a small subset of this range.  

If this is not a bug but an implementation decision, it would be nice if the allowable integers were documented.  (I looked, but only found the default 16-bit range mentioned in the help file.)
Comment 1 Alexandre David 2010-05-25 16:33:11 CEST
This is caused by the lexer, which only recognizes (non-negative) natural numbers.
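
In other words, a source-level constant like -2147483648 reaches the parser as unary minus applied to the non-negative literal 2147483648, which by itself exceeds INT_MAX. The following C sketch is hypothetical (the saturating parse is an assumption that would be consistent with symptom (a), not a description of libutap's actual code):

  #include <limits.h>
  #include <stdio.h>

  /* Hypothetical: parse a non-negative decimal literal, saturating
     at INT_MAX instead of overflowing. */
  static int parse_nonneg(const char *s)
  {
      long long v = 0;
      for (; *s; ++s) {
          v = v * 10 + (*s - '0');
          if (v > INT_MAX)
              return INT_MAX;
      }
      return (int)v;
  }

  int main(void)
  {
      /* "x = -2147483648" lexes as MINUS then the literal 2147483648;
         saturation yields 2147483647, and negation gives -2147483647,
         exactly the value seen in symptom (a). */
      int x = -parse_nonneg("2147483648");
      printf("%d\n", x);  /* prints -2147483647 */
      return 0;
  }
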
Comment 2 Alexandre David 2010-05-25 16:50:07 CEST
Fixed in rev. 4537 for 4.0 and 4536 in the trunk.
Now constant overflows are also detected.
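
For context, a common way to fix this class of bug (a sketch under assumptions; the actual change in rev. 4537/4536 is not shown here) is to evaluate literals in 64-bit arithmetic, apply any unary minus, and only then range-check against the 32-bit bounds, so that -2147483648 is accepted while a bare 2147483648 is reported as an overflow:

  #include <stdint.h>
  #include <stdio.h>

  /* Sketch: evaluate an optionally negated decimal literal in 64 bits,
     then range-check against the 32-bit signed bounds. Returns 1 on
     success, 0 when a constant overflow is detected. */
  static int check_literal(int negated, const char *digits, int32_t *out)
  {
      const int64_t cap = (int64_t)1 << 32;  /* clamp so v never overflows */
      int64_t v = 0;
      for (; *digits; ++digits) {
          v = v * 10 + (*digits - '0');
          if (v > cap)
              v = cap;
      }
      if (negated)
          v = -v;
      if (v < INT32_MIN || v > INT32_MAX)
          return 0;  /* constant overflow detected */
      *out = (int32_t)v;
      return 1;
  }

  int main(void)
  {
      int32_t x;
      printf("%d\n", check_literal(1, "2147483648", &x));  /* 1: -2^31 fits */
      printf("%d\n", check_literal(0, "2147483648", &x));  /* 0: overflow */
      return 0;
  }
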
Comment 3 Marius Mikučionis 2010-05-25 17:06:03 CEST
*** Bug 471 has been marked as a duplicate of this bug. ***