Previously, if a JSON file contained a string with a hexadecimal Unicode
escape sequence, e.g. "\u0001", the JSON parser would discard the "\u"
part and store the string as "0001". This commit fixes the parser so that
the resulting string is equal to "\u0001".
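A minimal sketch of the decoding step, assuming a hypothetical helper name
and code points below 0x80 (the real parser must additionally handle full
UTF-8 encoding and surrogate pairs):

    #include <string>

    // Decode the four hex digits of a \uXXXX escape into the denoted
    // character, instead of keeping the digits as literal text.
    std::string decode_unicode_escape(const std::string &hex4)
    {
      const unsigned long code_point = std::stoul(hex4, nullptr, 16);
      if(code_point >= 0x80)
        return "?"; // sketch only: restrict to ASCII-range code points
      return std::string(1, static_cast<char>(code_point));
    }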
This adds sharing map unit tests to check that operations fail as expected. For
example, calling map.replace(key, value) when the key does not exist in the map
should fail.
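A minimal sketch of the shape of such a test, assuming a Catch-style setup;
the include paths, the sharing_mapt class name and instantiation, and the
exact failure mechanism are assumptions:

    #include <catch2/catch.hpp> // assumed Catch include path

    #include <util/sharing_map.h> // assumed header location

    TEST_CASE("replace requires an existing key", "[core][sharing_map]")
    {
      sharing_mapt<irep_idt, irep_idt> map;

      REQUIRE(!map.has_key("key"));
      // calling map.replace("key", ...) here is expected to fail, e.g. by
      // tripping a precondition check
    }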
This adds unit tests to check that references into the sharing map held by the
views and delta views remain valid after the operations erase(), insert(), and
replace(). References to elements that are not changed by the respective
operation should remain valid.
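A hedged sketch of one such reference-stability check (again with assumed
include paths and class name; a find() that yields a reference-like handle is
an assumption about the API):

    #include <catch2/catch.hpp> // assumed Catch include path

    #include <util/sharing_map.h> // assumed header location

    TEST_CASE("references survive unrelated erase", "[core][sharing_map]")
    {
      sharing_mapt<irep_idt, irep_idt> map;
      map.insert("a", "1");
      map.insert("b", "2");

      const auto handle = map.find("a"); // reference into the map

      map.erase("b"); // does not change the element "a"

      // the handle obtained before the erase should still be valid and
      // still denote the element "a"
    }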
This permits an in-place update, avoiding needless copy-out / mutate / move-in cycles for
expensive-to-copy value types without leaking a non-const reference to a value.
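A generic sketch of the pattern, with std::map standing in for the sharing
map (the update() name and signature are assumptions about the API shape):

    #include <map>
    #include <string>
    #include <vector>

    // The mutator runs on the stored value directly: no copy out, no move
    // back in, and no non-const reference escapes to the caller.
    template <typename keyt, typename valuet, typename mutatort>
    void update(std::map<keyt, valuet> &map, const keyt &key, mutatort mutator)
    {
      mutator(map.at(key));
    }

    int main()
    {
      std::map<std::string, std::vector<int>> m{{"xs", {1, 2, 3}}};
      update(m, std::string("xs"), [](std::vector<int> &v) { v.push_back(4); });
    }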
These were accidentally disabled when distinguishing ID_is_dynamic_object (a predicate that tests
whether an object is dynamic) from ID_dynamic_object (a reference to the object itself, similar to
symbol_exprt). I also take the opportunity to restore pretty-printing of dynamic object expressions
(while also keeping pretty-printing of the predicate).
Previously we fixed the extracted bytes to be unsigned bitvectors, but we
should not impose (un)signedness, as we do not interpret the bytes as
numeric values. This fixes byte operators over floating-point values, and
makes various SMT-solver tests pass: the SMT back-end is more strict about
typing and was therefore more frequently affected by this bug.
To make all this work it was also necessary to extend and fix the
simplifier's handling of bv_typet expressions, and to cover one more case
of type casts in the bitvector back-end.
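A minimal sketch of the typing distinction (the header location varies
across versions and is an assumption here):

    #include <util/std_types.h> // bv_typet, unsignedbv_typet; path may differ

    // A signless bit-vector of 8 bits: no numeric interpretation attached.
    const bv_typet byte_type(8);

    // The type previously used, which wrongly imposed unsignedness.
    const unsignedbv_typet old_byte_type(8);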
The tests
Array_operations1/test.desc
Float-equality1/test_no_equality.desc
memory_allocation1/test.desc
union12/test.desc
union6/test.desc
union7/test.desc
continue to fail on Windows and thus cannot yet be enabled.
check_for_gdb() could only ever return true: if the gdb invocation in its body
failed, a REQUIRE(...) in its body would fail first. This changes the return
type of check_for_gdb() to void and refactors its callers accordingly.
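A minimal sketch of the resulting shape (the body and the
run_gdb_version_check helper are hypothetical):

    #include <catch2/catch.hpp> // assumed Catch include path

    bool run_gdb_version_check(); // hypothetical probe, e.g. runs gdb --version

    // A failed gdb invocation already trips REQUIRE, so the boolean result
    // carried no information; the function can return void.
    void check_for_gdb()
    {
      const bool gdb_found = run_gdb_version_check();
      REQUIRE(gdb_found);
    }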
Previously, when sharing_map.erase(key) was called, the path to the leaf to be
erased was traversed twice: once to check whether the key was in the map, and,
if it was, a second time to copy and detach the nodes on the path to the leaf.
This commit changes erase() to require that the given key exists in the map.
This simplifies the implementation and avoids traversing the path twice when
the key is known to exist. If it is not known whether the key exists,
sharing_map.has_key(key) should be called explicitly first.
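A small usage sketch of the new contract, with std::map standing in for the
sharing map (count() playing the role of has_key()):

    #include <map>
    #include <string>

    // When it is not known whether the key is present, check explicitly
    // first; erase() itself now assumes the key exists.
    void erase_if_present(std::map<std::string, int> &map, const std::string &key)
    {
      if(map.count(key) != 0) // corresponds to sharing_map.has_key(key)
        map.erase(key);
    }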
The data member and the write_* methods of sharing_node_innert and
sharing_node_leaft are made protected, and existing external callers are
refactored to no longer use write_* directly.
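A hedged sketch of the encapsulation change (only the member names come from
the message; the class shape is assumed):

    #include <memory>

    template <typename datat>
    class sharing_node_baset
    {
    protected:
      std::shared_ptr<datat> data; // no longer publicly accessible

      // copy-on-write accessor, now only available within the hierarchy
      datat &write()
      {
        if(data.use_count() > 1)
          data = std::make_shared<datat>(*data); // detach before writing
        return *data;
      }
    };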
This adds a reset() method which clears the contents of the shared pointer.
Furthermore, the code that removes a reference to the pointed-to object is
factored out into a destruct() method, which is used both by the destructor
and by reset().
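A minimal sketch of the idea with a hypothetical ref-counted pointer (not
the actual class): destruct() removes one reference to the pointed-to
object and frees it when no references remain, and is shared by the
destructor and reset().

    #include <cstddef>

    template <typename T>
    class shared_ptrt
    {
    public:
      explicit shared_ptrt(T *p) : data(p), use_count(new std::size_t(1))
      {
      }

      shared_ptrt(const shared_ptrt &other)
        : data(other.data), use_count(other.use_count)
      {
        if(use_count != nullptr)
          ++*use_count;
      }

      shared_ptrt &operator=(const shared_ptrt &) = delete; // kept minimal

      ~shared_ptrt()
      {
        destruct();
      }

      // clear the contents of this pointer; it becomes empty
      void reset()
      {
        destruct();
        data = nullptr;
        use_count = nullptr;
      }

    private:
      // factored-out reference removal, used by the destructor and reset()
      void destruct()
      {
        if(use_count != nullptr && --*use_count == 0)
        {
          delete data;
          delete use_count;
        }
      }

      T *data;
      std::size_t *use_count;
    };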
Applying CBMC to large code bases sometimes requires modelling a test
environment. Running a program up to a certain point and letting it crash
makes it possible to analyze the memory state at that point in time. The
memory state can then be reconstructed as the basis for the test environment
model. By using gdb to analyze the core dump, I do not have to take care of
reading and interpreting the core dump myself.
When passing `assume(symbol == constant)` or `if symbol == constant then GOTO`, we can populate the
constant propagator and value-set accordingly and use that information until the next merge point
with a path lacking that constraint. We implement this by allocating and defining a fresh L2
generation on this path, which will be merged just as "real", assignment-derived generations are.
Symbols are subject to propagation under the same conditions as they are on assignment (e.g.
requiring that they are not subject to concurrent modification by other threads).
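An illustrative input program for this behavior (a CBMC-style harness;
nondet_int follows the usual nondeterminism convention):

    int nondet_int();

    int main()
    {
      int x = nondet_int();
      __CPROVER_assume(x == 42);
      // on this path, x can be propagated as the constant 42 until the
      // next merge point with a path lacking the constraint
      int y = x + 1;
      __CPROVER_assert(y == 43, "propagated constant");
      return 0;
    }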
The previous set-up failed to compile (as cudd.h was not found), and
initial fixes to make it compile and link resulted in persistent
segmentation faults. These were caused by inconsistent includes, as
HAVE_CUDD was only set in selected directories (unlike in the CMake
configuration).
Instead, the result of from_expr and the input of as_expr should be
BDDs. This makes it possible to reuse the same manager for several exprt
conversions and to combine the results obtained from the from_expr
conversion with BDD operations.
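A small sketch of the manager-reuse idea using the CUDD C++ interface
directly (here bddVar stands in for the BDDs a from_expr conversion would
return):

    #include <cuddObj.hh> // CUDD C++ interface

    int main()
    {
      Cudd mgr; // one manager shared across conversions

      // stand-ins for the results of two from_expr conversions
      BDD x = mgr.bddVar(0);
      BDD y = mgr.bddVar(1);

      BDD both = x & y; // combine results with a BDD operation

      return both.IsZero() ? 1 : 0;
    }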