Study the Alarms

Understand and Sort the Alarms

The list of all the existing alarms is given in Value Analysis Alarms.

Most of the time, understanding the meaning of alarms is relatively easy, since the generated assertions, messages, and warnings tend to be quite clear. What requires more effort is determining whether a given alarm is relevant: whether the problem it indicates can actually happen or not. If an alarm is false (which is in fact the most frequent case), the aim is to get rid of it: convince the tool that the corresponding problem cannot occur, so that the alarm stops being emitted. Finding out exactly where an alarm comes from is essential to this end.

False alarms are often the result of too coarse an approximation in the analysis. It is recommended to treat the alarms starting from the first one, in order to detect any imprecision as soon as possible.

The list of the generated assertions can easily be extracted from the properties.csv file (see Information about the Properties). Then, for instance, these assertions can be counted in order to track the evolution of their total number over the course of the work. (Note, however, that this particular measure is not necessarily very pertinent, because the relation between problems and emitted alarms is not one-to-one. Losing precision at one point of the analysis can lead to several alarms that have the same origin. Moreover, solving one problem may cause many seemingly unrelated new alarms, as several problems might have been hiding behind the solved one.)
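For instance, a rough count can be obtained with standard shell tools (a sketch, assuming that each emitted alarm appears as one line of properties.csv with an Unknown status, as unproven properties usually do):

$ grep -c "Unknown" properties.csv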

The GUI is a good place to start studying the alarms by exploring the data values. As said before, the list of all the properties discovered during the analysis can be found in the Properties of the GUI, and a button there allows selecting only the alarms among all the properties. To start investigating, sort the alarms by their emission rank and begin with the first emitted one.

About the Value Analysis

Understanding better how the value analysis works, and how to tune its options, helps greatly in dealing with the alarms.

Value analysis uses abstract interpretation techniques that propagate the information forward through the analyzed application’s control flow in order to compute states at each reachable program point. A state is an over-approximation of all the possible values that the variables can hold at a given program point. You can imagine a state as a mapping between the variables and sets of values (keep in mind though that in reality it is a little more complex than that). For instance, the sets of values of integer variables can be represented by integer intervals. For a detailed description of the representation of values, see Value Analysis Data Representation.

See Tune the Precision for explanations concerning tuning the precision level with the -slevel option. The precision level is related to the number of states that can be stored for each program point. The smaller this number is, the coarser the approximation, as more computed states are merged together.

Example:

  //@ assert 0 < x < 10;
  if (x < 5)
     y = 5;
  else
     y = 10;
L:

Computing a unique state at label L only gives that x ∈ [1..9] and y ∈ [5..10]. But if the slevel is larger, then two states can be stored at L, giving exactly that either y == 5 when x ∈ [1..4] or y == 10 when x ∈ [5..9].

Notice that the assertion assert 0 < x < 10 above reduces the possible values of x. It is important to remember that this works in the same way for the assertions automatically generated from alarms. For instance, if a statement a = b / c; is reached with c ∈ [0..100], an alarm is emitted in the form of an appropriate assertion:

/*@ assert Value: division_by_zero: c ≢ 0; */

Then, the analysis continues in a context where c ∈ [1..100].
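As a minimal self-contained sketch of this behavior (the variable names are illustrative):

#include <tis_builtin.h>

int main(void)
{
    int b = 42;
    int c = tis_interval(0, 100); /* c ∈ [0..100] */
    int a = b / c;  /* emits: assert Value: division_by_zero: c ≢ 0; */
    /* The analysis continues assuming the assertion holds,
       so from this point on c ∈ [1..100]. */
    tis_show_each_c(c);
    return a;
}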

Propagate values separately

Besides conditions in the code, which automatically split the states as above (when there is enough slevel), some builtins are available to generate more than one state.

The builtin tis_variable_split (a, sz, n); splits the state on the data whose address is a and whose size is sz, if it holds fewer than n values. For instance, if f returns x ∈ [1..5] in the following code, the call to tis_variable_split generates five states, one for each value of x:

int x = f();
tis_variable_split (&x, sizeof(x), 10);

Moreover, the builtin tis_interval_split(l, u) does the same thing as tis_interval(l, u), but it also automatically causes the individual values between l and u inclusive to be propagated separately. The slevel option must then be set high enough to keep the precision of the analysis.

In the following example, since all values of n are propagated separately, the analyzer is able to guarantee that the x_pos assertion holds.

#include <tis_builtin.h>

int G;

void foo(int n)
{
    int x = G;
    /*@ assert x_pos: 1 <= x <= 10; */
}

int main(void)
{
    int n = tis_interval_split(-10, 10);
    G = n;
    if (n > 0)
        foo(n + 5);
    return 0;
}

Remove Alarms by Tuning the Analysis

Tuning the -slevel and -slevel-function options to control the precision of the analysis, using the information obtained from the -tis option’s output, has already been explained in Tune the Precision.

In the GUI, statements where the precision level (i.e. the slevel value) has been reached are indicated by a colored margin in source view windows (see Interactive Code Panel). Moreover, statements can be sorted by this reached value in the statements window (Statements).

The analysis can also become imprecise for reasons other than reaching the defined precision level. This is especially the case when the log trace includes messages about garbled mix values. If such messages appear, it is very likely that the analysis will not produce any interesting results.

Tip

The analysis can be stopped automatically with the command line option -val-stop-at-nth-garbled.

This option can also be set from the GUI.
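For instance, assuming the option takes the occurrence number as its argument, like -val-stop-at-nth-alarm (myfile.c being a placeholder):

$ tis-analyzer -val -val-stop-at-nth-garbled 1 myfile.c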

The analysis can also be stopped when it reaches the maximum memory consumption set (in GiB) in the environment variable TIS_KERNEL_MAX_MEM. If TIS_KERNEL_MAX_MEM is set, the analyzer becomes more conservative in its use of memory when it reaches TIS_KERNEL_MAX_MEM/2 GiB of memory, and the analysis degenerates when it reaches TIS_KERNEL_MAX_MEM GiB of memory. On a single-user TIS Box with 64 GiB of memory, a good value for this variable is 62.
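For instance, on such a machine, the variable can be set directly on the command line (myfile.c being a placeholder):

$ TIS_KERNEL_MAX_MEM=62 tis-analyzer -val myfile.c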

There are several other options that can be used to fine tune the value analysis.

Some of them work similarly to the -slevel option, i.e. they control the maximum number of states that can be kept separated. For instance:

  • -val-split-return-function f:n: split the return states of function f according to \result == n and \result != n (see the sketch after this list);

  • -val-split-return auto: automatically split the states at the end of each function according to the function return code;

  • -val-split-return-function f:full: keep all the computed states at the end of function f in the callers;

  • -val-slevel-merge-after-loop <f | @all>: when set, the different execution paths that originate from the body of a loop are merged before entering the next iteration.

    The default behavior is set to all functions (-val-slevel-merge-after-loop=@all). It can be removed for some functions (-val-slevel-merge-after-loop=@all,-f), and deactivated for all functions (-val-slevel-merge-after-loop=-@all), and activated for some (-val-slevel-merge-after-loop=-@all,+f).

  • -wlevel n: do n loop iterations before widening.
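As an illustration of return-state splitting, here is a minimal sketch (the function get_value, the global err, and the chosen option value are illustrative):

#include <tis_builtin.h>

int err; /* set when get_value fails */

int get_value(void)
{
    if (tis_interval(0, 1)) { /* nondeterministic failure */
        err = 1;
        return -1;
    }
    err = 0;
    return 0;
}

int main(void)
{
    int r = get_value();
    /* Without splitting, the caller sees one merged state where
       r ∈ {-1; 0} and err ∈ {0; 1}: the correlation is lost.
       With -val-split-return-function get_value:-1 (or with
       -val-split-return auto) and enough slevel, the state where
       r == -1 && err == 1 is kept separate from the one where
       r == 0 && err == 0. */
    if (r == 0)
        tis_show_each_ok(err); /* precise with the split: err ∈ {0} */
    return 0;
}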

Some other options can be used to control the precision of the representation of a value (rather than the number of states). For instance:

  • -val-ilevel n: Sets the precision level for the integer representation to n: each integer value is represented by a set of up to n enumerated values; above this number, intervals (with congruence information) are used. See the sketch after this list.
  • -plevel n: Sets the precision level for array accesses to n: array accesses are precise as long as the interval for the index contains fewer than n values. See Tuning the precision for array accesses for more information about this option.
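The effect of -val-ilevel can be observed on a small sketch like this one (the variable names are illustrative):

#include <tis_builtin.h>

int main(void)
{
    int x = tis_interval(0, 20); /* 21 possible values */
    int y = 2 * x;
    /* With a small -val-ilevel (e.g. 8), y is approximated by an
       interval with congruence information: [0..40],0%2.
       With -val-ilevel 32, the 21 values fit under the limit, so y
       is kept as the exact set {0; 2; 4; ...; 40}. */
    tis_show_each_y(y);
    return 0;
}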

There are also options which make it possible to enable or disable certain alarms. Most of these options are enabled by default, and it is usually safer to leave them this way (unless you really know what you are doing).

Of course not all existing options have been enumerated here. The full list of the available options is given by -value-help.

Tuning the precision for array accesses

As explained in Tune the Precision, temporarily reducing the size of the arrays may be a first step during the interactive phase to make the analysis time shorter. But when analyzing large arrays, the -plevel option can be used to increase the precision level for array accesses. This option sets how hard the analyzer tries to be precise for memory accesses that, considering the imprecision on the indexes involved, can be at any of many offsets of a memory block. The default is 200 and may not be sufficient for an access at unknown indices inside a large or nested array to produce a precise result. This is illustrated by the example below:

#include <tis_builtin.h>

char c[20];
struct s { int m[50]; void *p; };
struct s t[60];

void init(void) {
  for (int i = 0; i < 60; i++)
    {
      for (int j = 0; j < 50; j++)
        t[i].m[j] = (i + j) % 10;
      t[i].p = c + i % 20;
    }
}

int main(void) {
  init();
  int x = tis_interval(5, 45);
  int y = tis_interval(2, 56);
  t[y].m[x] = -1;

  x = tis_interval(5, 45);
  y = tis_interval(2, 56);
  int result = t[y].m[x];

  int *p = &t[y].m[x];
  int result2 = *p;
}

With the default value of -plevel (200):

$ tis-analyzer -val -slevel-function init:10000 nestedarrays.c
  x ∈ [5..45]
  y ∈ [2..56]
  result ∈ {{ NULL + [-1..9] ; &c + [0..19] }}
  p ∈ {{ &t + [428..11604],0%4 }}
  result2 ∈ {{ NULL + [-1..9] ; &c + [0..19] }}

With a higher plevel:

$ tis-analyzer -val -slevel-function init:10000 nestedarrays.c -plevel 3000
  x ∈ [5..45]
  y ∈ [2..56]
  result ∈ [-1..9]
  p ∈ {{ &t + [428..11604],0%4 }}
  result2 ∈ {{ NULL + [-1..9] ; &c + [0..19] }}

Note that result2 is not precise even with the higher plevel. Handling the lvalue t[y].m[x] directly allows the analyzer to be optimal as long as the value of the plevel option allows it, but forcing the analyzer to represent the address as the value of the variable p produces this set of offsets:

{{ &t + [428..11604],0%4 }}

This set of offsets covers the addresses of pretty much everything in t, including the p pointer members, so when dereferencing p it appears that the result can be an address.

Trading memory for analysis speed

As explained in Tune the Precision, there is a trade-off between the precision of the analysis and the time it takes. There is also another one between the memory used to store intermediate results and the time it takes to recompute them.

The environment variable TIS_KERNEL_MEMORY_FOOTPRINT can be used to set the size of the caches used during the value analysis, speeding up some slow analyses in which the caches were getting thrashed. The default is 2, and each increment doubles the size of the caches. Only use this variable if you are not already low on memory.
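For instance, to quadruple the cache sizes with respect to the default value of 2 (myfile.c being a placeholder):

$ TIS_KERNEL_MEMORY_FOOTPRINT=4 tis-analyzer -val myfile.c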

Another useful option which helps reduce the computation time is -memexec-all. When this option is set, the tool tries, while analyzing a function call, to reuse the results of previous calls to the same function when possible.
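For instance (myfile.c being a placeholder):

$ tis-analyzer -val -memexec-all myfile.c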

Control the Analyzer

Enhance the Trace

It is not necessary to wait until the analysis is finished in order to examine the computed values. It is also possible to inspect the values of variables during an ongoing analysis by printing messages to the standard output or to a log file. This way one can keep an eye on what is going on.

First of all, the standard printf function can be used to output constant messages or messages involving only precise values. However, printing the computed values when they are imprecise is not possible with printf; the tis_show_each function should be used instead. The name of this function can be extended with any string, so that it is easier to distinguish between different calls (as the full function name is printed each time it is called). For instance:

tis_show_each_iteration (i, t[i]);

The statement above will output messages like this (one for each analyzed call):

[value] Called tis_show_each_iteration({0; 1; 2; 3; 4}, [0..10])

Another useful function, which properly handles imprecise values, is tis_print_subexps. When given any number of expressions as arguments, it will print the values of all the sub-expressions of each provided expression. The first argument of this function must be always a string literal, which will be printed out in order to help distinguish between different calls. For instance:

tis_print_subexps ("simple sums", x + y, y + z, z + x);

Such a statement will output messages like this:

[value] Values of all the sub-expressions of simple sums (expr 1):
    int x + y ∈ {3}
    int x ∈ {1}
    int y ∈ {2}
[value] Values of all the sub-expressions of simple sums (expr 2):
    int y + z ∈ {5}
    int y ∈ {2}
    int z ∈ {3}
[value] Values of all the sub-expressions of simple sums (expr 3):
    int z + x ∈ {4}
    int x ∈ {1}
    int z ∈ {3}

Moreover, tis_print_abstract_each allows the contents of structured variables to be observed. For instance:

tis_print_abstract_each(&i, &t);

This statement will output messages like this (one for each analyzed call):

[value] Called tis_print_abstract_each:
     i ∈ {0; 1; 2; 3; 4}
     t[0..4] ∈ [0..10]
      [5..99] ∈ UNINITIALIZED

Note that tis_print_abstract_each in general takes addresses of variables as parameters. This applies to arrays as well, such as t in the example above. Contrary to popular belief, when t is an array, &t is not the same as t, and the user should call tis_print_abstract_each(&t); to see the whole array (with pointer decay, only the first element would be shown).

To get even more information, the tis_dump_each function can be used to print the whole state at the program point where it is called. But it may be easier to call the tis_dump_each_file function to print the state to a file. The name of the file is computed from the first argument of the call (which must be a string literal), an incremented number, and an optional directory given by the -val-dump-directory option. The -val-dump-destination option allows choosing which kind of output is expected among txt or json (all for both, none for no output). For instance, when calling tis_dump_each_file ("state", *(p+i)) in a test.c file, and analyzing it with the command:

$ tis-analyzer -val test.c -val-dump-destination all -val-dump-directory /tmp

these messages are shown in the trace:

test.c:11:[value] Dumping state in file '/tmp/state_0.txt'
test.c:11:[value] Dumping state in file '/tmp/state_0.json'

The two generated files each hold both the whole state computed the first time the program point is reached and the possible values of *(p+i).

For instance, the JSON file may look like:

{
  "file": "test.c",
  "line": 11,
  "args": "([0..10])",
  "state": [
    {
      "base": "t",
      "values": [
        { "offset": "[0..4]", "value": "[0..10]" },
        { "offset": "[5..9]", "value": "UNINITIALIZED" }
      ]
    },
    { "base": "i", "values": [ { "value": "{0; 1; 2; 3; 4}" } ] }
  ]
}

To better understand the results, see the Value Analysis Data Representation section.

Stop the Analyzer when Alarms Occur

In order to avoid wasting time analyzing the application in a wrong context, the analysis can be stopped as soon as alarms are generated, thanks to the -val-stop-at-nth-alarm option. With an argument equal to 1, it aborts the analysis at the first alarm. To ignore a certain number of alarms, the argument can be increased. Although there is no strict relation between the given argument and the number of alarms generated before the analysis stops (i.e. these two values are not necessarily equal), one thing is guaranteed: providing a larger number will lead to skipping more alarms.
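For instance, to abort the analysis at the first alarm (test.c being a placeholder):

$ tis-analyzer -val -val-stop-at-nth-alarm 1 test.c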

Stop the Analyzer When Taking Too Long

The analyzer can also be stopped by sending a USR1 signal to the process. The process identifier (PID) can be found in the trace (unless the -no-print-pid option has been used or the TIS_KERNEL_TEST_MODE environment variable has been set). If the PID is 12345 for instance, the signal can be sent using the kill command:

$ kill -USR1 12345

The analyzer can also be stopped through the GUI (see Disconnect/Kill server).

When the analyzer receives this signal, it stops the Value analysis, but still continues with the other tasks. For instance, it still saves the current state if the -save option has been used. The saved state can then be loaded in the GUI to examine the results obtained so far. Notice that even if there are no more tasks to do, it can still take some time to properly stop the analysis.

The --timeout option can also be used to get a similar behavior after a given amount of time. For instance, the following command stops the analysis after 5 minutes and saves the results obtained so far in project.state:

$ tis-analyzer --timeout 5m -val ... -save project.state

Watchpoints

Another way to avoid wasting time by analyzing the application in a wrong context is to use watchpoints. Watchpoints make it possible to automatically stop the analysis when some specific memory conditions occur. There are currently five kinds of conditions available for this purpose:

  • tis_watch_cardinal: stop the analysis when the number of different values that a given memory location may possibly contain (because of imprecision) is greater than a certain maximal amount.
  • tis_watch_value: stop the analysis when a given memory location may possibly contain a value of the provided set of forbidden values.
  • tis_watch_address: stop the analysis when a given memory location may possibly contain an address.
  • tis_watch_garbled: stop the analysis when a given memory location may possibly contain a garbled mix value.
  • tis_detect_imprecise_pointer: stop the analysis when any expression is evaluated to an imprecise pointer which contains a given base address.

These functions are available using #include <tis_builtin.h>. The arguments of the four tis_watch_* functions follow the same logic:

  • The first two arguments define the memory location to watch: its address p and its size s.
  • The last argument n is the number of statements during which the condition may remain true before the analysis is stopped:
    • if n == -1, the analysis never stops, but messages are printed each time the condition is reached;
    • if n == 0, the analysis stops as soon as the condition is reached;
    • if n > 0, the analysis continues for the first n occurrences where the condition is reached (and prints messages for each of them) and stops at the next occurrence.
  • The meaning of the (potential) arguments in between varies depending on the particular function.

The function tis_detect_imprecise_pointer only takes a pointer as argument.

Each time a call to one of these functions is analyzed, a new watchpoint is set up (if it was not already present). These watchpoints remain active until the end of the analysis. Here is a typical example of using these functions:

 int x = 0; /* the memory location to watch */
 void *p = (void *)&x; /* its address */
 size_t s = sizeof(x); /* its size */
 int y[10];

 /* The analysis stops when x is not exact (i.e. not a singleton value). */
 int maximal_cardinal_allowed = 1;
 tis_watch_cardinal(p, s, maximal_cardinal_allowed, 0);

 /* The analysis stops the fourth time that x may be negative. */
 int forbidden_values = tis_interval(INT_MIN, -1);
 int exceptions = 3;
 tis_watch_value(p, s, forbidden_values, exceptions);

 /* The analysis stops when x may be an address. */
 tis_watch_address(p, s, 0);

 /* The analysis stops when x may be a garbled mix value. */
 tis_watch_garbled(p, s, 0);

 p = y;

 /* The analysis starts to detect if an expression is evaluated to an
    imprecise pointer starting at base address &y. */
 tis_detect_imprecise_pointer(p);

 /* The analysis stops because the expression p+tis_interval(0,3)
    is evaluated to an imprecise pointer &y + [0..3]. */
 *(p+tis_interval(0,3)) = 3;

Remove Alarms by Adding Annotations

Tuning the precision using various analysis options, as previously explained, is one way of removing false alarms. Another way of guiding the analyzer is by adding assertions to the program. Other kinds of properties can also be introduced, but assertions are by far the most frequently used for this purpose.

If you do not know how to add ACSL properties to your project, first read ACSL Properties.

Of course, as the analysis results rely on the properties introduced in this way, they must be properly checked. The best approach is to verify such properties formally using, for instance, the WP plug-in (see Prove Annotations with WP) or other formal tools. When it is not possible, they should be verified manually.

Warning

Annotations that cannot be formally proven have to be carefully reviewed and justified in order to convince any reader that they are indeed true, since all the analysis results rely on them.

Some examples are given below.

If source code modifications are needed, and the source code is in a git repository, the tis-modifications tool may be helpful to track them in order to check that they are correctly guarded. See tis-modifications Manual.

Assertion to Provide Missing Information

As mentioned before, the internal representation of values in the Value plug-in is based on intervals. Unfortunately some relevant information concerning variables just cannot be represented in this form and thus cannot be taken into account by the analyzer when it would make a difference. However, thanks to introducing well placed assertions it is possible to compensate for this disadvantage by explicitly providing the missing information.

Example:

     int T[10];
     ...
L1:  if (0 <= x && x < y) {
       ...
L2:    if (y == 10) {
L3:      ... T[x] ...
         }
       ...
       }

When the T[x] expression is encountered at the label L3, the analyzer tries to check whether the T array is accessed correctly (i.e. inside the array bounds), namely, whether the (0 <= x < 10) condition is true. It already knows that the (0 <= x) part holds, due to the conditional statement at label L1 (assuming that x has not been modified since then). Whatever values x might have had before, at L1 they have been restrained to non-negative ones only, and this fact has been stored in the internal state (the interval of values for x was modified) and is thus visible at L3. For example, if before L1 the value of x was [--..--] (i.e. nothing is known about x except that it is initialized), then after L1 it would be [0..--] (i.e. the interval spanning from zero to positive infinity).

Now, the analyzer still needs to verify that (x < 10) also holds. This is obvious to anybody who reads the code: the condition at L1 assures that (x < y) and the condition at L2 assures that (y == 10), therefore (x < 10) must be true. Unfortunately, because of the limitations of the adopted value representation method, the analyzer cannot deduce this by itself. The fact that (x < y) holds just cannot be expressed in the internal state (nor, actually, can any abstract relation between variables). And, supposing that the value of y at L1 is [--..--], the (x < y) condition does not help to restrain the values of x, so its upper bound remains as before. Hence, this important piece of information is lost, and the analyzer simply cannot connect it with (y == 10) at L2 in order to correctly restrain the value of x to [0..9]. So at L3 it will consider that the value of x is [0..--] and it will emit an alarm about a potential out-of-bounds access to the array T.

To help the analyzer, the appropriate assertion can be added explicitly:

at L3: assert ax: x < 10;

Then the alarm will disappear. Of course the ax assertion still needs to be verified by other means. For example, this particular assertion can be easily proven using WP (see Prove Annotations with WP).

State Splitting

State splitting is another assertion-based technique that makes it possible to guide the value analysis. In the example above, the same results can be obtained by splitting the internal state at L1 into two states, by introducing the following assertion:

at L1: assert ay: y <= 10 || 10 < y;

The analyzer can store multiple memory states at each program point (as explained in About the Value Analysis), and the maximal number of states that can be stored per program point in the internal representation is related to the precision level (i.e. slevel). So, provided that the -slevel option has set the precision level high enough to permit splitting the state here, the assertion above will lead to a case analysis:

  • The y <= 10 case: As y <= 10 is assumed, together with the x < y condition at L1, it leads to deducing x < 10, and this time this can be represented in the internal state.
  • The 10 < y case: If 10 < y is assumed, then the condition at L2 is false, and therefore the execution branch where the array T is accessed at L3 is not reached at all.

Thanks to introducing this assertion, the alarm will thus disappear. Moreover, the analyzer is able to check on its own that ay is always true, so there is nothing more to verify.
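For reference, here is a sketch of the example with the assertion in place (assuming enough slevel, at least 2 at the split point):

int T[10];

void f(int x, int y)
{
    if (0 <= x && x < y) {
        //@ assert ay: y <= 10 || 10 < y;
        if (y == 10)
            T[x] = 1; /* no alarm: x ∈ [0..9] in the reachable case */
    }
}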

It is worth pointing out that whenever the value analysis encounters an ACSL property, it tries to check its validity.

Tip

Some user annotations can be formally checked using just the value analysis, without the need of employing any other form of verification method (e.g. the WP plug-in).

Store Relational Information

In some cases, the most efficient way to guide the value analysis is to directly add an intermediate variable to the program in order to make it easier to analyze. This method should usually be avoided if possible, since it is intrusive to the application’s code. It should thus be used only when other solutions are not good enough or when you do not mind modifying the code.

For example, if the program contains a test (0 <= i+j < 10) and then several uses of T[i+j] follow, it may be convenient to add a temporary variable representing the i+j sum:

int tmp = i + j;
if (tmp < 10) {
  ..
  //@ assert a_tmp: tmp == i + j;
  ... T[tmp] ...
  }

If neither i nor j are modified in the meantime, the assertion that validates the code substitution should be trivial to verify, and the value analysis is now able to know that (tmp < 10).

Loop Invariant

Besides assert, requires and ensures, the loop invariant properties are also useful to enhance the analysis precision.

For instance, this function generates an alarm:

int T[100];

//@ requires 0 <= n < 50;
void main(int n)
{
    int i = 0;
    while (i <= n) {
        T[i] = 3;
        i++;
    }
    T[i] = i;
}

warning: accessing out of bounds index [1..127]. assert i < 100;

This is because the value of i is too imprecise when leaving the loop, so the analysis does not know whether the access to T[i] in the last assignment is valid or not.

Adding this loop invariant removes the alarm:

/*@ loop invariant i <= 50; */

Moreover, the value analysis is able to check that this property is always valid.
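For reference, the annotated function then reads (the invariant is placed just before the loop):

int T[100];

//@ requires 0 <= n < 50;
void main(int n)
{
    int i = 0;
    /*@ loop invariant i <= 50; */
    while (i <= n) {
        T[i] = 3;
        i++;
    }
    T[i] = i; /* i ∈ [1..50] here, so the access is valid */
}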

Bounded Buffers

A very common situation is to have a pointer to an array, and an integer that gives the number of remaining bytes between this pointer and the end of the array. In the internal representation of the values, it is not possible to represent relations between these two variables.

[Figure: Buffer problem. There is a relation between cur and len, but it cannot be represented.]

A typical function to handle this buffer is:

void process (char * cur, size_t len) {
  char * p = cur;
  for (size_t i = 0 ; i < len ; i++, p++) {
    *p = ...
  }
}

The validity of the pointer p has to be checked to avoid an alarm on p access, but also to get precise results later on. It is especially important when the pointer points to an array that is part of a larger structure. For instance:

struct data {
  char buffer[BUFFER_LEN];
  unsigned current_index;
  unsigned state;
};

//@ requires 0 <= data->current_index < BUFFER_LEN;
int treatment (struct data * data, int n) {
  char * current = data->buffer + data->current_index;
  size_t length = BUFFER_LEN - data->current_index;
  if (n > length) n = length;
  process (current, n);
  ...
  }

If the analysis is not able to know that p does not go beyond the end of the buffer field, the value of the other fields current_index and state might be modified as well and might be too imprecise for the analysis to give interesting results later on.

So the process function needs a precondition to give the constraint on cur and len to ensure the validity of the pointer. This precondition could simply be:

//@ requires \valid (cur + 0 .. len-1);

Unfortunately, the Value analysis is not able to reduce the input states with this kind of annotation, but the annotation can be translated into a more exploitable inequality when one of the two pieces of data is precise enough to reduce the other:

  • if the data to reduce is the pointer:
//@ requires cur <= \base_addr (cur) + \block_length (cur) - len * sizeof (*cur);
  • if the data to reduce is the length:
//@ requires length <= (\block_length (data) - \offset (data)) / sizeof (*data);

Notice that the ACSL functions \base_addr, \block_length and \offset only provide the expected information when cur is a pointer to an array allocated on its own. If the array is a field in a structure, \base_addr(cur) returns the base address of the structure.

[Figure: Structure with a .buf field. The ACSL functions relate to the allocated block, not to the internal array.]

Anyway, in some cases, even if the analyzer computes the optimal information, cur and len both have unknown values from intervals, and the relation between the two variables has been lost. So the memory access to (*p) raises an alarm whenever it cannot be checked that adding the upper bounds of both intervals stays smaller than (buf + BUFFER_LEN). Moreover, if buf is in a structure as explained above, buf and BUFFER_LEN may be unknown in the function.

A trick can be to modify the original function by adding a parameter that gives a pointer in the object beyond which the function is not expected to access:

/*@ requires f_r_buf: val: cur <= bound;
    requires f_r_len: wp: len <= bound - cur;
*/
void process_bounded (char * cur, size_t len, char * bound) {
  char * p = cur;
  //@ loop invariant f_l_p: val: p <= bound;
  for (size_t i = 0 ; i < len ; i++, p++) {
    if (p >= bound) break;
    *p = ...
  }
}

In the previous example, the call to process would have to be changed to:

process_bounded (current, n, data->buffer + BUFFER_LEN);

As long as the preconditions are true, this modified function is equivalent to the original one. The first precondition is often checked by the Value analysis; when it is not, the value analysis reduces the range of cur. The value analysis can use the second precondition to reduce the length.

Two annotated functions with such bounds, tis_memset_bounded and tis_memcpy_bounded, are provided to be used instead of memset and memcpy when this problem occurs with these libc functions.