Evaluating and Tuning a Static Analysis to Find Null Pointer Bugs

Author(s): David Hovemeyer, Jaime Spacco, William Pugh
Venue: ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools and Engineering
Date: 2005


Quality
3

Language: Java
Defect Type: Null pointer exceptions
Uses Annotations: Yes
Requires Annotations: No
Sound: No
Flow-sensitive: Yes
Context-sensitive: Yes

The authors of this paper present a new method for finding null pointer exception bugs in Java code. They observe that the majority of null pointer bugs stem from a small set of common mistakes, so their method employs simple, fast analysis techniques rather than more rigorous ones. They acknowledge that because the technique targets bugs using simple heuristics, it will likely miss some real but subtle bugs. The analysis works at the bytecode level and, at each point in the bytecode, classifies every method parameter, local variable, and stack reference into a category. These categories record whether the reference is known to be null or non-null and how that value became known (for instance, null was assigned directly to the reference, or the reference was compared to null in a previous instruction). The tool reports warnings based on a few heuristics, such as flagging a reference that is checked for null and then dereferenced on the branch where it must be null (e.g., the programmer wrote a == null when they really meant a != null).
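A minimal sketch of the bug pattern that last heuristic targets (the class and method names here are invented for illustration, not taken from the paper):

```java
public class NullCheckExample {
    // Bug pattern the heuristic flags: the reference is compared to null,
    // then dereferenced on the branch where it is *known* to be null.
    // The programmer almost certainly meant `name != null`.
    static int buggyLength(String name) {
        if (name == null) {
            return name.length(); // guaranteed NullPointerException here
        }
        return 0;
    }

    // Corrected version: dereference only on the non-null branch.
    static int fixedLength(String name) {
        if (name != null) {
            return name.length();
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(fixedLength("abc"));  // safe on non-null input
        System.out.println(fixedLength(null));   // safe on null input
        try {
            buggyLength(null);
        } catch (NullPointerException expected) {
            // The flagged pattern really does fail at runtime.
            System.out.println("buggy version threw NPE as predicted");
        }
    }
}
```

Because the analysis tracks *why* a reference is known to be null (here, the preceding `== null` comparison), it can report this as a near-certain bug rather than a speculative warning.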

The authors evaluated their heuristics on a number of student projects and on Eclipse 3.0.1. Without programmer-supplied annotations, the tool missed 70–98% of the null pointer exceptions in the student code, although the false positive rate was nearly zero. With annotations giving hints to the analysis, the tool missed only 20–46% of the null pointer exceptions, but the false positive rate rose to 10–21%. On the larger production system (Eclipse 3.0.1), some heuristics were quite accurate (18% false positives) while others performed much worse (53% false positives). The authors avoid drawing conclusions about the usefulness of their technique on production systems in general, but speculate that adding annotations would drastically reduce the false positive rate.
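The annotations in question declare nullness expectations on parameters and return values. A rough sketch of how such annotations look and are used follows; the annotation declarations here are local stand-ins written for this example (the real tool ships its own annotation types), and the method is invented:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-in nullness annotations, declared locally so this file compiles
// on its own. They carry no behavior; a static analysis reads them.
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.PARAMETER, ElementType.METHOD})
@interface NonNull {}

@Retention(RetentionPolicy.CLASS)
@Target({ElementType.PARAMETER, ElementType.METHOD})
@interface CheckForNull {}

public class AnnotatedLookup {
    // @NonNull on the parameter tells the analysis callers must not pass
    // null; @CheckForNull on the return tells it callers must test the
    // result before dereferencing it.
    static @CheckForNull String findUser(@NonNull String id) {
        return id.equals("admin") ? "Administrator" : null;
    }

    public static void main(String[] args) {
        String user = findUser("admin");
        // Caller obeys the contract: check before dereference. Omitting
        // this check is exactly what the annotated analysis would flag.
        if (user != null) {
            System.out.println(user.length());
        }
    }
}
```

Annotations like these shift the tool from guessing programmer intent to checking declared contracts, which is why they cut the missed-bug rate so sharply in the student-code experiments.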
