Active 3-D vision systems, such as laser scanners and structured-light systems, obtain object coordinates from external information such as scanning angles, time of flight, or the shape of projected patterns. Passive systems require well-defined features such as targets and edges and are affected by ambient light; they therefore have difficulty with sculptured surfaces and unstructured environments. Active systems provide their own illumination, so they can readily measure surfaces in most environments, but their accuracy drops when measurements are performed on objects with sharp discontinuities such as edges, holes, and targets. Most applications require measurements on both surfaces and on these types of features to describe an object completely, so a system based on range or intensity alone will not provide sufficient data. The integration of range and intensity data is therefore required to improve vision-based three-dimensional measurement. In addition, the accuracy obtained from the various types of vision systems, as a function of the viewing volume, has significantly different behaviours, so each type of sensor is better suited to a specific type of object or scene.

The techniques described in this paper measure the test scenes by integrating the registered range and intensity data produced by these range cameras, with the objective of providing highly accurate dimensional measurements. The paper presents the range cameras, the calibration procedures, and the results of measurements on test objects.
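The complementary roles of the two data channels can be sketched in code. The following is a minimal illustration, not the paper's method: all data are synthetic, and the edge-detection and masking rules are assumptions. It uses the intensity image to localize a sharp feature (a step edge), then excludes the range samples adjacent to that edge, where an active sensor's accuracy degrades, before measuring the surfaces from the remaining range data. The two images are assumed to be pixel-registered.

```python
import numpy as np

# Hypothetical registered images: a 16x16 scene with a step edge at column 8.
H, W = 16, 16
intensity = np.zeros((H, W))
intensity[:, 8:] = 1.0            # bright region to the right of the edge
rng = np.full((H, W), 500.0)      # flat surface at 500 mm
rng[:, 8:] = 520.0                # 20 mm depth step at the same edge

# 1) Localize the discontinuity from the intensity image
#    (horizontal gradient above an assumed threshold).
grad = np.abs(np.diff(intensity, axis=1))
edge_cols = np.where(grad.max(axis=0) > 0.5)[0]   # columns just left of a jump

# 2) Mask out range pixels near the edge, where active-range
#    measurements are least reliable.
mask = np.ones((H, W), dtype=bool)
for c in edge_cols:
    mask[:, max(c - 1, 0):min(c + 3, W)] = False

# 3) Measure each surface from the trusted range samples only.
left_depth = rng[:, :8][mask[:, :8]].mean()
right_depth = rng[:, 8:][mask[:, 8:]].mean()
print(edge_cols, left_depth, right_depth)
```

In this sketch the intensity channel supplies the feature location and the range channel supplies the surface depths, mirroring the division of labour between passive and active measurements described above.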