National Research Council Canada. Information and Communications Technologies
Augmented reality; Cameras; Multimedia systems; Pixels; Signal detection; Three dimensional computer graphics; Video cameras; 3D reconstruction; Camera-projector systems; Change detection; Copyright protections; Digital video cameras; Stand-off distance (SoD); Structured Light; Un-calibrated camera; Copyrights
This paper presents a camera-projector system which allows for 3D reconstruction and copyright protection. Our approach relies on video-embedded binary patterns whose proportion of black and white at each pixel corresponds to the original grayscale value. When projected at a sufficiently high frame rate, these binary patterns appear seamless to the observer, as if a normal video were being projected. These patterns have been designed to encode pixel position based on a collection of on-off/off-on transitions. When the camera is properly adjusted, these temporal transitions can be detected with a change detection method and converted into a 3D surface. However, when filmed by an uncalibrated camera, these transitions are not only unreadable, but also induce spatial parasitic patterns in the recorded video. Our method is motivated by two families of applications. First, all applications for which a creative visual work needs to be pre-warped according to a 3D model to prevent geometric distortion when displayed on a dynamic scene. Typical examples include augmented reality and plays involving artistic staging. Second, all applications for which a copyrighted multimedia document (e.g. a movie projected in a theater) must not be copied in its entirety with a hand-held digital video camera. The interference between the on-off/off-on transitions and the acquisition rate of most consumer-grade cameras creates disturbing visual artifacts. The system can even be adjusted to make sure a 'VOID' pattern appears in the recorded video. In this paper, we present how video frames are encoded and decoded and how the detection (and the non-detection) of temporal transitions allows for 3D reconstruction and copyright protection at the same time. Experimental results show that our system computes dense 3D maps at a rate of 11.25 fps and with an accuracy of approximately ±1/2 pixel, i.e. 56 microns when in focus and at a standoff distance of 75 cm.
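The encoding idea described above can be sketched as temporal dithering: a grayscale value is spread over a run of binary subframes so that their time average matches the original intensity, while the single rising and falling transition in the run can carry position information. The following minimal sketch illustrates this principle only; the function names, the 8-subframe count, and the contiguous on-run are illustrative assumptions, not the authors' actual encoding.

```python
# Illustrative sketch of the abstract's binary-pattern idea (assumed details,
# not the paper's exact scheme): a grayscale pixel becomes a short stream of
# binary subframes whose mean approximates the original value.

def encode_pixel(gray, n_subframes=8):
    """Return 0/1 subframe values whose proportion of 'on' frames
    approximates gray/255 (gray is an 8-bit intensity)."""
    n_on = round(gray / 255 * n_subframes)
    # Placing the 'on' subframes contiguously yields one off-on and one
    # on-off transition per cycle; shifting where the run starts is one way
    # such transitions could encode pixel position for structured light.
    return [1] * n_on + [0] * (n_subframes - n_on)

def decode_pixel(subframes):
    """Recover an approximate grayscale value by temporal averaging,
    as a high-frame-rate observer (or the eye) effectively does."""
    return round(sum(subframes) / len(subframes) * 255)
```

With 8 subframes the reconstruction is quantized to steps of about 32 gray levels; a real system would use more subframes or spatio-temporal dithering to hide this quantization.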