The goal of this module is to learn the principles of 3D reconstruction of an object or a scene from multiple images or stereoscopic videos. To that end, the basic concepts of projective geometry and 3-space are first introduced; the rest of the theoretical aspects and applications are built upon these basic tools. The mapping from the 3D world to the image plane will be studied: we will introduce different camera models, their parameters, and how to estimate them (camera calibration and auto-calibration). The geometry that relates a pair of views will then be analyzed. All these concepts will be applied to obtain a 3D reconstruction in both possible settings: with calibrated or uncalibrated cameras. In particular, we will learn how to estimate the depth of image points, extract the underlying 3D points given a set of point correspondences in the images, generate novel views, estimate the 3D object given a set of calibrated color images or binary images, and estimate a sparse set of 3D points given a set of uncalibrated images. The representation of 3D shape with voxels and meshes will be studied. Finally, we will explain reconstruction and modeling from Kinect data, as a particular type of sensor that provides an image of the scene together with its depth. The concepts and techniques learnt in this module are used in real applications such as augmented reality, object scanning, motion capture, novel view synthesis, the bullet-time effect, and robotics.
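As a small taste of what is covered, the task of extracting a 3D point from a pair of point correspondences (triangulation) can be sketched with the classic linear (DLT) method. The following is a minimal illustration, not part of the module materials; the camera matrices and the observed point are hypothetical, and a real pipeline would add noise handling and a nonlinear refinement step.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: corresponding 2D image points in each view.
    Returns the 3D point in inhomogeneous coordinates.
    """
    # Each image point contributes two linear constraints on the
    # homogeneous 3D point X (derived from x × (P X) = 0).
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical setup: two calibrated cameras, the second translated
# along the x-axis, both observing the 3D point (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0, 1.0])

# Project the point into both images to obtain the correspondences.
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]

print(triangulate_point(P1, P2, x1, x2))  # recovers ≈ [0. 0. 5.]
```

With noise-free correspondences the method recovers the point exactly; with noisy measurements it minimizes an algebraic rather than a geometric error, which is why courses on this topic typically follow it with bundle adjustment.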