- Invited speakers
University Jean Monnet, FR
In this talk I’ll show how color perception (which is tied to human perception and cognition as well as to the laws of physics) and depth estimation can help infer scene semantics. Color and depth can also simplify some computer vision tasks, such as object detection or object classification. Over the last ten years, considerable progress has been made on many sub-problems of the overall 3D scene understanding problem thanks to the widespread availability of depth-plus-color (RGB-D) cameras. In this talk I’ll highlight recent progress on RGB-D-based models, on our efforts toward 3D scene understanding, and on our work toward object recognition from feature correspondences and color mapping.
Inspired by the ability of humans to interpret and understand 3D scenes nearly effortlessly, I’ll show how geometry-based assumptions (such as the planar-surface assumption, the Manhattan-world assumption, and the bounding-box assumption) or cognition-based assumptions (related to structured labels, semantic labels, and color memory) can be reinforced by perception-based assumptions (related to color appearance, color constancy, and photometric invariance).