Description and representation of images
In computer vision, modern approaches to high-level semantic tasks such as image recognition or scene analysis rely on new methods to describe and represent visual information. In this research area we develop complete representation models and algorithms covering feature detection, local description, statistical representation, machine learning and deep networks. We consider color, RGB-D and infrared images or videos, and address applications such as health, digital technologies and cultural heritage.
The main issue addressed here is image understanding, i.e. extracting useful information from digital images and transferring it into relevant description models. These models give access to high-level semantic tasks such as image recognition or scene analysis, and rely on image processing, statistics and machine learning.
Our contribution is to propose complete representation models and algorithms, including feature extraction, local description and statistical representation, to address a variety of computer vision problems such as image classification, object detection and recognition, 2D or 3D segmentation, and registration.
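As an illustration of such a representation pipeline, the following sketch builds a bag-of-visual-words representation from SIFT local descriptors. It is not the team's actual code; it assumes OpenCV and scikit-learn are available, and all function names are illustrative.

import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract_sift_descriptors(image_paths):
    """Detect keypoints and compute 128-D SIFT local descriptors per image."""
    sift = cv2.SIFT_create()
    descriptors = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(gray, None)
        descriptors.append(desc if desc is not None else np.empty((0, 128), np.float32))
    return descriptors

def bag_of_words_representation(descriptors_per_image, n_words=256):
    """Quantize local descriptors into a visual vocabulary and build one
    normalized histogram (statistical representation) per image."""
    vocabulary = KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(descriptors_per_image))
    histograms = []
    for desc in descriptors_per_image:
        if len(desc) == 0:
            histograms.append(np.zeros(n_words))
            continue
        words = vocabulary.predict(desc)
        hist, _ = np.histogram(words, bins=np.arange(n_words + 1))
        histograms.append(hist / hist.sum())
    return np.array(histograms)

The resulting histograms can then be fed to any standard classifier for the tasks listed above.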
The main challenge is to choose adequate features and representation models to correctly extract and describe the relevant information. Our approaches mostly rely on local features (SIFT or HOG), probabilistic models (mutual information, density estimation) and machine learning (SVMs or deep neural networks).
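A minimal sketch of such a combination, assuming scikit-image and scikit-learn are available (parameter values are illustrative only), is a HOG description fed to a linear SVM for image classification:

import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def hog_features(images, size=(128, 128)):
    """Describe each grayscale image by a single HOG feature vector."""
    feats = []
    for img in images:
        img = resize(img, size, anti_aliasing=True)  # common size for fixed-length descriptors
        feats.append(hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)))
    return np.array(feats)

def train_classifier(train_images, train_labels):
    """Learn a linear SVM on top of the HOG representation."""
    clf = LinearSVC(C=1.0)
    clf.fit(hog_features(train_images), train_labels)
    return clf

# Usage: predictions = train_classifier(imgs, labels).predict(hog_features(test_imgs))

In practice the choice between such hand-crafted features and learned deep representations depends on the amount of available training data and on the application constraints.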
We tackle various application domains such as health, digital technologies and cultural heritage. Examples of topics we have addressed are given below: