Tutorials

Title Active Vision and Human Robot Collaboration
Speaker Dimitri Ognibene, Fiora Pirri, Guido De Croon, Lucas Paletta, Mario Ceresa, Manuela Chessa, Fabio Solari, Antonino Furnari
Date Sept. 11, 2017
Location Room 8
Time 9:00 – 13:00 with coffee break at 10:30 – 11:00, and 15:00 – 18:00 with coffee break at 16:30 – 17:00
Web Page https://sites.google.com/site/avhrc2017/
Abstract Unstructured social environments, e.g. building sites, release an overwhelming amount of information, yet behaviorally relevant variables may not be directly accessible.

Currently proposed solutions for specific tasks, e.g. autonomous cars, usually employ over-redundant, expensive and computationally demanding sensory systems that attempt to cover the wide set of sensing conditions the system may have to deal with.

Active control of the sensors and of the perception process, Active Perception (AP), is a key solution found by nature to cope with such problems, as shown by the foveal anatomy of the eye and its high mobility and control accuracy. The design principles of systems that adaptively find and select relevant information are important for both Robotics and Cognitive Neuroscience.

At the same time, collaborative robotics has recently progressed to human-robot interaction in real manufacturing. Measuring and modeling human task-specific gaze behaviour is mandatory for supporting smooth human-robot interaction.

Variables related to human attention processes are essential for the evaluation of human-robot interaction metrics. Moreover, anticipatory control for human-in-the-loop architectures, which enables robots to proactively collaborate with humans, relies heavily on the observed gaze and action patterns of their human partners.

The tutorial will describe several systems employing active vision to support robot behavior and collaboration with humans.

The systems described employ different strategies:

  1. model-based systems using information-theoretical measures to select perception parameters (see the sketch after this list);
  2. neural and bio-inspired perception controllers trained to support task execution;
  3. imitation-based attention.
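As a minimal illustration of the first strategy (a sketch of our own, not one of the tutorial's systems), a model-based controller can score each candidate sensing action by its expected information gain over a discrete belief about a hidden state and greedily execute the best one; all names below are hypothetical.

import numpy as np

def entropy(p):
    # Shannon entropy of a discrete distribution (in bits).
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def best_sensing_action(belief, likelihoods):
    # belief[s] = P(state s); likelihoods[a][o, s] = P(observation o | state s, action a).
    gains = []
    for lik in likelihoods:                        # one observation model per sensing action
        p_obs = lik @ belief                       # P(o | action)
        gain = entropy(belief)
        for o, po in enumerate(p_obs):
            if po > 0:
                posterior = lik[o] * belief / po   # Bayes update after observing o
                gain -= po * entropy(posterior)    # subtract expected posterior entropy
        gains.append(gain)                         # expected information gain of this action
    return int(np.argmax(gains)), gains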

Distinct complexities and corresponding solutions are posed by different settings and tasks. The tutorial will present architectural designs and signal processing methods for active vision systems employed in:

  1. Disaster site exploration
  2. Human-robot collaboration in industrial tasks
  3. Smart Surgical room
  4. Light AUV Navigation
  5. Humanoid Companions
  6. Inspection and object recognition

 

Title Humans through the eyes of a robot: how human social cognition could shape computer vision
Speaker Nicoletta Noceti, Alessandra Sciutti
Date Sept. 12, 2017
Location Room 9
Time 9:00 – 13:00 with coffee break at 10:30 – 11:00
Abstract The new frontiers of robotics research foresee future scenarios where artificial agents will participate more and more in our daily life activities. If nowadays the presence of robotic devices in our houses is limited to vacuum cleaners, pool cleaners and lawn mowers, it is plausible that we will experience an extraordinary growth of robotics demand in the consumer sector. According to the EU Strategic Road Map 2014-2020, robotics applications are expected to influence not only domestic activities, but also entertainment, education, monitoring, security and assisted living. This will lead robots to frequent interactions with untrained humans in unstructured environments. The success of the integration of robots in our everyday life is then subordinated to the acceptance of these novel tools by the population. The level of comfort and safety experienced by users during the interaction plays a fundamental role in this process. Hence, a key challenge in current robotics has become to maximize the naturalness of human-robot interaction (HRI), to foster a pleasant collaboration with potential non-expert users. One possible approach to this goal is drawing inspiration from human-human interaction. Indeed, humans have the ability to read imperceptible signals hidden in others’ movements that reveal their goals and emotional status. This mechanism supports mutual adaptation, synchronization and anticipation, which drastically cut the delays and the need for complex verbal instructions in the interaction and result in seamless and efficient collaboration. In this tutorial we will discuss some guidelines for the design and implementation of effective and natural HRI that stem from the principles governing human-human interaction and its development since birth. To this aim, we will discuss the strong interconnections between applied robotics and neuroscience and cognitive science, showing that the development of human perception may be a rich source of inspiration for the design of intelligent robots able to proficiently understand and collaborate with humans. Particular emphasis will be given to motion analysis, discussing tasks addressed in this domain, methodologies, challenges and open questions, while delineating possible research lines for future developments.

 

Title Virtual Cell Imaging (methods and principles)
Speaker David Svoboda
Date Sept. 12, 2017
Location Room 4
Time 9:00 – 13:00 with coffee break at 10:30 – 11:00
Abstract The interdisciplinary research connecting pure image processing and pure biology/medicine brings many challenging tasks. The tasks are highly practically oriented and their solutions have a direct impact on the development of disease treatments or drugs, for example. This talk aims at those students/researchers who plan to join application-oriented research groups, where segmentation or tracking methods for the proper analysis of fixed or living cells are developed or utilized. The attendees of this tutorial will not only be able to know and use the commonly available simulation toolkits or the benchmark image data produced by these toolkits to verify the accuracy of the inspected image analysis method. They will also understand the principles of these simulation frameworks and will be able to design and implement their own toolkits hand-tailored to their private data.

Title Image Tag Assignment, Refinement and Retrieval
Speaker Xirong Li, Tiberio Uricchio, Lamberto Ballan, Marco Bertini, Cees Snoek, Alberto Del Bimbo
Date Sept. 12, 2017
Location Room 8
Time 9:00 – 13:00 with coffee break at 10:30 – 11:00
Abstract In this half-day tutorial we focus on challenges in content-based image retrieval in the context of social image platforms and automatic image annotation, with a unified review of three closely linked problems in the field, i.e., image tag assignment, tag refinement, and tag-based image retrieval. Existing works in tag assignment, refinement, and retrieval vary in terms of their targeted tasks and methodology, making it non-trivial to interpret them within a unified framework. We reckon that all works rely on the key functionality of tag relevance, i.e., estimating the relevance of a specific tag with respect to the visual content of a given image. Given such a tag relevance function, one can perform tag assignment and refinement by sorting tags in light of the function, and retrieve images by sorting them accordingly. Consequently, we present a taxonomy, which structures the rich literature along two dimensions, namely media and learning. The media dimension characterizes what essential information the tag relevance function exploits, while the learning dimension depicts how such information is exploited. With this taxonomy, we discuss connections and differences between the many methods, their advantages as well as their limitations.

A selected set of eleven representative and highly cited works has been implemented and evaluated on the test bed for tag assignment, refinement, and/or retrieval. To facilitate comparisons with the state of the art, we present an open-source test bed comprising the source code of these eleven methods and an experimental setup based on four social image datasets and on ImageNet; the test bed can be further expanded, and the proposed experimental setup makes it easy to evaluate new methods. Moreover, we provide a brief live demo session with the methods, software and datasets. For repeatable experiments, all data (e.g. features) and code are available online.
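To make the unifying idea concrete, the sketch below (our illustration, not code from the tutorial or its test bed) shows how, given any tag relevance function relevance(tag, image), the three tasks reduce to sorting: assignment picks the top vocabulary tags for an image, refinement re-ranks the tags already attached to an image, and retrieval ranks images for a query tag. The neighbour-tag counting used in the toy example is only a placeholder relevance function.

from typing import Callable, List, Sequence

def assign_tags(image, vocabulary: Sequence[str], relevance: Callable, k: int = 5) -> List[str]:
    # Tag assignment: pick the k vocabulary tags most relevant to the image.
    return sorted(vocabulary, key=lambda t: relevance(t, image), reverse=True)[:k]

def refine_tags(image, user_tags: Sequence[str], relevance: Callable) -> List[str]:
    # Tag refinement: re-rank the tags already attached to the image.
    return sorted(user_tags, key=lambda t: relevance(t, image), reverse=True)

def retrieve_images(tag: str, images: Sequence, relevance: Callable, k: int = 10) -> List:
    # Tag-based retrieval: rank images by their relevance to the query tag.
    return sorted(images, key=lambda img: relevance(tag, img), reverse=True)[:k]

if __name__ == "__main__":
    # Toy data and a placeholder relevance: how often the tag appears among an image's neighbour tags.
    images = [{"id": 1, "neighbour_tags": ["dog", "grass", "dog"]},
              {"id": 2, "neighbour_tags": ["cat", "sofa"]}]
    rel = lambda tag, img: img["neighbour_tags"].count(tag)
    print(assign_tags(images[0], ["dog", "cat", "grass"], rel, k=2))  # ['dog', 'grass']
    print(retrieve_images("cat", images, rel, k=1))                   # [{'id': 2, ...}]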

 

Workshops

Title First International Workshop on Brain-Inspired Computer Vision (WBICV2017)
Organizers George Azzopardi, Laura Fernández-Robles, Antonio Rodríguez-Sánchez
Date Sept. 11, 2017
Location Room 9
Web page  http://wbicv2017.ai.edu.mt/
Description  The visual perception of a human is a complex process performed by various elements of the visual system of the brain. This remarkable unit of the brain has been used as a source of inspiration for developing algorithms that can be used in computer vision tasks such as finding objects, analysing motion, identifying or detecting instances, reconstructing scenes or restoring images. One of the most challenging goals in computer vision is, therefore, to design and develop algorithms that can process visual information as humans do.

The main aim of WBICV2017 is to bring together researchers from the diverse fields of computer science (pattern recognition, machine learning, artificial intelligence, high performance computing and visualisation) along with the fields of visual perception and visual psychophysics who aim to model different phenomena of the visual system of the brain. We look forward to discussing the current and next generation of brain system modelling for a wide range of vision-related applications. This workshop aims to showcase powerful, innovative and modern image analysis algorithms and tools inspired by the function and biology of the visual system of the brain.

The researchers will present their latest progress and discuss novel ideas in the field. Besides the technologies used, emphasis will be given to the precise problem definition, the available benchmark databases, and the need for evaluation protocols and procedures in the context of brain-inspired computer vision methods and applications.

Papers are solicited in, but not limited to, the following TOPICS:

  • Mathematical models of visual perception
  • Brain-inspired algorithms
  • Learning: Deep learning, recurrent networks, differentiable neural computers, sparse coding.
  • The appearance of neuronal properties: sparsity and selectivity
  • Circuitry: hierarchical representations and connections between layers.
  • Selecting where to look: saliency, attention and active vision.
  • Hierarchy of visual cortex areas
  • Feedforward, feedback and inhibitory mechanisms
  • Applications: object recognition, object tracking, medical image analysis, contour detection and segmentation
Program  09:20 – 09:30 Opening Remarks

09:30 – 10:30 Invited Speaker – John Tsotsos. York University, Canada. Title “It’s all about the constraints”.

10:30 – 11:00 Coffee Break

11:00 – 13:05 Oral Session I

  • “High-Pass Learning Machine: An Edge Detection Approach” by Alan Lucas Matias, Saulo Anderson F. Oliveira, Ajalmar R. Rocha Neto and Pedro Pedrosa Rebouças Filho.
  • “A New Objective Supervised Edge Detection Assessment using Hysteresis Thresholds” by Hasan Abdulrahman, Baptiste Magnier and Philippe Montesinos.
  • “Modelling of Poggendorff illusion via Sub-Riemannian Geodesics in SE(2)” by Benedetta Franceschiello, Alexey Mashtakov, Giovanna Citti and Alessandro Sarti.
  • “The Fusion of Optical and Orientation Information in a Markovian Framework for 3D Object Retrieval” by Laszlo Czuni and Metwally Rashad.
  • “Ventral Stream-Inspired Process for Deriving 3D Models from Video Sequences” by Julius Schöning and Gunther Heidemann.

13:05 – 14:30 Lunch break

14:30 – 15:30  Invited Speaker – Nicolai Petkov, University of Groningen, the Netherlands. Title “Representation learning with trainable COSFIRE filters”.

15:30 – 16:20  Oral Session II

  • “Learning Motion from Temporal Coincidences” by Christian Conrad and Rudolf Mester.
  • “Adaptive Motion Pooling and Diffusion for Optical Flow Computation” by N. V. Kartheek Medathati, Manuela Chessa, Guillaume S. Masson, Pierre Kornprobst and Fabio Solari.

16:20 – 16:30 Closing Remarks

16:30 – 17:00  Coffee Break

 

Title Social Signal Processing and Beyond (SSPandBE 2017)
Organizers Mariella Dimiccoli, Petia Ivanova Radeva, Marco Cristani
Date Sept. 11, 2017
Location Room 7
Web page http://www.ub.edu/cvub/SSPandBE/
Description The workshop provides a forum for presenting novel ideas and discussing future directions in the emerging areas of social signal processing in uncontrolled and virtual scenarios. It especially focuses on the interplay between computer vision, pattern recognition, and the social and psychological sciences. We strongly encourage papers covering topics from both the realms of social sciences and computer vision, proposing original approaches that draw from both worlds. Furthermore, we invite contributions on the more ambitious topics of everyday interactions from wearable cameras, groups and crowds, social interactions in a “virtual” setting, and unconventional social signals such as illumination and type of architecture.

Finally, the workshop will also feature an interactive session to explore existing and emerging research problems in the areas of interest for the workshop.

The relevant topics of interest for SSPANDBE include but are not limited to:

  • Multi-person/group/crowd interaction analysis
  • Situation awareness and understanding
  • First-person social interactions
  • Socially immersed first person cameras
  • Crowd/group analysis and simulation
  • Social scene and social context understanding
  • Social force models

The major criteria for the selection of papers will be their potential to generate discussion and influence future research directions. Papers have to present original research contributions not concurrently submitted elsewhere. Any paper published by the ACM, IEEE, etc. which can be properly cited constitutes research which must be considered in judging the novelty of a SSPandBE submission, whether the published paper was in a conference, journal, or workshop. Therefore, any paper previously published as part of a SSPandBE workshop must be referenced and suitably extended with new content to qualify as a new submission to the Research Track at the SSPandBE conference.

Paper submission is single blind and will be handled via EasyChair.

For any questions about the call for papers, please contact sspandbe@easychair.org

Program 9:15-9:30  Welcome and opening

9:30-10:30 Keynote speech: Computer vision meets smart lighting – Fabio Galasso (OSRAM)

10:30-11:00 Coffee break

11:00-13:00 Oral session

  • “Serious Games Application for Memory Training Using Egocentric Images” by Gabriel Oliveira-Barra, Marc Bolaños, Estefania Talavera, Adrián Dueñas, Olga Gelonch and Maite Garolera.
  • “Indirect Match Highlights Detection with Deep Convolutional Neural Networks” by Marco Godi, Paolo Rota and Francesco Setti.
  • “Signal Processing and Machine Learning for Diplegia Classification” by Luca Bergamini, Simone Calderara, Nicola Bicocchi, Alberto Ferrari and Giorgio Vitetta.
  • “Analysing First-Person Stories Based on Socializing, Eating and Sedentary Patterns” by Pedro Herruzo, Laura Portell, Alberto Soto and Beatriz Remeseiro.
  • “Implicit vs Explicit Human Feedback for Interactive Video Object Segmentation” by Francesca Murabito, Simone Palazzo, Concetto Spampinato and Daniela Giordano.
  • “‘Don’t turn off the lights’: Modelling of human light interaction in indoor environments” by Irtiza Hasan, Theodore Tsesmelis, Alessio Del Bue, Fabio Galasso and Marco Cristani.

13:00-13:30 Invited talk, Davide Bennato (University of Catania) title: Body, frame, actor, ethics. A social signal processing research agenda from a social science point of view.

13:30-14:00 Workshop closing

 

Title Automatic affect analysis and synthesis (3AS)
Organizers Nadia Berthouze, Simone Bianco, Giuseppe Boccignone, Paolo Napoletano
Date Sept. 11, 2017
Location Room 5
Web page  http://www.ivl.disco.unimib.it/w3as/
Description Affective computing is a research field that tries to endow machines with the capability to recognize, interpret and express emotions. On the one hand, the ability to automatically deal with human emotions is crucial in many human-computer interaction applications. On the other hand, people express affect through a complex series of actions relating to facial expression, body movements, gestures and voice prosody, accompanied by a variety of physiological signals, such as heart rate, sweat, etc.

Thus, the goals set by affective computing involve a number of challenging issues on how systems should be conceived, built, validated, and compared.

In this perspective, we are soliciting original contributions that address a wide range of theoretical and practical issues including, but not limited to:

  • Facial expression analysis and synthesis;
  • Body gesture and movement recognition;
  • Emotional speech processing;
  • Heart rate monitoring from videos;
  • Emotion analysis from physiological signs;
  • Multimodal affective computing;
  • Affect understanding and synthesis.
  • Computational Visual Aesthetics;
  • Recognition of group emotion;
  • Tools and methods of annotation for provision of emotional corpora;
  • Affective Applications: medical, assistive; virtual reality; entertainment; ambient intelligence, multimodal interfaces;

Selected papers of the workshop will be invited to be extended for a special issue in a leading international journal.

Program 14.15 – 15.00 Oral session

  • “Neonatal Facial Pain Assessment Combining Hand-crafted and Deep Features” by Luigi Celona and Luca Manoni
  • “Taking the hidden route: deep mapping of affect via 3D neural networks” by Raffaella Lanzarotti, Claudio Ceruti, Vittorio Cuculo, Alessandro D’Amelio, Giuliano Grossi
  • “A note on modelling a somatic motor space for affective facial expressions” by Raffaella Lanzarotti, Vittorio Cuculo, Alessandro D’Amelio, Giuliano Grossi, Jianyi Lin

15.00 – 15.45 Invited Speaker: Concetto Spampinato, Talk Title: Human-Based Computer Vision: A Brain-Driven Visual Classifier

15.45 – 16.30 Oral session II

  • “An affective BCI driven by self-induced emotions for people with severe neurological disorders” by Giuseppe Placidi, Luigi Cinque, Paolo Di Giamberardino, Daniela Iacoviello, Matteo Spezialetti
  • “Face Tracking and Respiratory Signal Analysis for the Detection of Sleep Apnea in Thermal Infrared Videos with Head Movement” by Marcin Kopaczka, Özcan Özkan, Dorit Merhof
  • “MOOGA Parameter Optimization for Onset Detection in EMG Signals” by Mateusz Magda, Antonio Martinez-Alvarez, Sergio Cuenca-Asensi

16.30-17.00 Coffee Break

17.00 – 17.45 Invited Speaker: Nicu Sebe, Talk Title: Multimodal Social Signals Analysis

17.45 Open discussion

 

Title Background learning for detection and tracking from RGBD Videos
Organizers Massimo Camplani, Lucia Maddalena, Luis Salgado
Date Sept. 11, 2017
Location Room 6
Web page  http://rgbd2017.na.icar.cnr.it/
Description The advent of low-cost RGB-D sensors such as Microsoft’s Kinect or Asus’s Xtion Pro is completely changing the computer vision world, as they are being successfully used in several applications and research areas. Many of these applications, such as gaming or human-computer interaction systems, rely on the efficiency of learning a scene background model for detecting and tracking moving objects, to be further processed and analyzed. Depth data are particularly attractive and suitable for applications based on moving object detection, since they are not affected by several problems typical of color-based imagery. However, depth data suffer from other types of problems, such as depth camouflage or noisy depth measurements, which limit the efficiency of depth-only background modeling approaches. The complementary nature of the color and depth synchronized information acquired with RGB-D sensors poses new challenges and design opportunities. New strategies are required that explore the effectiveness of combining depth- and color-based features, or their joint incorporation into well-known moving object detection and tracking frameworks; a toy sketch of such a combination is shown below.
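As a minimal sketch under our own assumptions (not a method presented at the workshop), the function below fuses per-pixel color and depth foreground masks: depth resolves color camouflage, while color covers pixels whose depth measurement is missing. All thresholds and the zero-means-invalid depth convention are assumptions of this sketch.

import numpy as np

def rgbd_foreground(rgb, depth, bg_rgb, bg_depth, color_thr=30.0, depth_thr=0.05):
    # Per-pixel color difference against the background model (registered RGB-D frames assumed).
    color_fg = np.linalg.norm(rgb.astype(float) - bg_rgb.astype(float), axis=2) > color_thr
    valid = (depth > 0) & (bg_depth > 0)              # assume 0 encodes a missing depth reading
    depth_fg = valid & (np.abs(depth - bg_depth) > depth_thr)
    # Trust depth where it is valid, fall back to the color cue elsewhere.
    return np.where(valid, depth_fg | color_fg, color_fg)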

The aim of the Workshop is to bring together researchers interested in background learning for detection and tracking from RGBD videos, in order to disseminate their most recent research results, advocate and promote research in this area, discuss rigorously and systematically potential solutions and challenges, promote new collaborations among researchers working in different application areas, and share innovative ideas and solutions for exploiting the potential synergies emerging from the integration of different application domains.

The workshop comes with the companion SBM-RGBD Challenge specifically devoted to scene background modeling from RGBD videos, aiming at advancing the development of related algorithms and methods through objective evaluation on a common dataset and common metrics.

Program 15:00-15:10 Opening

15:10-16:30 Oral Session

  • “People Detection and Tracking from an RGB-D Camera in Top-View Configuration: Review of Challenges and Applications” by Daniele Liciotti, Marina Paolanti, Emanuele Frontoni and Primo Zingaretti
  • “Moving Object Detection on RGB-D Videos Using Graph Regularized Spatiotemporal RPCA” by Sajid Javed, Thierry Bouwmans, Maryam Sultana and Soon Ki Jung
  • “CwisarDH+: Background Detection in RGBD Videos by Learning of Weightless Neural Networks” by Massimo De Gregorio and Maurizio Giordano
  • “Exploiting Color and Depth for Background Subtraction” by Lucia Maddalena and Alfredo Petrosino

16:30-17:00 Coffee break

17:00-17:20 “Simple Combination of Appearance and Depth for Foreground Segmentation” by Tsubasa Minematsu, Atsushi Shimada, Hideaki Uchiyama and Rin-ichiro Taniguchi

17:20-17:30 Conclusions

 

Title Natural human-computer interaction and ecological perception in immersive virtual and augmented reality (NIVAR2017)
Organizers Manuela Chessa, Fabio Solari, Jean-Pierre Bresciani
Date Sept. 12, 2017
Location Room 5
Web page nivar2017.wordpress.com
Description Given the recent spread of technologies, devices, systems and models for immersive virtual reality (VR) and augmented reality (AR), which are now effectively employed in various fields of application, an emerging issue is addressing how interaction occurs in such systems. In particular, a key problem is achieving a natural and ecological interaction with the devices typically used for immersive VR and AR, i.e. interacting with them by using the same strategies and eliciting the same perceptual responses as when interacting in the real world. This is particularly important when VR and AR systems are used in assistive contexts, e.g. targeting elderly or disabled people, or for cognitive and physical rehabilitation, but also to prevent and mitigate visual fatigue and cybersickness when targeting healthy people.
The main scope of this workshop is to bring together researchers and practitioners from both academia and industry interested in studying and developing innovative solutions with the aim of achieving natural human-computer interaction and ecological perception in VR and AR systems. Technical topics of interest include (but are not limited to):

  • Natural human-computer interaction in virtual/augmented/mixed reality environments.
  • Ecological validity of virtual/augmented/mixed reality systems and/or human-computer interaction.
  • Hand/ face/body recognition and tracking for human-computer interaction.
  • Action and activity recognition for human-computer interaction.
  • Vision neuroscience for human-computer-interaction.
  • Eye-tracking for human-computer interaction.
  • Computational vision models.
  • Depth (from stereo and/or other cues) and motion (also self-motion) perception in virtual/augmented/mixed reality environments.
  • Rendering in virtual/augmented/mixed reality environments.
  • Misperception issues and undesired effects in visualization devices (e.g., 3D displays, head-mounted displays)
  • Applications based on displays (also S3D), smartphones, tablets, head-mounted displays.
Program 9.15 – 9.30: Welcome and opening remarks

9.30 – 10.30: Invited Speaker: Paolo Pretto – Max Planck Institute for Biological Cybernetics – Germany, Title: How studying human self-motion perception can improve VR technology and vice versa

10.30 – 11.00: Coffee Break and Poster Session

11.00 – 11.30: Alexis Paljic – MINES ParisTech PSL – France, Title: Ecological Validity of Virtual Reality: Three Use Cases

11.30 – 12.30: Invited Speaker: Bruno Herbelin – École Polytechnique Fédérale de Lausanne, Switzerland, Title: Cognitive mechanisms behind embodiment and presence in virtual reality

12.30 – 13.00: Final remarks and Poster Session

 

Title First International Workshop on Biometrics as-a-service: cloud-based technology, systems and applications.
Organizers Silvio Barra, Arcangelo Castiglione, Kim-Kwang Raymond Choo, Fabio Narducci
Date Sept. 12, 2017
Location Room 7
Web page http://www.biplab.unisa.it/iwbaas/
Description Cloud-based biometrics is a relatively new topic, and solutions by emerging companies, e.g., BioID, ImageWare Systems, Animetrics and IriTech, further confirm the expectations of its rapid growth. Biometrics-as-a-service has the same benefits as any other cloud-based service. It is cost-effective, scalable, reliable and hardware agnostic, making enhanced security accessible anytime and anywhere. However, legal and privacy issues vary from country to country, thus limiting the progress of this branch of research on cloud computing. We therefore expect contributions that could also shed light on such less explored aspects.

Nowadays, the massive spread of cloud-based systems is leading service providers to offer more advanced access protocols to their users, which may overcome the limitations and weaknesses of traditional alphanumeric passwords. Experts all over the world are pushing for cloud-based biometric systems, which are supposed to be one of the upcoming research frontiers of the next years. Biometric credentials are difficult to steal and do not need to be remembered, making them suitable for on-the-move authentication scenarios, typical of the current mobile age. On the other hand, the remote storage of a biometric trait on the cloud is prone to function creep, i.e. the gradual widening of the use of a technology or system beyond the purpose for which it was originally intended. Legal and security issues related to the abuse and misuse of a biometric trait obstruct the rapid and widespread diffusion of such practice.

The objective of IW-BAAS is to capture the latest advances in this research field, soliciting papers and ideas on cloud-based biometric systems and services. Contributions on technical, legal, professional and ethical aspects related to the use of biometrics in cloud environments are also encouraged.

Topics of interest include, but are not limited to, the following:

  • Cloud-based Architectures for Biometric Systems;
  • Cloud-based Communication Protocols for Biometric Systems;
  • Biometric Security and Privacy Policy;
  • Ethical, legal, culture and regulation factors;
  • Biometric Storage in the Cloud;
  • Biometric Access Control of Cloud Data;
  • Mobile Biometrics and Cloud Computing;
  • Liveness/Spoofing Detection for Cloud Applications;
  • Biometric Cryptography;
  • Biometric Encryption in Cloud computing;
  • Biometric Fusion in the Cloud;
  • Smart spaces and Ambient Intelligence Environments;
  • Biometric representation suitable for the Cloud

A special issue of IEEE Cloud Computing will be devoted to the conference topics, and the best selected papers will be considered for publication as extended versions.

Please note that:

  • papers must have been presented in the conference;
  • papers should have been carefully revised and extended with at least 30% of new original content.
Program 08:30 Opening Session

09:00 Oral Session I (Chairs: Silvio Barra, Fabio Narducci)

9:00 – 9:30 Gianni Fenu, Mirko Marras, Leveraging Continuous Multi-Modal Authentication for Access Control in Mobile Cloud Environments

9:30 – 10:00 Marek Ogiela, Katarzyna Koptyra, Biometric Traits in Multi-Secret Digital Steganography

10:00 – 10:30 Andrea Bruno, Giuseppe Cattaneo, Umberto Ferraro Petrillo, Fabio Narducci, Gianluca Roscigno, Distributed Anti-Plagiarism Checker for Biomedical Images Based on Sensor Noise

10:30 – 11:00 Coffee Break

11:00 Special Session (Chair Massimo Tistarelli)

11:00 – 13:00 Presentation of the COSMOS multibiometric project.

13:00 – 15:00 Lunch break

15:00 Oral Session II (Chairs: Silvio Barra, Fabio Narducci)

15:00 – 15:30 Soumen Roy, Utpal Roy, Devadatta Sinha, Efficacy of Typing Pattern Analysis in Identifying Soft Biometric Information and Its Impact in User Recognition

15:30 – 16:00 Maria De Marsico, Eugenio Nemmi, Bardh Prenkaj, Gabriele Saturni, A Smart Peephole on the Cloud

16:00 – 16:30 Raffaele Montella, Alfredo Petrosino, Vincenzo Santopietro, WhoAreYou (WAY): a mobile CUDA powered picture ID Card recognition system

16:30 – 17:00 Coffee Break

17:00 Oral Session III (Chairs: Silvio Barra, Fabio Narducci)

17:00 – 17:30 Invited Speaker – Kim-Kwang Raymond Choo, University of Texas at San Antonio, Texas, USA

17:30 – 18:00 Michael Philip Orenda, Lalit Garg, Gaurav Garg, Exploring the feasibility to authenticate users of web and cloud services using a brain-computer interface (BCI)

18:00 Closing Remarks

 

Title Third International Workshop on Multimedia Assisted Dietary Management (MADiMa 2017)
Organizers Stavroula Mougiakakou, Giovanni Maria Farinella, Keiji Yanai
Date Sept. 12, 2017
Location Room 6
Web page www.madima.org/
Description The prevention of onset and progression of diet-related acute and chronic diseases (e.g. diabetes, obesity, cardiovascular diseases and cancer) requires reliable and intuitive dietary management. The need for accurate, automatic, real-time and personalized dietary advice has been recently complemented by the advances in computer vision and smartphone technologies, permitting the development of the first mobile food multimedia content analysis applications. The proposed solutions rely on the analysis of multimedia content captured by wearable sensors, smartphone cameras, barcode scanners, RFID readers and IR sensors, along with already established nutritional databases and often require some user input. In the field of nutritional management, multimedia not only bridges diverse information and communication technologies, but also computer science with medicine, nutrition and dietetics. This confluence brings new challenges and opportunities on dietary management.

MADiMa2017 aims to bring together researchers from the diverse fields of engineering, computer science and nutrition who investigate the use of information and communication technologies for better monitoring and management of food intake. The combined use of multimedia, machine learning algorithms, ubiquitous computing and mobile technologies permit the development of applications and systems able to monitor the dietary behavior, analyze food intake, identify eating patterns and provide feedback to the user towards healthier nutrition. The researchers will present their latest progress and discuss novel ideas in the field. Besides the technologies used, emphasis will be given to the precise problem definition, the available nutritional databases, the need for benchmarking multimedia databases of packed and unpacked food and the evaluation protocols.

Topics of interest include (but are not limited to) the following:

  • Ubiquitous and mobile computing for dietary assessment
  • Computer vision for food detection, segmentation and recognition
  • 3D reconstruction for food portion estimation
  • Augmented reality for food portion estimation
  • Wearable sensors for food intake detection
  • Computerized food composition (nutrients, allergens) analysis
  • Multimedia technologies for eating monitoring
  • Smartphone technologies for dietary behavioral patterns
  • Deep Learning for food analysis
  • Food Images and Social Media
  • Food multimedia databases
  • Evaluation protocols of dietary management systems
  • Multimedia assisted self-management of health and disease
Program 09:20 – 09:30 Workshop Opening

09:30 – 10:30 Oral Session 1

  • “Distinguishing Nigerian Food Items and Calorie Content with Hyperspectral Imaging” by Xinzuo Wang; Neda Rohani; Adwaiy Manerikar; Aggelos Katsaggelos; Oliver Cossairt; Nabil Alshurafa
  • “Building parsimonious SVM models for chewing detection and adapting them to the user” by Iason Karakostas; Vasileios Papapanagiotou; Anastasios Delopoulos
  • “Food Recognition using Fusion of Classifiers based on CNNs (ICIAP2017 presentation)” by Eduardo Aguilar; Marc Bolaños; Petia Radeva

10:30 – 11:00 Coffee Break

11:00 – 12:50 Oral Session 2

  • “Learning CNN-based Features for Retrieval of Food Images” by Gianluigi Ciocca; Paolo Napoletano; Raimondo Schettini
  • “A Multimedia Database for Automatic Meal Assessment Systems” by Dario Allegra; Marios Anthimopoulos; Joachim Dehais; Ya Lu; Filippo Stanco; Giovanni Maria Farinella; Stavroula Mougiakakou
  • “Food Ingredients Recognition through Multi-label Learning” by Marc Bolanos; Aina Ferra; Petia Radeva
  • “On Comparing Color Spaces for Food Segmentation” by Sinem Aslan; Gianluigi Ciocca; Raimondo Schettini

12:30 – 15:30 Lunch Break

15:30 – 16:30 Oral Session 3

  • “Food Intake Detection from Inertial Sensors using LSTM Networks” by Konstantinos Kyritsis; Christos Diou; Anastasios Delopoulos
  • “Comparison of Two Approaches for Direct Food Calorie Estimation” by Takumi Ege; Keiji Yanai
  • “Personalized Dietary Self-Management using Mobile Vision-based Assistance” by Georg Waltner; Michael Schwarz; Stefan Ladstätter; Anna Weber; Patrick Luley; Meinrad Lindschinger; Irene Schmid; Walter Scheitz; Horst Bischof; Lucas Paletta

16:30 – 17:00 Coffee Break

17:00 – 18:00 Poster / Demo Session

  • “Distinguishing Nigerian Food Items and Calorie Content with Hyperspectral Imaging” by Xinzuo Wang; Neda Rohani; Adwaiy Manerikar; Aggelos Katsaggelos; Oliver Cossairt; Nabil Alshurafa
  • “Building parsimonious SVM models for chewing detection and adapting them to the user” by Iason Karakostas; Vasileios Papapanagiotou; Anastasios Delopoulos
  • “Food Recognition using Fusion of Classifiers based on CNNs (ICIAP2017 presentation)” by Eduardo Aguilar; Marc Bolaños; Petia Radeva
  • “Learning CNN-based Features for Retrieval of Food Images” by Gianluigi Ciocca; Paolo Napoletano; Raimondo Schettini
  • “A Multimedia Database for Automatic Meal Assessment Systems” by Dario Allegra; Marios Anthimopoulos; Joachim Dehais; Ya Lu; Filippo Stanco; Giovanni Maria Farinella; Stavroula Mougiakakou
  • “Food Ingredients Recognition through Multi-label Learning” by Marc Bolanos; Aina Ferra; Petia Radeva
  • “On Comparing Color Spaces for Food Segmentation” by Sinem Aslan; Gianluigi Ciocca; Raimondo Schettini
  • “Food Intake Detection from Inertial Sensors using LSTM Networks” by Konstantinos Kyritsis; Christos Diou; Anastasios Delopoulos
  • “Comparison of Two Approaches for Direct Food Calorie Estimation” by Takumi Ege; Keiji Yanai
  • “Personalized Dietary Self-Management using Mobile Vision-based Assistance” by Georg Waltner; Michael Schwarz; Stefan Ladstätter; Anna Weber; Patrick Luley; Meinrad Lindschinger; Irene Schmid; Walter Scheitz; Horst Bischof; Lucas Paletta
  • “Understanding Food Images to Recommend Utensils During Meals” by Francesco Ragusa; Antonino Furnari; Giovanni Maria Farinella
  • “Pocket Dietitian: Automated Healthy Dish Recommendations by Location” by Nitish Nag; Vaibhav Pandey; Abhisaar Sharma; Jonathan Lam; Runyi Wang; Ramesh Jain

18:00 – 18:15 Best Paper and Concluding Remarks

Main Conference

Wednesday 13 September

08:30 – 09:00 Registration
09:00 – 09:30 Opening and Overview
09:30 – 10:30 Invited Speaker: Irfan Essa, Georgia Institute of Technology, US

Title “Computational Video: Technologies for Analysis, Creation, Enhancement, and Sharing of Video”

Invited Speaker: Nicu Sebe, University of Trento, IT

Title “Multimodal Social Signals Analysis”

10:30 – 11:00 Coffee Break
11:00 – 12:20 Oral Session 1: Face and Body Recognition (Chair Rita Cucchiara)
“Deep Face Model Compression Using Entropy-based Filter Selection” by Bingbing Han, Zhihong Zhang, Chuanyu Xu, Beizhan Wang, Guosheng Hu, Lu Bai, Qingqi Hong, Edwin Hancock
“Emotion Recognition by Body Movement Representation on the Manifold of Symmetric Positive Definite Matrices” by Mohamed Daoudi, Stefano Berretti, Pietro Pala, Yvonne Delevoye, Alberto Del Bimbo
“Virtual EMG via facial video analysis” by Giuseppe Boccignone, Vittorio Cuculo, Giuliano Grossi, Raffaella Lanzarotti, Raffaella Migliaccio
“Person Re-Identification using Partial Least Squares Appearance Modelling” by Gregory Watson, Abhir Bhalerao
12:20 – 13:20 Oral Session 2: Neural Networks (Chair Alfredo Petrosino)
“Linear Regularized Compression of Deep Convolutional Neural Networks” by Claudio Ceruti, Paola Campadelli, Elena Casiraghi
“Just DIAL: DomaIn Alignment Layers for Unsupervised Domain Adaptation” by Fabio Maria Carlucci, Lorenzo Porzi, Barbara Caputo, Elena Ricci, Samuel Rota Bulò
“Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture” by Patricia Suarez, Angel Sappa, Boris Vintimilla
13:20 – 15:00 Lunch
15:00 – 16:30 Interactive Session 1:
“Multi-stage Neural Networks with Single-sided Classifiers for False Positive Reduction and its Evaluation using Lung X-ray CT Images” by Masaharu Sakamoto, Hiroki Nakano, Kun Zhao, Taro Sekiyama
“Learning from enhanced contextual similarity in brain imaging data for classification of schizophrenia” by Tewodros Mulugeta Dagnew, Letizia Squarcina, Massimo Rivolta, Paolo Brambilla, Roberto Sassi
“One-Step Time-Dependent Future Video Frame Prediction with a Convolutional Encoder-Decoder Neural Network” by Vedran Vukotić, Silvia-Laura Pintea, Christian Raymond, Guillaume Gravier, Jan van Gemert
“Revisiting Human Action Recognition: Personalization vs. Generalization” by Andrea Zunino, Jacopo Cavazza, Vittorio Murino
“Human action classification using an extended BoW formalism” by Raquel Almeida, Benjamin Bustos, Zenilton Kleber Patrocinio, Silvio Guimaraes
“Graph-based Hierarchical Video Cosegmentation” by Franciele Rodrigues, Pedro Leal, Yukiko Kenmochi, Jean Cousty, Laurent Najman, Silvio Guimaraes, Zenilton Kleber Patrocinio
“How Far Can You Get by Combining Change Detection Algorithms?” by Simone  Bianco, Gianluigi Ciocca, Raimondo Schettini
“Video Saliency detection based on Boolean Map theory” by Rahma Kalboussi, Mehrez Abdellaoui, Ali Douik
“Weighty LBP: a new selection strategy of LBP codes depending on their information content” by Daniel Riccio, Maria De Marsico
“Interest Region based Motion Magnification” by Manisha Verma, Shanmuganathan Raman
“Histological image analysis by invariant descriptors” by Andrea Loddo, Cecilia Di Ruberto, Lorenzo Putzu
“Robust Tracking of Walking Persons by Elite-type Particle Filters and RGB-D Images” by Akari Oshima, Shun’ichi Kaneko, Masaya Itoh
“Network Edge Entropy from Maxwell-Boltzmann Statistics” by Jianjia Wang, Richard Wilson, Edwin Hancock
“Emotion Recognition Based on Occluded Facial Expressions” by Jadisha Cornejo, Helio Pedrini
“Deep Multibranch Neural Network for Painting Categorization” by Simone  Bianco, Davide Mazzini, Raimondo Schettini
“HoP: Histogram of Patterns for Human Action Representation” by Vito Monteleone, Liliana Lo Presti, Marco La Cascia
“360° Tracking using a virtual PTZ Camera” by Marco La Cascia, Luca Greco
“Complexity and Accuracy of Hand-Crafted Detection Methods Compared to Convolutional Neural Networks” by Valeria Tomaselli, Emanuele Plebani, Sebastiano Mauro Strano, Danilo Pau
“Organizing Videos Streams for Clustering and Estimation of Popular Scenes” by Sebastiano Battiato, Giovanni Maria Farinella, Filippo Milotta, Alessandro Ortis, Filippo Stanco, Valeria D’Amico, Luca Addesso, Giovanni Torrisi
“Exploiting context information for image description” by Andrea Apicella, Anna Corazza, Francesco Isgrò, Giuseppe Vettigli
“Investigating the use of space-time primitives to understand human movements” by Damiano Malafronte, Gaurvi Goyal, Alessia Vignolo, Francesca Odone, Nicoletta Noceti
“Visual and Textual Sentiment Analysis of brand-related social media pictures using Deep Convolutional Neural Networks” by Marina Paolanti, Carolin Kaiser, Renè Schallner, Emanuele Frontoni, Primo Zingaretti
“On the Importance of Domain Adaptation in Texture Classification” by Barbara Caputo, Claudio Cusano, Martina Lanzi, Paolo Napoletano, Raimondo Schettini
“A System for Autonomous Landing of a UAV on a Moving Vehicle” by Sebastiano Battiato, Luciano Cantelli, Fabio D’Urso, Giovanni Maria Farinella, Luca Guarnera, Dario Guastella, Donato Melita, Giovanni Muscato, Alessandro Ortis, Francesco Ragusa, Corrado Santoro
“Benchmarking two algorithms for people detection from top-view depth cameras” by Vincenzo Carletti, Luca Del Pizzo, Gennaro Percannella, Mario Vento
“Gesture Modelling and Recognition by Integrating Declarative Models and Pattern Recognition Algorithms” by Giorgio Fumera, Davide Spano, Alessandro Carcangiu, Fabio Roli
“A Rank Aggregation Framework for Video Interestingness Prediction” by Jurandy Almeida, Lucas Valem, Daniel Pedronette
“Indoor actions classification through long short term memory neural networks” by Emanuele Cipolla, Ignazio Infantino, Umberto Maniscalco, Giovanni Pilato, Filippo Vella
“Convex Polytope Ensembles for Spatio-Temporal Anomaly Detection” by Francesco Turchini, Lorenzo Seidenari, Alberto Del Bimbo
“3D object detection method using LiDAR information in multiple frames” by Jung-Un Kim, Jihong Min, Hang-Bong Kang
“Rotation invariant co-occurrence matrix features” by Lorenzo Putzu, Cecilia Di Ruberto
“A Machine Learning Approach for the Online Separation of Handwriting from Freehand Drawing” by Danilo Avola, Marco Bernardi, Luigi Cinque, Gian Luca Foresti, Marco Raoul Marini, Cristiano Massaroni
“A Tensor Framework for Data Stream Clustering and Compression” by Boguslaw Cyganek
“Generating Knowledge-Enriched Image Annotations for Fine-grained Visual Classification” by Francesca Murabito, Simone Palazzo, Concetto Spampinato, Daniela Giordano
“Lifting 2D object detections to 3D: A geometric approach in multiple views” by Cosimo Rubino, Andrea Fusiello, Alessio Del Bue
“A Fully Convolutional Network for Salient Object Detection” by Simone Bianco, Marco Buzzelli, Raimondo Schettini
“A Computer Vision System for Monitoring Ice-Cream Freezers” by Alessandro Torcinovich, Marco Fratton, Marcello Pelillo, Alberto Pravato, Alessandro Roncato
16:30 – 17:00 Coffee Break
17:00 – 18:00 Invited Speaker: Roberto Scopigno, ISTI-CNR, Italy
Title: “Visual Technologies for CH: Current Status and Perspectives”
18:30 Palazzo della Cultura: Visit to the Escher exhibition
20:00 Welcome Party at “Museo Diocesano” Terrace

Thursday 14 September

09:00 – 10:00 Invited Speaker: Daniel Cremers, Technische Universität München, DE
Title “Direct Methods for Image-Based 3D Reconstruction & Visual SLAM”
10:00 – 11:00 Oral Session 3: Action Recognition (Chair Mario Vento)
“A Compact Kernel Approximation for 3D Action Recognition” by Jacopo Cavazza, Pietro Morerio, Vittorio Murino
“Joint orientations from skeleton data for human activity recognition” by Annalisa Franco, Antonio Magnani, Dario Maio
“Discriminative Dictionary Design for Action Classification in Still Images” by Abhinaba Roy, Biplab Banerjee, Vittorio Murino
11:00 – 11:30 Coffee Break
11:30 – 12:30 Oral Session 4: Visual Search (Chair Niculae Sebe)
“Learning to Weight Color And Depth for RGB-D Visual Search” by Alioscia Petrelli, Luigi Di Stefano
“Two-Stage Recognition for Oracle Bone Inscriptions” by Lin Meng
“Feature clustering with fading affect bias: building visual vocabularies on the fly” by Ziyin Wang, Gavriil Tsechpenakis
12:30 – 13:30 Oral Session 5: Special Session Imaging Solutions for Improving the Quality of Life (I-LIFE’17) (Chair Dan Popescu)
“Showing Different Images to Observers by using Difference in Retinal Impulse Response” by Daiki Ikeba, Fumihiko Sakaue, Jun Sato, Roberto Cipolla
“Interconnected Neural Networks Based on Voting Scheme and Local Detectors for Retinal Image Analysis and Diagnosis” by Dan Popescu, Traian Caramihale, Loretta Ichim
“Measuring Refractive Properties of Human Vision by Showing 4D Light Fields” by Megumi Hori, Fumihiko Sakaue, Jun Sato, Roberto Cipolla
13:30 – 15:00 Lunch
15:00 – 17:00 Interactive Session 2 & Coffee Break:
“Feature Points Densification and Refinement” by Andrey Bushnevskiy, Lorenzo Sorgi, Bodo Rosenhahn
“Join cryptography and digital watermarking for 3D multiresolution meshes security” by Ikbel Sayahi, Akram Elkefi, Chokri Ben Amar
“Towards Video Captioning with Naming: a Novel Dataset and a Multi-Modal Approach” by Stefano Pini, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara
“A Matrix Decomposition Perspective on Calibrated Photometric Stereo” by Luca Magri, Roberto Toldo, Umberto Castellani, Andrea Fusiello
“Optical Coherence Tomography Denoising by Means of a Fourier Butterworth Filter-based Approach” by Gabriela Samagaio, José Joaquim De Moura Ramos, Jorge Novo, Marcos Ortega
“Feature definition and selection for epiretinal membrane characterization in Optical Coherence Tomography images” by Sergio Baamonde, José Joaquim De Moura Ramos, Jorge Novo, José Rouco, Marcos Ortega
“Contactless Physiological Data Analysis for User Quality of Life Improving by Using a Humanoid Social Robot” by Roxana Agrigoroaie, Adriana Tapus
“Efficient confidence measures for embedded stereo” by Matteo Poggi, Fabio Tosi, Stefano Mattoccia
“Adaptive Low Cost Algorithm for Video Stabilization” by Giuseppe Spampinato, Arcangelo Bruna, Filippo Naccari, Valeria Tomaselli
“CNN-based Identification of Hyperspectral Bacterial Signatures for Digital Microbiology” by Giovanni Turra, Simone Arrigoni, Alberto Signoroni
“Kinect-based gait analysis for people recognition over time” by Elena Gianaria, Marco Grangetto, Nello Balossino
“Exploiting Social Images to Understand Tourist Behaviour” by Alessandro Torrisi, Giovanni Gallo, Giovanni Signorello, Giovanni Maria Farinella
“A Hough Voting Strategy for Registering Historical Aerial Images to Present-Day Satellite Imagery” by Sebastian Zambanini, Robert Sablatnig
“A smartphone-based system for detecting falls using anomaly detection” by Vincenzo Carletti, Alessia Saggese, Antonio Greco, Mario Vento
“Deep Appearance Features for Abnormal Behavior Detection in Video” by Sorina Smeureanu, Radu Tudor Ionescu, Marius Popescu, Bogdan Alexe
“Crossing the Road Without Traffic Lights: An Android-based Safety Device” by Adi Perry, Dor Verbin, Nahum Kiryati
“Combining Color Fractal with LBP Information for Flood Segmentation in UAV-based Images” by Dan Popescu, Loretta Ichim
“On the Estimation of Children’s Poses” by Giuseppa Sciortino, Giovanni Maria Farinella, Sebastiano Battiato, Marco Leo, Cosimo Distante
“Fast and Accurate Facial Landmark Localization in Depth Images for In-car Applications” by Elia Frigieri, Guido Borghi, Roberto Vezzani, Rita Cucchiara
“Pixel classification methods to detect skin lesions on dermoscopic medical images” by Fabrizio Balducci, Costantino Grana
“ARCA (Automatic Recognition of Color for Archaeology): a Desktop Application for Munsell Estimation” by Filippo Milotta, Filippo Stanco, Davide Tanasi
“GRAPHJ: A Forensics Tool for Handwriting Analysis” by Luca Guarnera, Giovanni Maria Farinella, Antonino Furnari, Angelo Salici, Claudio Ciampini, Vito Matranga, Sebastiano Battiato
“Wink detection on the eye image as a control tool in multimodal interaction” by Piotr Kowalczyk, Dariusz Sawicki
“Description of Breast Morphology through Bag of Normals Representation” by Dario Allegra, Filippo Milotta, Diego Sinitò, Filippo Stanco, Giovanni Gallo, Wafa Taher, Giuseppe Catanuto
“Smartphone based pupillometry: an empirical evaluation of accuracy and safety” by Sergio Di Martino, Daniel Riccio, Davide Maria Calandra, Antonio Visconti
“Fully-Automated CNN-based Computer Aided Celiac Disease Diagnosis” by Michael Gadermayr, Georg Wimmer, Andreas Uhl, Hubert Kogler, Andreas Vécsei, Dorit Merhof
“Recognizing Context for Privacy Preserving of First Person Vision Image Sequences” by Sebastiano Battiato, Giovanni Maria Farinella, Christian Napoli, Gabriele Nicotra, Salvatore Riccobene
“Bio-Inspired Feed-Forward System for Skin Lesion Analysis, Screening and Follow-up” by Francesco Rundo, Sabrina Conoci, Giuseppe Banna, Filippo Stanco, Sebastiano Battiato
“Remote biometric verification for eLearning applications: where we are” by Pietro Sanna, Gian Luca Marcialis
“An investigation of deep learning for lesions malignancy classification in breast DCE-MRI” by Stefano Marrone, Gabriele Piantadosi, Roberta Fusco, Antonella Petrillo, Mario Sansone, Carlo Sansone
“A Unified Color and Contrast Age-Dependent Visual Content Adaptation” by M’Hand Kedjar, Greg Ward, Hyunjin Yoo, Afsoon Soudi, Tara Akhavan, Carlos Vazquez
“Real Time Indoor 3D Pipeline for an Advanced Sensory Substitution Device” by Anca Morar, Florica Moldoveanu, Lucian Petrescu, Alin Moldoveanu
“H-264/RTSP Multicast Stream Integrity” by Andrea Bruno, Giuseppe Cattaneo, Fabio Petagna
“A Framework for Activity Recognition through Deep Learning and Abnormality Detection in Daily Activities” by Irina Mocanu, Bogdan Cramariuc, Oana Balan, Alin Moldoveanu
“3D Reconstruction from Specialized Wide Field of View Camera System using Unified Spherical Model” by Ahmad Zawawi Jamaluddin, Cansen Jiang, Olivier Morel, Ralph Seulin, David Fofi
“Automated Optic Disc Segmentation using Polar Transform based Adaptive Thresholding for Glaucoma Detection” by Muhammad Nauman Zahoor, Muhammad Moazam Fraz, Arsalan Ahmad
“Automatic Multi-Seed Detection For MR Breast Image Segmentation” by Albert Comelli, Alessandro Bruno, Maria Laura Di Vittorio, Federica Lenzi, Roberto Lagalla, Salvatore Vitabile, Edoardo Ardizzone
“Efficient Image Segmentation in Graphs with Localized Curvilinear Features” by Hans Ccacyahuillca Bejar, Fábio Cappabianco, Paulo Vechiatto de Miranda
“Synchronization in the Symmetric Inverse Semigroup” by Federica Arrigoni, Eleonora Maset, Andrea Fusiello
“No-Reference Learning-based and Human Visual-based Image Quality Assessment Metric” by Christophe Charrier, Abdelhakim Saadane, Christine Fernandez-Maloigne
17:00 – 18:00 Invited Speaker: Alain Tremeau, University Jean Monnet, FR
Title: “Toward scene understanding: color perception versus 3D computer vision”
18:00 – 20:00 GIRPR Meeting
20:30 Gala Dinner at Palazzo Biscari

Friday 15 September

09:00 – 10:00 Invited Speaker: Fernando Pérez-González, University of Vigo, ES
Title “Backstabbing Image Forensics”
10:00 – 11:00 Oral Session 6: Forensics (Chair Gian Luca Foresti)
“Identity documents classification as an image classification problem” by Ronan Sicre, Montaser Awal, Teddy Furon
“Using LDP-TOP in Video-Based Spoofing Detection” by Quoc-Tin Phan, Duc-Tien Dang-Nguyen, Giulia Boato, Francesco De Natale
“PRNU-based forgery localization in a blind scenario” by Davide Cozzolino, Francesco Marra, Giovanni Poggi, Carlo Sansone, Luisa Verdoliva
11:00 – 11:30 Coffee Break
11:30 – 12:50 Oral Session 7: Automotive (Chair Marco La Cascia)
“Learning to Map Vehicles into Bird’s Eye View” by Andrea Palazzi, Guido Borghi, Davide Abati, Simone Calderara, Rita Cucchiara
“Semi-Automatic Training of a Vehicle Make and Model Recognition System” by Matthijs Zwemer, Guido Brouwers, Rob Wijnhoven, Peter de With
“Analysis of the Discriminative Generalized Hough Transform for Pedestrian Detection” by Eric Gabriel, Hauke Schramm, Carsten Meyer
“Dynamic 3D Scene Reconstruction and Enhancement” by Cansen Jiang, Yohan Fougerolle, David Fofi, Cedric Demonceaux
12:50 – 15:00 Lunch
15:00 – 17:00 Interactive Session 3 & Coffee Break:
“Enhanced Bags of Visual Words Representation Using Spatial Information” by Lotfi Abdi, Rahma Kalboussi, Aref Meddeb
“Product Recognition in Store Shelves As a Sub-Graph Isomorphism Problem” by Alessio Tonioni, Luigi Di Stefano
“A Proposal of Objective Evaluation Measures Based on Eye-Contact and Face to Face Conversation for Videophone” by Keiko Masuda, Ryuhei Hishiki, Seiichiro Hangai
“Segmentation of green areas using bivariate histograms based in Hue-Saturation type color spaces” by Luis Morales-Hernandez, Gilberto Alvarado-Robles, Ivan Terol-Villalobos, Marco Garduño-Ramon
“Demographic Classification Using Skin RGB Albedo Image Analysis” by Wei Chen, Miguel Viana, Mohsen Ardabilian, Abdelmalek Zine
“Deep Passenger State Monitoring using Viewpoint Warping” by Ian Tu, Abhir Bhalerao, Nathan Griffiths, Mauricio Muñoz, Thomas Popham, Alex Mouzakitis
“Computer Aided Diagnosis of Pleural Effusion in Tuberculosis Chest Radiographs” by Utkarsh Sharma, Brejesh Lall
“Tampering detection and localization in images from social networks: A CBIR approach” by Cédric Maigrot, Ewa Kijak, Ronan Sicre, Vincent Claveau
“Exploiting Visual Saliency Algorithms for Object-Based Attention: a New Color and Scale-Based Approach” by Edoardo Ardizzone, Alessandro Bruno, Francesco Gugliuzza
“Design of a Classification Strategy for Light Microscopy Images of the Human Liver” by Luigi Cinque, A. De Santis, P. Di Giamberardino, D. Iacoviello, Giuseppe Placidi, Matteo Spezialetti, Antonella Vetuschi, Simona Pompili, Roberta Sferra
“Multi-branch CNN for multi-scale age estimation” by Marco Del Coco, Pierluigi Carcagni, Marco Leo, Paolo Spagnolo, Pier Luigi Mazzeo, Cosimo Distante
“Food Recognition using Fusion of Classifiers based on CNNs” by Eduardo Aguilar, Marc Bolaños, Petia Radeva
“Object Detection for Crime Scene Evidence Analysis using Deep Learning” by Surajit Saikia, Eduardo Fidalgo, Enrique Alegre, Laura Fernandez-Robles
“Gender and Expression Analysis Based on Semantic Face Segmentation” by Pierangelo Migliorati, Khalil Khan, Riccardo Leonardi, Massimo Mauro
“Perceptual-based Color Quantization” by Giuliana Ramella, Vittoria Bruni, Domenico Vitulano
“Towards automatic skin tone classification in facial images” by Diana Borza, Sergiu Nistor, Adrian Darabant
“Retinal Vessel Segmentation through Denoising and Mathematical Morphology” by Benedetta Savelli, Agnese Marchesi, Alessandro Bria, Claudio Marrocco, Mario Molinara, Francesco Tortorella
“Real-Time Incremental and Geo-Referenced Mosaicking by Small-Scale UAVs” by Danilo Avola, Gian Luca Foresti, Niki Martinel, Christian Micheloni, Daniele Pannone, Claudio Piciarelli
“Spatial Enhancement by Dehazing for Detection of Microcalcifications” by Alessandro Bria, Claudio Marrocco, Adrian Galdran, Aurélio Campilho, Agnese Marchesi, Jan-Jurre Mordang, Nico Karssemeijer, Mario Molinara, Francesco Tortorella
“Historical Handwritten Text Images Word Spotting through Sliding Window HOG Features” by Federico Bolelli, Guido Borghi, Costantino Grana
“Towards Detecting High-Uptake Lesions from Lung CT Scans Using Deep Learning” by Krzysztof Pawełczyk, Michal Kawulok, Jakub Nalepa, Michael Hayball, Sarah McQuaid, Vineet Prakash, Balaji Ganeshan
“Mine detection based on adaboost and polynomial image decomposition” by Redouane El Moubtahij, Djamal Merad, Jean-luc Damoiseaux, Pierre Drap
“Automatic Detection of Subretinal Fluid and Cyst in Retinal Images” by Melinda Katona, Attila Kovács, Rózsa Dégi, László G. Nyúl
“Embedded Real-time Visual Search with Visual Distance Estimation” by Marco Paracchini, Emanuele Plebani, Mehdi Ben Iche, Danilo Pau, Marco Marcon
“3D Face Recognition in Continuous Spaces” by Francisco Josè Silva Mata, Elaine Grenot Castellanos, Alfredo Munoz Briseno, Isneri Talavera Bustamante, Stefano Berretti
“Face Recognition with Single Training Sample per Subject” by Taher Khadhraoui, Hamid Amiri
“Two More Strategies to Speed Up Connected Components Labeling Algorithms” by Federico Bolelli, Michele Cancilla, Costantino Grana
“A Convexity Measure for Gray-Scale Images Based on hv-Convexity” by Peter Bodnar, László G. Nyúl, Peter Balazs
“A Computer Vision System for the Automatic Inventory of a Cooler” by Marco Fiorucci, Marco Fratton, Tinsae Dulecha, Marcello Pelillo, Alberto Pravato, Alessandro Roncato
“Improving face recognition in low quality video sequences: single frame vs multi-frame super-resolution” by Andrea Apicella, Francesco Isgrò, Daniel Riccio
“Performance Evaluation of Multiscale Covariance Descriptor in Underwater Object Detection” by Farah Rekik, Walid Ayedi, Mohamed Jallouli
“A lightweight Mamdani Fuzzy Controller for noise removal on iris images” by Andrea Abate, Silvio Barra, Gianni Fenu, Michele Nappi, Fabio Narducci
“Incremental Support Vector Machine on Fingerprint Presentation Attack Detection updating” by Pierluigi Tuveri, Gian Luca Marcialis, Mikel Zurutuza
“Exploiting spatial context in nonlinear mapping of hyperspectral image data” by Evgeny Myasnikov
“Bubble Shape Identification and Calculation in Gas-Liquid Slug Flow Using Semi-Automatic Image Segmentation” by Mauren Andrade, Lucia Valeria Arruda, Eduardo Dos Santos, Daniel Pipa
“MR Brain Tissue Segmentation based on Clustering Techniques and Neural Network” by Hayat Al-Dmour, Ahmed Al-Ani
“A Classification Engine for Image Ballistics of Social Data” by Oliver Giudice, Sebastiano Battiato, Antonino Paratore, Marco Moltisanti
17:00 – 18:00 ICIAP Awards & Farewell Greetings

Special Sessions

Title Imaging Solutions for Improving the Quality of Life (I-LIFE’17)
Organizers Dan Popescu, Loretta Ichim
Description The session aims to underline the connection between complex image processing and the improvement of quality of life. This is an important challenge of modern life, which requires interdisciplinary knowledge and touches many problems encountered in different domains: computer science, medicine, biology, psychology, social policy, agriculture, food and nutrition, etc. This special session at the 19th International Conference on Image Analysis and Processing (ICIAP2017) provides a forum for researchers and practitioners to present and discuss advances in the research, development and application of intelligent systems for complex image processing and interpretation, aimed at improving the quality of life of persons with disabilities and assisted persons, or at detecting and diagnosing possible diseases in otherwise healthy persons.
The use of innovative techniques and algorithms in applications such as image processing and interpretation for human behavior analysis and medical diagnosis leads to increased life expectancy, wellbeing and independence of people with disabilities, and to the improvement of ambient/active assisted living (AAL) services. For example: image interpretation for earlier detection of chronic depression can help to prevent severe diseases; patient-centric radiation oncology imaging provides more efficient and personalized cancer care; new methods assist the visually impaired (transforming visual information into alternative sensory information, or maximizing the residual vision through magnification); eye vasculature and disease analysis can be based on image processing software; medical robots can be controlled by images; and so on. Other factors that influence the quality of life concern food analysis and pollution prevention. Computer vision exceeds human ability in: real-time inspection of food quality (outside the visible spectrum and with long-term continuous operation); food sorting and defect detection based on color, texture, size and shape; chemical analysis through hyperspectral or multispectral imaging; and image processing in agriculture (robotics, chemical analysis, detecting pests, etc.). The quality of life is also affected by: air pollution detection (dust particle detection from ground and remote images, density of air pollutants); and waste detection and management based on the interpretation of aerial images. In the case of disasters like floods, earthquakes, fires or radiation, image interpretation from different sources (ground, air and space) can be successfully used for improving and saving lives (prevention, monitoring and rescue).
Topics include, but are not limited to:

  • Criteria for efficient feature selection depending on application
  • Image processing from multiple sources based on neural networks
  • Medical diagnosis based on complex image processing
  • New approaches for gesture recognition and interpretation
  • Assistive technologies based on image processing
  • Understanding of indoor complexity for persons with disabilities
  • Ambient monitoring based on image processing
  • Image processing for quality inspection in the food industry
  • Image processing for precision and eco-agriculture
  • Image processing for flood prevention and evaluation

Sponsors

GIRPR, ICTLab, IPLab, ST, Micron, Springer

 

Endorsers

IAPR (International Association for Pattern Recognition)

Organization

General Chairs

Sebastiano Battiato – University of Catania, Italy

Giovanni Gallo – University of Catania, Italy

Program Chairs

Raimondo Schettini, University of Milano-Bicocca, Italy

Filippo Stanco, University of Catania, Italy

Workshop Chairs

Giovanni Maria Farinella, University of Catania, Italy

Marco Leo, ISASI – CNR Lecce, Italy

Tutorial Chairs

Gian Luca Marcialis, University of Cagliari, Italy

Giovanni Puglisi, University of Cagliari, Italy

Special Session Chairs

Carlo Sansone, University of Naples Federico II, Italy

Cesare Valenti, University of Palermo, Italy

Industrial and Demo Chairs

Cosimo Distante, ISASI – CNR Lecce, Italy

Michele Nappi, University of Salerno, Italy

Publicity Chairs

Antonino Furnari, University of Catania, Italy

Orazio Gambino, University of Palermo, Italy

Video Proceedings Chair

Concetto Spampinato, University of Catania, Italy

US Liaison Chair

Francisco Imai, Canon US Inc, United States

Asia Liaison Chair

Lei Zhang, The Hong Kong Polytechnic University, Hong Kong

 

Steering Committee

Virginio Cantoni, University of Pavia, Italy

Luigi Pietro Cordella, University of Napoli Federico II, Italy

Rita Cucchiara, University of Modena and Reggio Emilia, Italy

Alberto Del Bimbo, University of Firenze, Italy

Marco Ferretti, University of Pavia, Italy

Fabio Roli, University of Cagliari, Italy

Gabriella Sanniti di Baja, ICAR-CNR, Italy

Area Chairs

Video Analysis and Understanding

François Brémond, INRIA, France

Andrea Cavallaro, Queen Mary University of London, England

Pattern Recognition and Machine Learning

Dima Damen, University of Bristol, England

Vittorio Murino, Italian Institute of Technology (IIT), Italy

Multiview Geometry and 3D Computer Vision

Andrea Fusiello, DPIA – Università degli Studi di Udine, Italy

David Fofi, University of Burgundy, France

Image Analysis, Detection and Recognition

Edoardo Ardizzone, University of Palermo, Italy

M. Emre Celebi, University of Central Arkansas, United States

Multimedia

Costantino Grana, University of Modena and Reggio Emilia, Italy

Biomedical and Assistive Technology

Domenico Tegolo, University of Palermo, Italy

Sotirios Tsaftaris, University of Edinburgh, Scotland

Information Forensics and Security

Stefano Tubaro, Polytechnic University of Milan, Italy

Zeno Geradts, University of Amsterdam, Netherlands

Imaging for Cultural Heritage and Archaeology

Matteo Dellepiane, ISTI-CNR, Italy

Herbert Maschner, CVAST – University of South Florida, United States