Main Conference

Coming soon…

Tutorials

Title Quantitative imaging in monitoring response to treatment: Challenges and opportunities
Speaker Habib Zaidi, Ph.D.
Abstract This talk reflects the tremendous increase in interest in molecular and dual-modality imaging (PET/CT, SPECT/CT and PET/MRI) as both clinical and research imaging modalities over the past decade. An overview of molecular multi-modality medical imaging instrumentation, as well as simulation, reconstruction, quantification and related image processing issues, with special emphasis on quantitative analysis of nuclear medicine images, is presented. This tutorial aims to bring to the biomedical image processing community a review of the state-of-the-art algorithms used and under development for accurate quantitative analysis in multimodality and multiparametric molecular imaging, and of their validation, mainly from the developer's perspective, with emphasis on image reconstruction and analysis techniques. It will inform the audience about a series of advanced developments recently carried out at the PET Instrumentation & Neuroimaging Lab of Geneva University Hospital and other active research groups. Current and prospective future applications of quantitative molecular imaging are also addressed, especially its use prior to therapy for dose distribution modelling and optimisation of treatment volumes in external radiation therapy, and for patient-specific 3D dosimetry in targeted therapy, towards the concept of image-guided radiation therapy.
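To make "quantitative analysis" concrete for readers outside nuclear medicine, the sketch below computes the standardized uptake value (SUV), the basic semi-quantitative metric of PET. This is an editor's illustration, not material from the talk; the function name and the example numbers are assumptions.

```python
# Illustrative sketch only: the standardized uptake value (SUV) normalizes
# the measured tissue activity concentration by the injected dose per unit
# body weight. Names, units and example values are assumptions.

def suv(activity_kbq_per_ml: float,
        injected_dose_mbq: float,
        body_weight_kg: float) -> float:
    """SUV = tissue activity concentration / (injected dose / body weight)."""
    # Convert dose to kBq and weight to grams so the ratio is dimensionless
    # (1 g of tissue is assumed to occupy ~1 ml).
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0
    return activity_kbq_per_ml / (dose_kbq / weight_g)

# Example: 5 kBq/ml uptake after a 400 MBq injection in an 80 kg patient.
print(suv(5.0, 400.0, 80.0))  # 1.0
```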

 

Title Virtual Cell Imaging (methods and principles)
Speaker David Svoboda
Abstract The interdisciplinary research connecting pure image processing with pure biology/medicine brings many challenging tasks. These tasks are highly practically oriented, and their solutions have a direct impact on the development of disease treatments or drugs, for example. This talk is aimed at students and researchers who plan to join application-oriented research groups where segmentation or tracking methods for the analysis of fixed or living cells are developed or utilized. Attendees of this tutorial will not only be able to use the commonly available simulation toolkits, and the benchmark image data these toolkits produce, to verify the accuracy of an image analysis method under inspection. They will also understand the principles of these simulation frameworks and will be able to design and implement their own toolkits hand-tailored to their own data.
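As a minimal illustration of this simulate-then-validate loop (the editor's sketch, not the tutorial's toolkit), the snippet below generates a synthetic cell with a known ground-truth mask, degrades it with blur and noise, segments it naively, and scores the result with the Dice coefficient.

```python
# Minimal sketch (not the tutorial's toolkit): simulate one synthetic "cell"
# with known ground truth, degrade it, segment it naively, measure accuracy.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Ground truth: a disc-shaped cell on a 128x128 canvas.
yy, xx = np.mgrid[:128, :128]
truth = ((yy - 64) ** 2 + (xx - 64) ** 2) < 30 ** 2

# Synthetic image: blur (a crude point-spread function) plus sensor noise.
image = gaussian_filter(truth.astype(float), sigma=2.0)
image += rng.normal(scale=0.1, size=image.shape)

# Naive segmentation to be validated against the known truth.
segmented = image > 0.5

# Dice coefficient: 2|A∩B| / (|A|+|B|); 1.0 means perfect agreement.
dice = 2 * np.logical_and(truth, segmented).sum() / (truth.sum() + segmented.sum())
print(f"Dice = {dice:.3f}")
```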

Workshops

Title First International Workshop on Brain-Inspired Computer Vision (WBICV2017)
Organizers  George Azzopardi, Laura Fernández-Robles, Antonio Rodríguez-Sánchez
Web page  http://wbicv2017.ai.edu.mt/
Description  Human visual perception is a complex process performed by various elements of the visual system of the brain. This remarkable unit of the brain has been used as a source of inspiration for developing algorithms for computer vision tasks such as finding objects, analysing motion, identifying or detecting instances, reconstructing scenes and restoring images. One of the most challenging goals in computer vision is, therefore, to design and develop algorithms that can process visual information as humans do.

The main aim of WBICV2017 is to bring together researchers from the diverse fields of computer science (pattern recognition, machine learning, artificial intelligence, high performance computing and visualisation) with researchers in visual perception and visual psychophysics who aim to model different phenomena of the visual system of the brain. We look forward to discussing the current and next generation of brain system modelling for a wide range of vision-related applications. This workshop aims to showcase powerful, innovative and modern image analysis algorithms and tools inspired by the function and biology of the visual system of the brain.

The researchers will present their latest progress and discuss novel ideas in the field. Besides the technologies used, emphasis will be given to precise problem definition, the available benchmark databases, and the need for evaluation protocols and procedures in the context of brain-inspired computer vision methods and applications.

Papers are solicited in, but not limited to, the following TOPICS:

  • Mathematical models of visual perception
  • Brain-inspired algorithms
  • Learning: deep learning, recurrent networks, differentiable neural computers, sparse coding
  • The appearance of neuronal properties: sparsity and selectivity
  • Circuitry: hierarchical representations and connections between layers
  • Selecting where to look: saliency, attention and active vision
  • Hierarchy of visual cortex areas
  • Feedforward, feedback and inhibitory mechanisms
  • Applications: object recognition, object tracking, medical image analysis, contour detection and segmentation
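As a concrete instance of the first two topics above (an editor's illustration, not a workshop contribution), the sketch below builds a small Gabor filter bank, the classical model of orientation-selective simple cells in primary visual cortex, and applies it to a toy edge image. All parameter values are illustrative assumptions.

```python
# Sketch: a small Gabor filter bank, the classical model of orientation-tuned
# V1 simple cells, applied to an image to produce orientation responses.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, sigma=3.0, wavelength=8.0, size=21):
    """Real part of a Gabor filter oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A toy image: vertical step edge.
image = np.zeros((64, 64))
image[:, 32:] = 1.0

# Filter at four orientations; the strongest response indicates orientation.
for theta in np.deg2rad([0, 45, 90, 135]):
    response = convolve2d(image, gabor_kernel(theta), mode="same")
    print(f"theta={np.rad2deg(theta):5.1f}  max|response|={np.abs(response).max():.2f}")
```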

 

Title Third International Workshop on Multimedia Assisted Dietary Management (MADiMa 2017)
Organizers  Stavroula Mougiakakou, Giovanni Maria Farinella, Keiji Yanai
Web page www.madima.org/
Description The prevention of the onset and progression of diet-related acute and chronic diseases (e.g. diabetes, obesity, cardiovascular diseases and cancer) requires reliable and intuitive dietary management. The need for accurate, automatic, real-time and personalized dietary advice has recently been complemented by advances in computer vision and smartphone technologies, permitting the development of the first mobile food multimedia content analysis applications. The proposed solutions rely on the analysis of multimedia content captured by wearable sensors, smartphone cameras, barcode scanners, RFID readers and IR sensors, along with already established nutritional databases, and often require some user input. In the field of nutritional management, multimedia not only bridges diverse information and communication technologies, but also connects computer science with medicine, nutrition and dietetics. This confluence brings new challenges and opportunities in dietary management.

MADiMa2017 aims to bring together researchers from the diverse fields of engineering, computer science and nutrition who investigate the use of information and communication technologies for better monitoring and management of food intake. The combined use of multimedia, machine learning algorithms, ubiquitous computing and mobile technologies permits the development of applications and systems able to monitor dietary behavior, analyze food intake, identify eating patterns and provide feedback to the user towards healthier nutrition. The researchers will present their latest progress and discuss novel ideas in the field. Besides the technologies used, emphasis will be given to precise problem definition, the available nutritional databases, the need for benchmark multimedia databases of packed and unpacked food, and the evaluation protocols.

Topics of interest include (but are not limited to) the following:

  • Ubiquitous and mobile computing for dietary assessment
  • Computer vision for food detection, segmentation and recognition
  • 3D reconstruction for food portion estimation
  • Augmented reality for food portion estimation
  • Wearable sensors for food intake detection
  • Computerized food composition (nutrients, allergens) analysis
  • Multimedia technologies for eating monitoring
  • Smartphone technologies for dietary behavioral patterns
  • Deep learning for food analysis
  • Food images and social media
  • Food multimedia databases
  • Evaluation protocols of dietary management systems
  • Multimedia assisted self-management of health and disease
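To make the portion-estimation topics above concrete (an editor's sketch under stated assumptions, not a MADiMa system), the snippet below integrates a per-pixel food height map over a segmentation mask to get a volume, then converts it to calories using an assumed density and energy value for cooked rice.

```python
# Editor's sketch (not a MADiMa method): estimate a food portion's calories
# from a segmentation mask and a per-pixel height map over a known area.
import numpy as np

def estimate_kcal(mask, height_mm, pixel_area_mm2, density_g_per_ml, kcal_per_g):
    """Integrate height over the segmented region to get volume, then calories."""
    volume_mm3 = (height_mm * mask).sum() * pixel_area_mm2    # sum of columns
    volume_ml = volume_mm3 / 1000.0                           # 1 ml = 1000 mm^3
    grams = volume_ml * density_g_per_ml
    return grams * kcal_per_g

# Toy example: a 100x100-pixel image, 10 mm of rice inside a 60x60 mask.
mask = np.zeros((100, 100)); mask[20:80, 20:80] = 1
height = np.full((100, 100), 10.0)
# Assumed: 1 px covers 1 mm^2; cooked rice ~1.0 g/ml and ~1.3 kcal/g.
print(f"{estimate_kcal(mask, height, 1.0, 1.0, 1.3):.0f} kcal")  # ~47 kcal
```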

 

Title Social Signal Processing and Beyond (SSPandBE 2017)
Organizers  Mariella Dimiccoli, Petia Ivanova Radeva, Marco Cristani
Web page (coming soon)
Description Social Signal Processing is the domain aimed at studying social behavior, in particular its nonverbal aspects, comprehensively covering the aspects of analysis, synthesis and modeling.

To date, the field has focused on face-to-face interactions, where it is possible to use the whole range of nonverbal cues that people utilize to communicate. This scenario has triggered several advancements in computer vision, in order to capture subtle signals coming from gestures, facial expressions, vocalizations and other explicit and implicit communication means, as well as novel machine learning and pattern recognition strategies to embed those signals into behavioral models.

However, increasingly more interactions take place in unconstrained scenarios. In particular, groups and crowds, which have been the focus of many surveillance studies so far, can be analyzed following a social signal processing direction. This amounts to exploiting proxemic and dynamic cues, which can indicate group/crowd membership, the intention to join or leave a group, and the tendency of being social, antisocial or suspect, bringing new instruments to the surveillance field, for example.
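As a minimal illustration of such proxemic cues (the editor's sketch, not a workshop method), the snippet below treats detected people as points on the ground plane and links those closer than a social-distance threshold; connected components of the resulting graph are candidate groups. The 1.2 m threshold is an assumption loosely based on Hall's proxemic zones.

```python
# Editor's sketch: group detection from proxemic cues alone. People standing
# closer than a threshold are linked; connected components are candidate
# groups. The 1.2 m value (edge of Hall's "personal space") is an assumption.
import numpy as np

def groups(positions, threshold_m=1.2):
    """Union-find over pairwise ground-plane distances below threshold."""
    n = len(positions)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < threshold_m:
                parent[find(i)] = find(j)   # merge the two clusters

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Four people: a chatting trio chained by proximity, and one loner.
pts = np.array([[0.0, 0.0], [0.8, 0.1], [1.5, 0.0], [8.0, 8.0]])
print(groups(pts))  # [[0, 1, 2], [3]]
```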

A particular kind of unconstrained scenario is everyday interactions captured by wearable cameras. Especially when worn on the head, a wearable camera is naturally suited to capture social interactions from the point of view of a person who is actually involved in them and who typically moves to secure the best view of what he/she is interested in. These advantages can be exploited to learn the social perceptual behavior of both the wearer and the other interacting people, and in turn to build socially intelligent agents and social assistive technologies.

Another novel type of scenario is the analysis of how the environment may trigger particular types of behaviors and facilitate or impede social interaction. For example, the analysis of the type of architecture (the height of the ceiling, the geometry of the room, the materials of the room) is a brand new territory that more and more researchers are starting to account for. Another connected theme is the use of illumination as a crucial factor in social interaction: a simple example is that of public luminaires, which can dramatically shift the behavior of users into totally different behaviors (social/antisocial). This is the core of the SCENEUNDERLIGHT H2020 project (MSCA-ITN-EID, European Industrial Doctorates), which shows how such an advanced research topic is of interest to one of the biggest companies in intelligent lighting, namely OSRAM. The workshop will be co-sponsored by the SCENEUNDERLIGHT project.

But social signals are not only present in “real” social situations; they also appear in “virtual” settings, i.e., on the Internet. Nowadays, people can interact with virtually anybody at virtually any moment: according to the latest statistics (http://wearesocial.net/tag/sdmw/), Internet users worldwide now number three billion (roughly 40% of the population), with a total of over two billion active social media accounts (29% of the world population). These social interactions take place through communication technologies that limit the use of nonverbal cues (e.g., a videoconference usually displays facial cues only) or require the adoption of cues in a way that does not belong to the natural repertoire of human exchange (e.g., in a videoconference I may see my own face while I am talking). This opens a new frontier for Social Signal Processing, where the main questions are whether people still exchange social signals and, if so, what image analysis and pattern recognition technologies are effective in this novel domain.

For these reasons, SSPandBE 2017 is ideally suited for a premier computer vision conference such as ICIAP 2017. In addition, being the sole international workshop dedicated to the subject, it will offer a unique opportunity for researchers to discuss the problem from diverse perspectives and report innovative concepts and solutions.

 

Title Natural human-computer interaction and ecological perception in immersive virtual and augmented reality (NIVAR2017)
Organizers Manuela Chessa, Fabio Solari, Jean-Pierre Bresciani
Web page nivar2017.wordpress.com
Description Given the recent spread of technologies, devices, systems and models for immersive virtual reality (VR) and augmented reality (AR), which are now effectively employed in various fields of application, an emerging issue is how interaction occurs in such systems. In particular, a key problem is achieving natural and ecological interaction with the devices typically used for immersive VR and AR, i.e. interacting with them by using the same strategies, and eliciting the same perceptual responses, as when interacting in the real world. This is particularly important when VR and AR systems are used in assistive contexts, e.g. targeting elderly or disabled people, or for cognitive and physical rehabilitation, but also to prevent and mitigate visual fatigue and cybersickness in healthy people.
The main scope of this workshop is to bring together researchers and practitioners from both academia and industry who are interested in studying and developing innovative solutions for achieving natural human-computer interaction and ecological perception in VR and AR systems. Technical topics of interest include (but are not limited to):

  • Natural human-computer interaction in virtual/augmented/mixed reality environments.
  • Ecological validity of virtual/augmented/mixed reality systems and/or human-computer interaction.
  • Hand/face/body recognition and tracking for human-computer interaction.
  • Action and activity recognition for human-computer interaction.
  • Vision neuroscience for human-computer interaction.
  • Eye-tracking for human-computer interaction.
  • Computational vision models.
  • Depth (from stereo and/or other cues) and motion (also self-motion) perception in virtual/augmented/mixed reality environments.
  • Rendering in virtual/augmented/mixed reality environments.
  • Misperception issues and undesired effects in visualization devices (e.g., 3D displays, head-mounted displays).
  • Applications based on displays (also S3D), smartphones, tablets, head-mounted displays.
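For the depth-from-stereo topic above, depth in a rectified stereo rig follows directly from triangulation: Z = f·B/d, with focal length f (in pixels), baseline B and disparity d. The sketch below is an editor's illustration of that relation; the 65 mm baseline (roughly human interocular distance) and 800 px focal length are assumptions.

```python
# Editor's illustration of depth from stereo: for a rectified pair,
# depth Z = f * B / d (triangulation), with f in pixels, B in meters,
# and disparity d in pixels. Parameter values are assumptions.
import numpy as np

def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.065):
    """Depth in meters; zero disparity (a point at infinity) maps to inf."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return focal_px * baseline_m / d

# With f = 800 px and B = 65 mm, a 26 px disparity means a point 2 m away.
print(depth_from_disparity([26.0, 52.0, 0.0]))  # [2.0, 1.0, inf]
```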

 

Title Automatic affect analysis and synthesis
Organizers Nadia Berthouze, Simone Bianco, Giuseppe Boccignone, Paolo Napoletano
Web page  http://www.ivl.disco.unimib.it/w3as/
Description Affective computing is a research field that tries to endow machines with the capability to recognize, interpret and express emotions. On the one hand, the ability to automatically deal with human emotions is crucial in many human-computer interaction applications. On the other hand, people express affect through a complex series of actions relating to facial expressions, body movements, gestures and voice prosody, accompanied by a variety of physiological signals, such as heart rate and sweating.

Thus, the goals set by affective computing involve a number of challenging issues concerning how systems should be conceived, built, validated and compared.

In this perspective, we are soliciting original contributions that address a wide range of theoretical and practical issues including, but not limited to:

  • Facial expression analysis and synthesis;
  • Body gesture and movement recognition;
  • Emotional speech processing;
  • Heart rate monitoring from videos;
  • Emotion analysis from physiological signals;
  • Multimodal affective computing;
  • Affect understanding and synthesis;
  • Computational visual aesthetics;
  • Recognition of group emotion;
  • Tools and methods of annotation for provision of emotional corpora;
  • Affective applications: medical, assistive, virtual reality, entertainment, ambient intelligence, multimodal interfaces.
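One topic above, heart rate monitoring from videos, has a compact core idea: skin color fluctuates slightly with blood volume. The sketch below is an editor's simplification of remote photoplethysmography, not a method of the organizers; it recovers pulse rate from per-frame mean green-channel intensities via the dominant frequency in a plausible heart-rate band.

```python
# Editor's simplified remote photoplethysmography (rPPG) sketch: the mean
# green value of a face region fluctuates with blood volume; the dominant
# frequency in the 0.7-4 Hz band (42-240 bpm) estimates heart rate.
import numpy as np

def heart_rate_bpm(green_means, fps):
    """Estimate pulse from a 1-D trace of per-frame mean green intensities."""
    signal = green_means - np.mean(green_means)      # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)           # plausible pulse band
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 10 s clip at 30 fps with a 1.2 Hz (72 bpm) pulse plus noise.
t = np.arange(300) / 30.0
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.default_rng(0).normal(0, 0.2, 300)
print(f"{heart_rate_bpm(trace, fps=30.0):.0f} bpm")  # ~72
```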

Selected papers of the workshop will be invited to be extended for a special issue in a leading international journal.

Title International Workshop on Biometrics as-a-service: cloud-based technology, systems and applications
Organizers Silvio Barra, Arcangelo Castiglione, Kim-Kwang Raymond Choo, Fabio Narducci
Web page http://www.biplab.unisa.it/iwbaas/
Description Cloud-based biometrics is a relatively new topic, and solutions by emerging companies, e.g., BioID, ImageWare Systems, Animetrics and IriTech, further confirm the expectations of its rapid growth. Biometrics-as-a-service has the same benefits as any other cloud-based service: it is cost-effective, scalable, reliable and hardware agnostic, making enhanced security accessible anytime and anywhere. However, legal and privacy issues vary from country to country, limiting the progress of this branch of research on cloud computing. We therefore expect that contributions could also shed light on these less explored aspects.

Nowadays, the massive spread of cloud-based systems is leading service providers to offer their users more advanced access protocols, which may overcome the limitations and weaknesses of traditional alphanumeric passwords. Experts all over the world are pushing for cloud-based biometric systems, which are expected to be one of the research frontiers of the coming years. Biometric credentials are difficult to steal and do not need to be remembered, making them suitable for on-the-move authentication scenarios typical of the current mobile age. On the other hand, the remote storage of a biometric trait on the cloud is prone to function creep, i.e. the gradual widening of the use of a technology or system beyond the purpose for which it was originally intended. Legal and security issues related to the abuse and misuse of a biometric trait obstruct the rapid and widespread diffusion of such practice.
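A standard mitigation for the function-creep risk just described is the cancelable template. The sketch below is an editor's illustration of random-projection template protection, not a technique endorsed by the workshop; the dimensions, threshold and key handling are assumptions. The cloud stores only a key-dependent random projection of the biometric feature vector, so a leaked template can be revoked by changing the key.

```python
# Editor's sketch of a cancelable biometric template (random projection):
# the cloud stores only P @ x, where P is a secret, user-specific random
# matrix derived from a key. If the template leaks, the key is replaced and
# re-enrollment yields a fresh template; the raw biometric never leaves
# the client. Dimensions and the match threshold are assumptions.
import numpy as np

rng = np.random.default_rng(42)

def enroll(feature_vec, key_seed, out_dim=64):
    """Project the biometric feature vector with a key-derived random matrix."""
    proj = np.random.default_rng(key_seed).normal(size=(out_dim, len(feature_vec)))
    return proj @ feature_vec

def verify(stored, probe_vec, key_seed, threshold=0.9):
    """Match in the projected domain via cosine similarity."""
    probe = enroll(probe_vec, key_seed)
    cos = probe @ stored / (np.linalg.norm(probe) * np.linalg.norm(stored))
    return cos >= threshold

x = rng.normal(size=256)                      # enrolled biometric features
noisy = x + rng.normal(scale=0.1, size=256)   # same user, fresh capture
template = enroll(x, key_seed=1234)
print(verify(template, noisy, key_seed=1234))  # True: same user, same key
print(verify(template, noisy, key_seed=9999))  # False: revoked/wrong key
```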

The objective of IW-BAAS is to capture the latest advances in this research field, soliciting papers and ideas on cloud-based biometric systems and services. Contributions on technical, legal, professional and ethical aspects related to the use of biometrics in cloud environments are also encouraged.

Topics of interest include, but are not limited to, the following:

  • Cloud-based Architectures for Biometric Systems;
  • Cloud-based Communication Protocols for Biometric Systems;
  • Biometric Security and Privacy Policy;
  • Ethical, legal, cultural and regulatory factors;
  • Biometric Storage in the Cloud;
  • Biometric Access Control of Cloud Data;
  • Mobile Biometrics and Cloud Computing;
  • Liveness/Spoofing Detection for Cloud Applications;
  • Biometric Cryptography;
  • Biometric Encryption in Cloud computing;
  • Biometric Fusion in the Cloud;
  • Smart spaces and Ambient Intelligence Environments;
  • Biometric representation suitable for the Cloud.

A special issue of IEEE Cloud Computing (pending) will be devoted to the workshop topics, and the best selected papers will be considered for publication as extended versions.

Please note that:

  • papers must have been presented at the conference;
  • papers should have been carefully revised and extended with at least 30% new original material.

Special Sessions

Title Imaging Solutions for Improving the Quality of Life (I-LIFE’17)
Organizers Dan Popescu, Loretta Ichim
Description The session aims to underline the connection between complex image processing and improving the quality of life. This is an important challenge of modern life, which requires interdisciplinary knowledge and effectively solves problems encountered in different domains: computer science, medicine, biology, psychology, social policy, agriculture, food and nutrition, etc. This special session at the 19th International Conference on Image Analysis and Processing (ICIAP2017) provides a forum for researchers and practitioners to present and discuss advances in the research, development and application of intelligent systems for complex image processing and interpretation, aimed at improving the quality of life of persons with disabilities and assisted persons, and at detecting and diagnosing possible diseases in healthy persons.
The use of innovative techniques and algorithms in applications such as image processing and interpretation for human behavior analysis and medical diagnosis leads to increased life expectancy and wellbeing, greater independence of people with disabilities, and improved ambient/active assisted living (AAL) services. For example: image interpretation for earlier detection of chronic depression can help to prevent severe diseases; patient-centric radiation oncology imaging provides more efficient and personalized cancer care; new methods help the visually impaired (transforming visual information into alternative sensory information, or maximizing residual vision through magnification); eye vasculature and diseases can be analyzed with image processing software; medical robots can be controlled by images; and so on. Other factors that influence the quality of life concern food analysis and pollution prevention. Computer vision exceeds human ability in: real-time inspection of food quality (outside the visible spectrum and in long-term continuous operation); food sorting and defect detection based on color, texture, size and shape; chemical analysis through hyperspectral or multispectral imaging; and image processing in agriculture (robotics, chemical analysis, pest detection, etc.). The quality of life is also determined by air pollution detection (dust particle detection from ground and remote images, airborne pollutant density) and by waste detection and management based on the interpretation of aerial images. In the case of disasters such as floods, earthquakes, fires or radiation, image interpretation from different sources (ground, air and space) can be successfully used for improving and saving lives (prevention, monitoring and rescue).
Topics include (but are not limited to):

  • Criteria for efficient feature selection depending on application
  • Image processing from multiple sources based on neural networks
  • Medical diagnosis based on complex image processing
  • New approaches for gesture recognition and interpretation
  • Assistive technologies based on image processing
  • Understanding of indoor complexity for persons with disabilities
  • Ambient monitoring based on image processing
  • Image processing for quality inspection in the food industry
  • Image processing for precision and eco agriculture
  • Image processing for flood prevention and evaluation
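As one concrete instance of the food-inspection examples above (the editor's sketch; the color rule and tolerance are illustrative assumptions, not a session result), the snippet below flags produce as defective when the fraction of brownish pixels inside its mask exceeds a tolerance.

```python
# Editor's sketch of color-based food sorting: flag a fruit as defective
# when the fraction of dark/brown pixels inside its mask exceeds a
# tolerance. The color rule and the 1% tolerance are assumptions.
import numpy as np

def is_defective(rgb, mask, tolerance=0.01):
    """rgb: HxWx3 float array in [0,1]; mask: HxW bool region of the fruit."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Crude "brown spot" rule: darkish pixels where red dominates green and blue.
    brown = (r < 0.5) & (r > g) & (g > b)
    ratio = (brown & mask).sum() / mask.sum()
    return ratio > tolerance, ratio

# Toy image: a yellow fruit surface (100x100) with a 12x12 brown bruise.
img = np.zeros((100, 100, 3)); img[...] = [0.9, 0.8, 0.1]   # yellow flesh
img[40:52, 40:52] = [0.35, 0.2, 0.05]                       # brown bruise
mask = np.ones((100, 100), dtype=bool)
print(is_defective(img, mask))  # (True, 0.0144)
```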