PLENARY TALKS

Official Opening Ceremony

Prof. Karl Goser

Title: IWANN: legacy and challenge.

 

PLENARY TALK: Deep Neural Networks for Visual Pattern Recognition

Dan Ciresan

 



GPU-optimized Deep Neural Networks (DNNs) excel at visual pattern recognition tasks. They are successfully used for automotive problems such as pedestrian and traffic sign detection. DNNs are fast and extremely accurate, making it possible for the first time to automatically segment and reconstruct the neuronal connections in large sections of brain tissue, which promises a new understanding of how biological brains work. DNNs also power the automatic navigation of a quadcopter in the forest.
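As a concrete, deliberately tiny illustration of the kind of network the abstract refers to, the sketch below defines a small convolutional classifier in PyTorch for 32x32 colour images. The layer sizes and the class count of 43 (the number of classes in the GTSRB traffic sign benchmark) are assumptions chosen for illustration, not the speaker's multi-column GPU architecture.

```python
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    """Illustrative DNN for image classification, e.g. 43 traffic-sign
    classes on 32x32 RGB inputs. Sizes are assumptions for this sketch,
    not the speaker's actual architecture."""
    def __init__(self, num_classes=43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Runs on a GPU when available, as in the GPU-optimized setting of the talk.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SmallConvNet().to(device)
logits = model(torch.randn(1, 3, 32, 32, device=device))
print(logits.shape)  # torch.Size([1, 43])
```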



Dan Ciresan
Dan Ciresan received his PhD in Computer Science from Universitatea Politehnica Timisoara, Romania, in 2008. He is a senior researcher at the Dalle Molle Institute for Artificial Intelligence (IDSIA), Switzerland. Dr. Ciresan is one of the pioneers of using graphics cards to accelerate Deep Neural Networks (DNNs). His methods have won five international competitions, on topics such as classifying traffic signs (2011), recognizing handwritten Chinese characters (2011), segmenting neuronal membranes in electron microscopy images (2012), and detecting mitosis in breast cancer histology images (2012 & 2013). Dr. Ciresan's DNNs have significantly improved the state of the art on a plethora of image classification, detection, and segmentation tasks. Similar neural network architectures are now widely used in both academia and industry.

 

PLENARY TALK: Self-reconfiguring distributed vision

Andrea Cavallaro

 



Assistive technologies, environmental monitoring, search and rescue operations, and security and entertainment applications will benefit considerably from the sensing capabilities offered by emerging networks of wireless cameras. These networks are composed of cameras that may be wearable or mounted on robotic platforms and that can autonomously sense, compute, decide, and communicate. These cameras and their vision algorithms need to adapt their hardware and algorithmic parameters in response to unknown or dynamic environments and to changes in their task(s); that is, they need to self-reconfigure. Cooperation among the cameras may lead to adaptive, task-dependent visual coverage of a scene, or to increased robustness and accuracy in object localization under varying poses or illumination conditions. In this talk I will cover challenges and current solutions in self-reconfiguring distributed vision using networks of wireless cameras. In particular, I will discuss how cameras may learn to improve their performance. Moreover, I will present recent methods that allow cameras to move and to interact locally, forming coalitions adaptively in order to provide coordinated decisions under resource and physical constraints.
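To make the idea of coalition formation under resource constraints concrete, here is a toy greedy sketch in Python. The utility/cost model, the budget, and the greedy selection rule are assumptions made purely for illustration; they are not the methods presented in the talk.

```python
def form_coalition(cameras, budget):
    """Toy greedy coalition formation: repeatedly add the camera with
    the best ratio of marginal utility to resource cost until the
    budget is exhausted. An illustrative assumption, not the talk's method.

    cameras: list of (name, utility, cost) tuples.
    """
    coalition, spent = [], 0.0
    ranked = sorted(cameras, key=lambda c: c[1] / c[2], reverse=True)
    for name, utility, cost in ranked:
        if spent + cost <= budget:
            coalition.append(name)
            spent += cost
    return coalition, spent

# Three candidate cameras observing the same target.
cams = [("cam_a", 0.9, 1.0), ("cam_b", 0.5, 0.4), ("cam_c", 0.3, 0.8)]
print(form_coalition(cams, budget=1.5))  # (['cam_b', 'cam_a'], 1.4)
```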



Andrea Cavallaro
Andrea Cavallaro is Professor of Multimedia Signal Processing and Director of the Centre for Intelligent Sensing at Queen Mary University of London, UK. He received his Ph.D. in Electrical Engineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, in 2002. He was a Research Fellow with British Telecommunications (BT) in 2004/2005 and was awarded the Royal Academy of Engineering Teaching Prize in 2007; three student paper awards on target tracking and perceptually sensitive coding at IEEE ICASSP in 2005, 2007, and 2009; and the best paper award at IEEE AVSS 2009. Prof. Cavallaro is Area Editor for the IEEE Signal Processing Magazine and Associate Editor for the IEEE Transactions on Image Processing. He is an elected member of the IEEE Signal Processing Society's Image, Video, and Multidimensional Signal Processing Technical Committee, and chair of its Awards Committee. He served as an elected member of the IEEE Signal Processing Society's Multimedia Signal Processing Technical Committee, as Associate Editor for the IEEE Transactions on Multimedia and the IEEE Transactions on Signal Processing, and as Guest Editor for seven international journals. He was General Chair for IEEE/ACM ICDSC 2009, BMVC 2009, M2SFA2 2008, SSPE 2007, and IEEE AVSS 2007, and Technical Program Chair of IEEE AVSS 2011, the European Signal Processing Conference (EUSIPCO 2008), and WIAMIS 2010. He has published more than 130 journal and conference papers, one monograph, Video Tracking (Wiley, 2011), and three edited books: Multi-Camera Networks (Elsevier, 2009); Analysis, Retrieval and Delivery of Multimedia Content (Springer, 2012); and Intelligent Multimedia Surveillance (Springer, 2013).

PLENARY TALK: The shared control paradigm for assistive and rehabilitation robots

Cristina Urdiales

 



The lack of human resources to cope with the increasing population of people with special needs has led to the use of assistive robots to help with Activities of Daily Living (ADL). Mobility assistance is particularly necessary for this population sector, because remaining autonomous is of extreme importance. Unfortunately, conventional control paradigms are not fit for most assistive robots: an excess of help may lead to frustration and loss of residual skills, just as a lack of help may lead to failure to achieve a task. Consequently, many works in the field rely on the shared control paradigm, which aims to combine user and robot commands as seamlessly as possible. This talk focuses on the different approaches to shared control and on the importance of personalization and adaptation to each individual, so as to provide the right amount of assistance in every distinct situation.
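One common, concrete instantiation of shared control is linear blending: the executed command is a weighted combination of the user's command and the robot's command, with the weight setting the level of assistance. The Python sketch below illustrates this idea; the efficiency-based weighting rule is an assumption made for illustration, not the specific method discussed in the talk.

```python
import numpy as np

def blend_commands(v_user, v_robot, alpha):
    """Linearly blend user and robot velocity commands.

    alpha in [0, 1] is the weight given to the user's command:
    alpha near 1 -> user keeps control (little assistance);
    alpha near 0 -> robot takes over (much assistance).
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return alpha * np.asarray(v_user) + (1.0 - alpha) * np.asarray(v_robot)

def assistance_weight(user_efficiency):
    """Toy adaptation rule (an assumption, not the talk's method):
    the more efficient the user's recent driving, the more authority
    they keep. user_efficiency is assumed to lie in [0, 1]."""
    return float(np.clip(user_efficiency, 0.0, 1.0))

# Example: the joystick points toward an obstacle; the robot's planner
# proposes a collision-free command, and the two are blended.
v_user = [0.8, 0.0]    # (linear, angular) velocity from the joystick
v_robot = [0.5, 0.4]   # collision-free command from the robot's planner
alpha = assistance_weight(0.7)
print(blend_commands(v_user, v_robot, alpha))  # -> [0.71, 0.12]
```

The blended command stays close to the user's intent when the user performs well, while the robot contributes just enough correction otherwise, which is the seamless combination the paradigm aims for.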



Cristina Urdiales
Cristina Urdiales received her degree in Telecommunication Engineering from UPC and holds PhD degrees in Electronics and Telecommunication Engineering and in Artificial Intelligence from UMA and UPC, respectively. She is currently an associate professor in the Department of Electronics Technology at UMA. Her research work mostly focuses on autonomous mobile robots, specifically on assistive robotics. In this field, she has received several national and international awards for her contributions to the shared autonomy paradigm. Ambient Intelligence and Augmented Reality interfaces are also among her interests.