NCCV 2016: the Netherlands Conference on Computer Vision

Sponsored by Qualcomm and ASCI

We welcome everybody interested in Computer Vision.

NCCV is the premier scientific Computer Vision conference in The Netherlands. NCCV has a lunch-to-lunch schedule with an overnight stay in order to foster interaction between students, staff, and practitioners, both during the scientific program and in the evening.

Previous editions: NCCV 2014, NCCV 2015

Content

We solicit submissions for oral, poster, or demo presentations.

Topics include, but are not limited to:

Early and Biologically-Inspired Vision
Color and Texture
Segmentation and Grouping
Computational Photography and Video
Motion and Tracking
Shape-from-X
Stereo and Structure from Motion
Image-Based Modeling
Illumination and Reflectance Modeling
Shape Representation and Matching
Object Detection, Recognition, and Categorization
Deep Learning
Video Analysis and Event Recognition
Face and Gesture Analysis
Statistical Methods and Learning
Performance Evaluation
Medical Image Analysis
Image and Video Retrieval
Vision for Graphics
Vision for Robotics
Vision for Internet
Applications of Computer Vision

This list of topics is non-exhaustive. In case of any doubt about whether your topic is appropriate for presentation at the conference, please contact the program chairs.

Monday

When  What  Who
12.00-13.15  Lunch
13.15-13.30  Opening  Organizers
13.30-13.45  Spot On: Action Localization from Pointly-Supervised Proposals  Mettes et al.
13.45-14.00  Objects2action: Classifying and localizing actions without any video example  Jain et al.
14.00-14.15  Video Stream Retrieval of Unseen Queries using Semantic Memory  Cappallo et al.
14.15-14.30  VideoLSTM Convolves, Attends and Flows for Action Recognition  Li et al.
14.30-14.40  10 minute break
14.45-15.45  Keynote talk: Towards 3D Visual Scene Understanding  Bernt Schiele
15.45-15.55  10 minute break
16.00-16.15  One-Step Time-Dependent Future Video Frame Prediction with a Convolutional Encoder-Decoder Neural Network  Vukotic et al.
16.15-16.30  3D Face Tracking for Infant Monitoring Using Dense HOG and Drift Reduction  Saeijs et al.
16.30-16.45  Who's that Actor? Automatic Labelling of Actors in TV series starting from IMDB Images  Chakravarty et al.
16.45-17.00  Self-Supervised Online Training of FCNs for Free-Space Detection  Sanberg et al.
17.00-17.10  10 minute break
17.10-18.30  Posters
18.30-20.30  Dinner
20.30-...  Social program

Tuesday

When  What  Who
08.00-08.55  Breakfast
09.00-09.15  Siamese Instance Search for Tracking  Tao et al.
09.15-09.30  Extent Pooling for Weakly Supervised Object Localization  Gudi et al.
09.30-09.45  Improving Semantic Video Segmentation by Dynamic Scene Integration  Zanjani et al.
09.45-09.55  10 minute break
10.00-11.00  Keynote talk: Beyond Traditional Mean Field  Pascal Fua
11.00-11.10  10 minute break
11.15-11.30  Evaluation and comparison of computer vision methods for early Barrett's cancer detection using volumetric laser endomicroscopy  van der Sommen et al.
11.30-11.45  Structured Receptive Fields in CNNs  Jacobsen et al.
11.45-12.00  Multimodal Popularity Prediction of Brand-related Social Media Posts  Mazloom et al.
12.00-13.00  Lunch

Posters

What Who
Extent Pooling for Weakly Supervised Object Localization Gudi et al.
3D Face Tracking for Infant Monitoring Using Dense HOG and Drift Reduction Saeijs et al.
Spot On: Action Localization from Pointly-Supervised Proposals Mettes et al.
Video Stream Retrieval of Unseen Queries using Semantic Memory Cappallo et al.
Making a Case for Learning Motion Representations with Phase Pintea et al.
Pooling Objects for Recognizing Scenes without Examples Kordumova et al.
Objects2action: Classifying and localizing actions without any video example Jain et al.
Siamese Instance Search for Tracking Tao et al.
The ImageNet Shuffle: Reorganized Pre-training for Video Event Detection Mettes et al.
Human Pose Estimation in Space and Time using 3D CNN Grinciunaite et al.
Structured Receptive Fields in CNNs Jacobsen et al.
Deep Learning and Support Vector Machine for Effective Plant Identification Strezoski et al.
Evaluation and comparison of computer vision methods for early Barrett's cancer detection using volumetric laser endomicroscopy van der Sommen et al.
Self-Supervised Online Training of FCNs for Free-Space Detection Sanberg et al.
Who's that Actor? Automatic Labelling of Actors in TV series starting from IMDB Images Chakravarty et al.
One-Step Time-Dependent Future Video Frame Prediction with a Convolutional Encoder-Decoder Neural Network Vukotic et al.
Depth-aware Motion Magnification Kooij et al.
Point Light Source Position Estimation from RGB-D Images by Learning Surface Attributes Karaoglu et al.
Active Transfer Learning with Zero-Shot Priors: Reusing Past Datasets for Future Tasks Gavves et al.
Online Action Detection de Geest et al.
Multimodal Popularity Prediction of Brand-related Social Media Posts Mazloom et al.
VideoLSTM Convolves, Attends and Flows for Action Recognition Li et al.
Multivariate Time Series Classification using the Hidden-Unit Logistic Model Pei et al.
Monitoring Dementia with Automatic Eye Movements Analysis Zhang et al.
Towards Robust Water Region Extraction For Maritime Surveillance Ghahremani et al.

Keynote

Prof. Dr. Bernt Schiele

Max-Planck-Institut für Informatik (MPI)

Personal website; profile on Google Scholar

Title: Towards 3D Visual Scene Understanding

Abstract: Inspired by the ability of humans to interpret and understand 3D scenes nearly effortlessly, the problem of 3D scene understanding has long been advocated as the "holy grail" of computer vision. In the early days this problem was addressed in a bottom-up fashion, without achieving satisfactory or reliable results for scenes of realistic complexity. In recent years there has been considerable progress on many sub-problems of the overall 3D scene understanding problem. As performance on these sub-tasks reaches remarkable levels, we argue that the problem of automatically inferring and understanding 3D scenes should be addressed again. This talk highlights recent progress on some essential components such as object recognition, person detection and human pose estimation, as well as on our work towards describing video content with natural language. These efforts are part of a longer-term agenda towards 3D visual scene understanding.

Keynote

Prof. Dr. Pascal Fua

École polytechnique fédérale de Lausanne (EPFL)

Personal website; profile on Google Scholar

Title: Beyond Traditional Mean Field

Abstract: Mean Field (MF) inference is central to statistical physics. It has attracted much interest in the Computer Vision community to efficiently solve problems expressible in terms of large Conditional Random Fields. However, practical MF approximation schemes usually rely on ad hoc damping schemes to guarantee convergence or put strong constraints on the type of models that can be used. Furthermore, as MF models the posterior probability distribution as a product of marginal probabilities, it may fail to properly account for important dependencies between variables. To solve the first problem, we propose a novel proximal gradient-based approach to optimizing the variational objective. It is naturally parallelizable and easy to implement. To address the second, we replace the fully factorized MF distribution by a weighted mixture of such distributions and introduce an effective approximation algorithm to fit this more complex model. We will demonstrate that this positively impacts people-tracking and segmentation algorithms that rely on MF.
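The fully factorized MF distribution the abstract refers to can be illustrated with a textbook coordinate-ascent update on a toy pairwise model. This is a generic sketch, not the speaker's proposed method; the model, potentials, and function names below are invented for illustration:

```python
import numpy as np

def mean_field(unary, pairwise, edges, iters=50):
    """Fully factorized mean-field inference for a pairwise MRF.

    unary:    (n, L) log unary potentials theta_i(x_i)
    pairwise: (L, L) log pairwise potential, shared by all edges
    edges:    list of (i, j) index pairs
    Returns an (n, L) array of approximate marginals q_i(x_i).
    """
    n, L = unary.shape
    q = np.full((n, L), 1.0 / L)      # uniform initialization
    for _ in range(iters):
        for i in range(n):            # sequential coordinate-ascent updates
            logit = unary[i].copy()
            for a, b in edges:        # add expected pairwise terms E_q[theta]
                if a == i:
                    logit += pairwise @ q[b]
                elif b == i:
                    logit += pairwise.T @ q[a]
            logit -= logit.max()      # numerical stability
            q[i] = np.exp(logit)
            q[i] /= q[i].sum()        # normalize to a distribution
    return q

# Toy 3-node chain with 2 labels: strong unaries pull nodes 0 and 2
# towards label 0; the agreement-rewarding pairwise term drags node 1 along.
unary = np.array([[2.0, 0.0], [0.0, 0.1], [2.0, 0.0]])
pairwise = np.eye(2)                  # Potts-style smoothing
q = mean_field(unary, pairwise, [(0, 1), (1, 2)])
labels = np.argmax(q, axis=1)         # MAP labels under the factorized q
```

Because q is a product of independent marginals, dependencies between neighbouring labels enter only through these expected pairwise terms, which is exactly the limitation the talk's mixture-of-MF model addresses.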

Important dates

Submission deadline: Nov 25, 2016

Decision: Dec 2, 2016

Camera ready: Dec 7, 2016

Conference: 12-13 Dec, 2016

Submission

The conference solicits two types of contributions:

Track A: Type A contributions present new unpublished work in a paper of 4-8 pages.
The papers will be peer-reviewed by the program committee.
In case of acceptance, NCCV does not claim copyright of the paper, so you are free to re-submit it elsewhere after NCCV.

Track B: Type B contributions are abstracts (2 pages max.) of work that was previously published
in another peer-reviewed computer-vision venue (CVPR, ECCV, ICCV, IJCV, IEEE TPAMI, IEEE TIP, IEEE TMI, MIA, IPMI, MICCAI etc.).
Abstracts are used to determine the final program of the conference.

To ensure the quality of the conference and to reduce reviewer load, we particularly encourage the submission of Track B abstracts.

All contributions will be published on the NCCV website, but no copyright will be claimed.
The program committee will decide which contributions (from both tracks) are selected for oral presentation and which for poster presentation.

Submission Instructions

Papers and abstracts can be submitted via CMT. Style files and further submission instructions are available here.

Location

Conference Centre De Werelt in Lunteren, The Netherlands.

The conference centre is located at Paul Westhofflaan 2 in Lunteren, The Netherlands, in a splendid natural environment.

Transportation

How to get there

Follow these directions.

Provided shuttle

Lunteren train station to Conference Centre De Werelt: Monday between 11:00 and 12:15
Conference Centre De Werelt to Lunteren train station: Tuesday between 12:45 and 13:15

From the train station in Lunteren, you can also walk to De Werelt along the paved road or via a forest path.
Walking from Lunteren station to De Werelt takes approximately 15 minutes.

Registration

The full registration package includes two lunches, a borrel (drinks reception), dinner, an overnight stay, and breakfast.

At least one author of each accepted paper is required to register for the conference.

The total number of participants is limited because of space constraints, so please register early!

Type Cost
ASCI Full Registration €70
Academic Full Registration €150
Industry Full Registration €250
Academic Day pass: 12 Dec (+lunch, borrel, dinner) €100
Industry Day pass: 12 Dec (+lunch, borrel, dinner) €180
Academic Day pass: 13 Dec (+lunch) €80
Industry Day pass: 13 Dec (+lunch) €100

Please use this link to register.

People

Chairs: Jan van Gemert and Efstratios Gavves

Program Committee:

Hamdi Dibeklioglu, Delft University of Technology, NL
Thomas Mensink, University of Amsterdam, NL
Ronald Poppe, Utrecht University, NL
Theo Gevers, University of Amsterdam, NL
Gertjan Burghouts, TNO, NL
Gwenn Englebienne, University of Twente, NL
Peter De With, Eindhoven University of Technology, NL
Emile Hendriks, Delft University of Technology, NL
Hayley Hung, Delft University of Technology, NL
Bouke Huurnink, The Netherlands Institute for Sound and Vision, NL
Inald Lagendijk, Delft University of Technology, NL
Marco Loog, Delft University of Technology, NL
Wiro Niessen, Erasmus University Medical Center, NL
Johan Oomen, The Netherlands Institute for Sound and Vision, NL
Eric Postma, Tilburg University, NL
Marcel Reinders, Delft University of Technology, NL
Hichem Sahli, Vrije Universiteit Brussel, BE
Albert Ali Salah, Bogazici University, TR
Klamer Schutte, TNO, NL
Nicu Sebe, University of Trento, IT
Jan Sijbers, University of Antwerp, BE
Arnold Smeulders, University of Amsterdam, NL
Cees Snoek, Qualcomm / University of Amsterdam, NL
Robby Tan, National University of Singapore, SG
Emrah Tasli, Booking.com, NL
David Tax, Delft University of Technology, NL
Tinne Tuytelaars, Catholic University of Leuven, BE
Jasper Uijlings, University of Edinburgh, UK
Roberto Valenti, SightCorp, NL
Michel Valstar, University of Nottingham, UK
Joost van de Weijer, Universitat Autònoma de Barcelona, ES
Laurens van der Maaten, Facebook, US
Raymond Veldhuis, University of Twente, NL
Remco Veltkamp, Utrecht University, NL
Max Welling, University of Amsterdam, NL
Marco Wiering, University of Groningen, NL
Marcel Worring, University of Amsterdam, NL
Zoran Zivkovic, Intel, NL