
Motivation

Visual place recognition approaches are of fundamental importance to autonomous systems and vehicles such as personal service robots, self-driving cars, and unmanned aerial systems, as they allow these systems to navigate in the world. A major challenge for visual place recognition systems is to achieve robustness to the large variability in scene appearance that can be observed in the real world. Such changes – induced by the time of day, weather, or seasonal effects, as well as by human activity – are a ubiquitous challenge for all autonomous systems aiming at long-term operation in both indoor and outdoor settings. Visual place recognition techniques need to understand the scene and to detect and react to changes in it, which requires concepts from image-based geo-localization, robotics, machine learning, and visual neuroscience.

In conjunction with the workshops at ICRA and CVPR, we organize this place recognition challenge to bring together researchers working in different fields of computer vision, machine learning, robotics, and visual neuroscience, to test their algorithms for visual place recognition on a challenging dataset, and to compare results with other participants. We will keep the dataset and the results online after the workshop, so that they can serve as a long-term benchmark comparable to the KITTI dataset.

News

  • 18th March 2015: Website goes online, dataset is available for download.

Description of the dataset

The dataset consists of two traversals through a variety of environments. We refer to these two traversals as the live and the memory dataset. The memory dataset consists of the images the robot observed during its first visit to the environment. It serves as the reference against which the images from the live dataset have to be matched. Environmental conditions between the memory and live traversals might have changed significantly: the dataset features appearance changes caused by weather and seasonal effects, the time of day, and dynamic objects. Viewpoint changes pose another challenge, i.e. the robot does not follow exactly the same path on its second (live) traversal, but observes the known scenes from a different angle.

Notice that not all scenes from the live dataset can be matched to a scene from the memory dataset, i.e. there are true negatives in the dataset. Also, the order in which places appear is not the same in the live and memory datasets (the robot takes different routes through the environment), so expect your matches to 'jump'.

In total the dataset consists of 7778 images from a variety of outdoor environments and viewing conditions. The footage has been recorded from trains, cars, buses, and bikes, as well as by pedestrians. All images have a resolution of 640x480 pixels.

Download the dataset 

You can download the dataset as a zip archive. After extracting it, you will find two directories (live/ and memory/) containing the images. Run your place recognition algorithm to match the images from the live dataset to the ones from the memory dataset, and create a file with your results, following the instructions below.
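The expected workflow is roughly sketched below in Python. The function my_similarity and the threshold are placeholders for your own method, and we assume here that frame IDs correspond to the sorted order of the file names (please verify this against the actual files):

import os

LIVE_DIR, MEMORY_DIR = "live/", "memory/"
THRESHOLD = 0.5  # placeholder: tune the novelty cut-off to your own scores

def my_similarity(live_path, memory_path):
    # Placeholder for your place recognition algorithm: return a score
    # describing how likely the two images show the same place.
    raise NotImplementedError

live_images = sorted(os.listdir(LIVE_DIR))      # assumed to map to frame IDs 0..N-1
memory_images = sorted(os.listdir(MEMORY_DIR))

matches = []
for live_name in live_images:
    scores = [my_similarity(os.path.join(LIVE_DIR, live_name),
                            os.path.join(MEMORY_DIR, mem_name))
              for mem_name in memory_images]
    best = max(range(len(scores)), key=scores.__getitem__)
    # Report -1 for novel places, i.e. when even the best score is too low.
    matches.append(best if scores[best] > THRESHOLD else -1)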


Submission of results

Results must be submitted as a text file. This file should contain two columns, separated by a space character. The first column is the frame ID of the live images. It must be consecutive, i.e. run from 0 to N-1 without gaps, where N is the number of frames in the live dataset. The second column contains the ID of the image from the memory dataset you matched the live frame against. Please provide only a single match ID per image; if your algorithm suggests multiple matches, report the best one. Some places in the live dataset are novel, i.e. they have not been seen before. If you do not wish to match a frame from the live dataset against any frame from the memory dataset, put a -1 in the second column.

You are doing it right if: 

  • your file contains two columns of integer numbers, separated by a space
  • the numbers in the first column start at 0 and run consecutively without gaps up to N-1, where N is the number of frames in the live dataset
  • the second column contains the frame ID of the memory image you matched against
  • any live frame that cannot be matched to a memory frame has a -1 in the second column

An example file would look like this:

0 105
1 107
2 -1
3 56
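
For illustration, here is a minimal Python sketch of writing and sanity-checking such a file. The list matches (one memory frame ID or -1 per live frame) and the file name results.txt are our own naming, not part of the required format:

matches = [105, 107, -1, 56]  # example values from the snippet above

# Write the two-column results file.
with open("results.txt", "w") as f:
    for live_id, memory_id in enumerate(matches):
        f.write(f"{live_id} {memory_id}\n")

# Sanity-check the format rules from the checklist above.
with open("results.txt") as f:
    rows = [line.split() for line in f if line.strip()]
assert all(len(row) == 2 for row in rows), "every line needs exactly two columns"
assert [int(r[0]) for r in rows] == list(range(len(rows))), "first column must run 0..N-1"
assert all(int(r[1]) >= -1 for r in rows), "second column must be a memory ID or -1"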

Please send the results by email to niko.suenderhauf@qut.edu.au, using the subject line "[VPRiCE Challenge] results $teamname", where $teamname is replaced by the name of your institution or group. Please also provide the following information:

  1. Name of your institution and group (put "anonymous" if you do not wish to have your results published online).
  2. A link to a website (if applicable) with further information.
  3. A short description of the algorithm you used. We are especially interested in knowing
    1. whether it is a feature / landmark based method or a holistic one,
    2. whether it is based on single-frame matching or uses sequences or temporal information,
    3. whether it requires learning (e.g. building a vocabulary, training a neural network),
    4. whether additional data (i.e. beyond what we provided) was used for tuning, validation, or training.
  4. Runtime information (e.g. the time required to match a single live image against the memory dataset). Please also include useful information about the required hardware, e.g. whether it runs on a CPU or GPU.
  5. An indication of whether you have submitted results before and want your old ones to be replaced.
  6. Whether your results are directly connected to a paper submitted to our workshop.

Important dates

  • Submission of results: Results can be submitted at any time. We ask participants to submit their results in conjunction with their workshop papers.
  • Publication of performance scores: We will evaluate your results and publish the scores on this website as quickly as possible.

Evaluation

We will evaluate the performance using precision, recall, and F1 score. We might analyse the results further by calculating separate scores for parts of the dataset that exhibit different challenges.
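
As an illustration only, the following Python sketch computes these scores from a results file and a ground-truth file of the same two-column format. The ground truth is held by the organizers, and the counting rules in the comments are our reading of the task rather than an official specification; in particular, the evaluation may well accept matches within a small tolerance window around the ground-truth frame.

def load_matches(path):
    # Read a two-column file into a dict mapping live ID -> memory ID.
    with open(path) as f:
        return {int(a): int(b) for a, b in (line.split() for line in f if line.strip())}

predicted = load_matches("results.txt")       # participant's submission
truth = load_matches("ground_truth.txt")      # hypothetical organizer-side file

tp = fp = fn = 0
for live_id, true_id in truth.items():
    pred_id = predicted.get(live_id, -1)
    if pred_id == -1:
        if true_id != -1:
            fn += 1                # a known place was wrongly declared novel
    elif pred_id == true_id:
        tp += 1                    # correct match reported
    else:
        fp += 1                    # a match was reported, but it is wrong

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
print(f"precision={precision:.3f}  recall={recall:.3f}  F1={f1:.3f}")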

The results will be published on this website. If you do not wish your name or your institution to appear on that site, please let us know.

Precision and Recall

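In standard terms, with TP, FP, and FN denoting the numbers of true positive, false positive, and false negative matches (counted, for example, as in the sketch above), the scores are defined as:

\[
\text{precision} = \frac{TP}{TP + FP}, \qquad
\text{recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}
\]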

Acknowledgements

We want to express our thanks to a number of people for gathering these datasets and providing them to the public.

  • Our place recognition challenge dataset features beautiful scenery from Norway, recorded by the Norwegian broadcaster NRK for their show Nordlandsbanen: Minutt for minutt and published under the Creative Commons Licence CC-BY 3.0.
  • Mapillary is a great service that provides crowdsourced street-level imagery from all around the world. Our dataset features footage recorded by Mapillary users in Berlin, Malmö, and Hamburg, published under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).
  • Arren Glover collected footage from the campus of QUT in Brisbane and in the city. He is now a postdoctoral researcher at IIT in Genova, Italy.