Towers of Babel:

Combining Images, Language, and 3D Geometry for Learning Multimodal Vision

Xiaoshi Wu*, Hadar Averbuch-Elor*, Jin Sun, Noah Snavely
*Equal contribution
ICCV 2021
[figures: the Barcelona Cathedral and the Reims Cathedral]

WikiScenes is a dataset that combines 3D reconstructions, images, and language descriptions for dozens of landmarks (e.g., the Barcelona Cathedral and the Reims Cathedral, pictured above).

This allows us to associate semantic concepts such as “portal”, “facade”, and “tower” (colored pink, blue, and brown in the figures above) with 3D points across the entire category of cathedrals.


Abstract

The abundance and richness of Internet photos of landmarks and cities have led to significant progress in 3D vision over the past two decades, including automated 3D reconstructions of the world's landmarks from tourist photos. However, a major source of information available for these 3D-augmented collections--language, e.g., from image captions--has been virtually untapped. In this work, we present WikiScenes, a new, large-scale dataset of landmark photo collections that contains descriptive text in the form of captions and hierarchical category names. WikiScenes forms a new testbed for multimodal reasoning involving images, text, and 3D geometry. We demonstrate the utility of WikiScenes for learning semantic concepts over images and 3D models. Our weakly-supervised framework connects images, 3D structure, and semantics, utilizing the strong constraints provided by 3D geometry to associate semantic concepts with image pixels and points in 3D space.


The WikiScenes Dataset

Our WikiScenes dataset consists of paired images and language descriptions capturing world landmarks and cultural sites, with associated 3D models and camera poses. WikiScenes is derived from the massive public catalog of freely-licensed crowdsourced data in the Wikimedia Commons project, which contains a large variety of images with captions and other metadata. We extract two forms of textual description for each image: (1) captions, which describe the image in free-form language, and (2) the hierarchy of WikiCategories associated with each image (see the examples in the image below). Overall, WikiScenes contains approximately 63K images with textual descriptions. Our dataset is available here.
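For concreteness, here is a minimal sketch of how one might iterate over the image-caption-category triples. The directory layout and field names below (metadata.json, "caption", "categories") are illustrative assumptions, not the released schema; please consult the dataset documentation for the actual format.

    import json
    from pathlib import Path

    def iter_wikiscenes(root):
        """Yield (image_path, caption, categories) triples.

        Assumes a hypothetical layout: one directory per landmark,
        each holding an images/ folder and a metadata.json that maps
        image filenames to their caption and WikiCategory hierarchy.
        """
        for meta_file in Path(root).glob("*/metadata.json"):
            meta = json.loads(meta_file.read_text(encoding="utf-8"))
            for filename, record in meta.items():
                yield (
                    meta_file.parent / "images" / filename,
                    record.get("caption", ""),     # free-form caption
                    record.get("categories", []),  # root-to-leaf WikiCategories
                )

    for path, caption, categories in iter_wikiscenes("WikiScenes/cathedrals"):
        print(path.name, "|", caption, "|", " / ".join(categories))
        break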

[figure]

Images paired with hierarchical WikiCategories from the root (top) to the leaf (bottom).

[figure]

Number of available captions

[figure]

Number of image-caption pairs per landmark (sorted). The y-axis is plotted on a log scale.


Semantic Reasoning Over Images and 3D Models

WikiScenes can be used to study a range of different problems. In our work, we focus on semantic reasoning over 2D images and 3D models. First, we automatically discover semantic concepts and associate these with images in WikiScenes. We then show how these image pseudo-labels can provide a supervision signal for learning semantic feature representations over the entire category of landmarks.
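To make the pseudo-labeling step concrete, the sketch below assigns concepts to an image by matching concept terms against its caption and WikiCategories. This is a deliberately simplified illustration: the fixed concept list and exact-token matching are our assumptions, whereas the paper discovers its concepts automatically from the text.

    import re

    # Illustrative vocabulary; the actual concepts are discovered
    # automatically from caption and category statistics.
    CONCEPTS = {"portal", "facade", "tower", "nave", "chapel",
                "organ", "altar", "choir"}

    def pseudo_label(caption, categories):
        """Return the concepts mentioned in an image's textual metadata."""
        text = " ".join([caption, *categories]).lower()
        tokens = set(re.findall(r"[a-z]+", text))
        return sorted(CONCEPTS & tokens)

    print(pseudo_label("View of the western facade and the north tower",
                       ["Cathedrals", "Reims Cathedral", "Exterior"]))
    # -> ['facade', 'tower']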

Click here to see samples from our training set!

Overview of our Contrastive Learning Framework

[figure]

Given an image pair with shared keypoints, we jointly train our model to classify each image into one of the C concepts using the learned score maps, and to output a higher similarity for pixels that map to the same point in 3D space (in blue). Negative pairs are constructed by sampling non-corresponding points from other images in the same batch.
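In code, a joint objective of this shape might look like the PyTorch sketch below. The pooling of the score maps, the InfoNCE form of the contrastive term, and the loss weighting are our assumptions based on the figure; see the paper for the exact formulation.

    import torch
    import torch.nn.functional as F

    def joint_loss(scores_a, scores_b, feats_a, feats_b,
                   kps_a, kps_b, labels, temperature=0.07, weight=1.0):
        """Classification + pixel-level contrastive loss (sketch).

        scores_*: (B, C, H, W) per-concept score maps.
        feats_*:  (B, D, H, W) pixel embeddings.
        kps_*:    (B, K, 2) long tensors of (x, y) keypoints; kps_a[i, j]
                  and kps_b[i, j] observe the same 3D point.
        labels:   (B,) pseudo-label concept index per image pair.
        """
        B, D, _, W = feats_a.shape
        K = kps_a.shape[1]

        # Classification term: pool score maps into image-level logits
        # (average pooling is an assumed choice) and apply cross-entropy.
        cls = (F.cross_entropy(scores_a.mean(dim=(2, 3)), labels) +
               F.cross_entropy(scores_b.mean(dim=(2, 3)), labels))

        # Gather embeddings at the corresponding keypoints.
        def at(feats, kps):
            idx = kps[..., 1] * W + kps[..., 0]              # (B, K) flat indices
            flat = feats.flatten(2)                          # (B, D, H*W)
            out = flat.gather(2, idx.unsqueeze(1).expand(-1, D, -1))
            return out.transpose(1, 2).reshape(B * K, D)

        za = F.normalize(at(feats_a, kps_a), dim=-1)
        zb = F.normalize(at(feats_b, kps_b), dim=-1)

        # InfoNCE: the matching pixel in the other image is the positive;
        # non-corresponding pixels across the batch act as negatives.
        sim = za @ zb.t() / temperature                      # (B*K, B*K)
        con = F.cross_entropy(sim, torch.arange(B * K, device=sim.device))

        return cls + weight * con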


Results

Below we show 3D segmentation results for a landmark not seen during training.

Color legend: nave, chapel, organ, altar, choir.

[figure: 3D segmentation results]


Our technique allows for segmenting very small regions in space, such as the organ:

[figure: 3D segmentation of the organ]
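A simple way to obtain such 3D labelings, sketched below, is to project each reconstructed 3D point into the images that observe it, average the per-concept scores at those projections, and keep the argmax. The data structures here (an observation list per point and a dict of score maps) are illustrative assumptions about how the reconstruction is stored.

    import numpy as np

    def label_points(observations, score_maps):
        """Assign a concept label to each 3D point by averaging 2D scores.

        observations: list over 3D points; observations[p] is a list of
                      (image_id, x, y) pixel locations where point p is seen.
        score_maps:   dict image_id -> (C, H, W) per-concept score array.
        """
        labels = np.full(len(observations), -1)
        for p, obs in enumerate(observations):
            scores = [score_maps[i][:, y, x] for i, x, y in obs if i in score_maps]
            if scores:
                labels[p] = int(np.mean(scores, axis=0).argmax())
        return labels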

Boosting Image Classification with a 3D-consistency Loss

We also performed a quantitative evaluation to measure how well we can classify unseen images, both from landmarks used during training and from entirely unseen landmarks, and to test whether 3D-consistency improves classification performance (it does!).

[figure: classification results]

Please refer to our paper for additional qualitative and quantitative results.


BibTeX


      @inproceedings{Wu2021Towers,
        title={Towers of Babel: Combining Images, Language, and 3D Geometry for Learning Multimodal Vision},
        author={Wu, Xiaoshi and Averbuch-Elor, Hadar and Sun, Jin and Snavely, Noah},
        booktitle={ICCV},
        year={2021}
      }

Acknowledgements

This work was supported by the National Science Foundation (IIS-2008313), by the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program, by the Zuckerman STEM leadership program, and by an AWS ML Research Award.