Hidden Footprints: Learning Contextual Walkability from 3D Human Trails

Supplementary Material

This supplementary material provides: examples of the manually labeled set (Manual Labels), examples on the Cityscapes data (Cityscapes Results), generalization beyond humans (Learning on Cars), and additional implementation details of our system (Additional Details).



We show examples of our manually labeled walkable regions on the Waymo Open Dataset's validation set.

White regions represent valid walkable regions. Note that we ignore regions where people may be jaywalking.
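
To make the evaluation protocol concrete, here is a minimal sketch of how such labels could be used to score a prediction, assuming the walkable regions and the ignored (possible-jaywalking) regions are stored as separate binary masks. The file layout and helper below are hypothetical illustrations, not our released format.

```python
import numpy as np
from PIL import Image

def masked_iou(pred_path, label_path, ignore_path, thresh=0.5):
    """IoU between a predicted walkability map and a manual label,
    skipping pixels flagged as 'ignore' (e.g., possible jaywalking regions)."""
    pred = np.asarray(Image.open(pred_path).convert("L"), dtype=np.float32) / 255.0
    label = np.asarray(Image.open(label_path).convert("L")) > 127    # white = walkable
    ignore = np.asarray(Image.open(ignore_path).convert("L")) > 127  # white = ignored

    valid = ~ignore                      # only score pixels outside the ignore mask
    pred_bin = pred > thresh
    inter = np.logical_and(pred_bin, label)[valid].sum()
    union = np.logical_or(pred_bin, label)[valid].sum()
    return inter / max(union, 1)
```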

We show additional examples on the Cityscapes dataset.

Our model produces reasonable predictions on a wide variety of scenes, without any finetuning on this dataset.

Our framework is flexible and supports objects beyond humans. Here we show additional results on cars: predicting drivable regions and driving directions.

Analogous to the human footprint propagation process, we collect labeled 3D cuboids of cars in the Waymo dataset and project their locations and heading directions into all frames of a capture sequence. We then train separate networks for these two tasks, using the same training procedure described in the main paper.
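
For illustration, below is a minimal sketch of the per-frame projection step, assuming a standard pinhole camera with the z-axis pointing forward and known per-frame camera-to-world poses. The function names and conventions here are assumptions for exposition and do not reproduce the Waymo Open Dataset API.

```python
import numpy as np

def project_box_to_frame(box_center_world, box_heading_world, world_T_cam, K):
    """Project a labeled 3D box center (and heading) from world coordinates
    into one camera frame of the capture sequence.

    box_center_world : (3,) box bottom-center in world coordinates
    box_heading_world: (3,) unit heading vector in world coordinates
    world_T_cam      : (4, 4) camera-to-world pose for this frame
    K                : (3, 3) camera intrinsics
    Returns the pixel location of the center and of a point one unit ahead
    along the heading, or None if the box is behind the camera.
    """
    cam_T_world = np.linalg.inv(world_T_cam)

    def to_pixel(p_world):
        p_cam = cam_T_world @ np.append(p_world, 1.0)
        if p_cam[2] <= 0:                 # behind the camera plane
            return None
        uvw = K @ p_cam[:3]
        return uvw[:2] / uvw[2]

    center_px = to_pixel(box_center_world)
    ahead_px = to_pixel(box_center_world + box_heading_world)  # encodes heading
    if center_px is None or ahead_px is None:
        return None
    return center_px, ahead_px
```

Applying this to every labeled car cuboid and every frame in a sequence yields propagated car footprints and heading directions, mirroring the human footprint propagation described in the main paper.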

Below we show examples of the predicted regions where a car can drive (middle column) and the predicted driving direction of a car on road segments (right column). Directional arrows are color-coded using a popular optical-flow visualization scheme.
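
For reference, here is a small sketch of one such color coding, mapping direction angle to hue and magnitude to saturation in the style of common optical-flow color wheels; the exact mapping used in our figures may differ.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def direction_to_color(direction_field):
    """Map an (H, W, 2) field of 2D direction vectors to an (H, W, 3) RGB image.
    Direction angle controls hue; magnitude controls saturation (value fixed at 1)."""
    dx, dy = direction_field[..., 0], direction_field[..., 1]
    hue = (np.arctan2(dy, dx) + np.pi) / (2 * np.pi)     # angle -> hue in [0, 1]
    mag = np.hypot(dx, dy)
    sat = np.clip(mag / (mag.max() + 1e-8), 0.0, 1.0)    # magnitude -> saturation
    val = np.ones_like(hue)
    return hsv_to_rgb(np.stack([hue, sat, val], axis=-1))
```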

Notice that the predicted directions adapt to the road's configuration: for example, a diverse set of directions at an intersection (first row), both forward and backward directions on a two-way street (second and last rows), and movement around trees in the middle of a street (third row). Also, our drivable-region prediction captures not only roads but also lanes (second row): cars usually don't drive in the median of a two-way street.