Deep Visual City Recognition Visualization

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

Abstract

Understanding how cities visually differ from one another is of interest to planners, residents, and historians. We investigate the interpretation of deep features learned by convolutional neural networks (CNNs) for city recognition. Given a trained city recognition network, we first generate weighted masks using the well-known Grad-CAM technique to select the most discriminative regions in the image. Since the image classification label is the city name and therefore carries no information about which objects are class-discriminative, we investigate the interpretability of the deep representations with two methods: (i) an unsupervised method clusters the objects that appear in the visual explanations, and (ii) a pretrained semantic segmentation model labels objects at the pixel level, after which we introduce statistical measures to quantitatively evaluate the interpretability of the discriminative objects. We also study how network architectures and random initializations during training influence the interpretability of CNN features for city recognition. The results suggest that the network architecture affects the interpretability of the learned visual representations more strongly than different random initializations.
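
As an illustration of the first step, the sketch below shows how Grad-CAM weighted masks could be computed for a trained classifier. This is a minimal sketch only: the PyTorch ResNet-18 backbone, the choice of the last convolutional block as target layer, and the preprocessing are illustrative assumptions, not the authors' implementation.

```python
# Minimal Grad-CAM sketch (assumed implementation, not the paper's code):
# produce a weighted mask highlighting the image regions most responsible
# for a predicted class, given a CNN classifier.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()  # placeholder backbone
target_layer = model.layer4  # last conv block; a common Grad-CAM choice

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """Return an [H, W] mask in [0, 1] for the given class (default: argmax)."""
    logits = model(image)                      # image: [1, 3, H, W], normalized
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    acts = activations["value"]                # [1, C, h, w]
    grads = gradients["value"]                 # [1, C, h, w]
    weights = grads.mean(dim=(2, 3), keepdim=True)            # pooled gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))   # [1, 1, h, w]
    cam = F.interpolate(cam, size=image.shape[-2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]
```

The resulting mask selects the most discriminative regions of a street image; in the paper these regions are then either clustered with an unsupervised method or labelled at the pixel level by a pretrained semantic segmentation model.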
Original language: English
Title of host publication: NCCV 2019 – The Netherlands Conference on Computer Vision
Pages: 1-6
Number of pages: 6
Publication status: Published - 2019
Event: NCCV 2019 – The Netherlands Conference on Computer Vision, Wageningen, Netherlands
Duration: 16 Dec 2019 – 17 Dec 2019

Conference

Conference: NCCV 2019 – The Netherlands Conference on Computer Vision
Abbreviated title: NCCV 2019
Country/Territory: Netherlands
City: Wageningen
Period: 16/12/19 – 17/12/19
