
We present an end-to-end deep neural network that is able to transform the feature maps extracted from the aerial image to the street view, which alleviates the challenge of matching images from two very different view perspectives.

samwuko/NaSNet


We propose an end-to-end convolutional neural network (CNN) architecture, termed the NAdir-view to Street-view NETwork (NaSNet), to learn a transformation between street-view and overhead-view images. The network extracts features from both street-view and aerial images and learns adaptive transformation matrices that map the aerial feature maps to the corresponding street-view feature maps. The pixel-wise difference between the transformed feature maps and the “ground truth” feature maps, extracted directly from the matching street-view images, is minimized using the Huber loss.
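The transformation and loss described above can be sketched as follows. This is a minimal NumPy illustration under assumptions, not the repository's actual implementation: the feature-map sizes are toy values, and the transformation matrix is random here, whereas in NaSNet it would be predicted adaptively by the network.

```python
import numpy as np

def huber_loss(pred, target, delta=1.0):
    """Pixel-wise Huber loss: quadratic for small residuals, linear for large ones."""
    diff = pred - target
    abs_diff = np.abs(diff)
    quadratic = 0.5 * diff ** 2
    linear = delta * (abs_diff - 0.5 * delta)
    return np.mean(np.where(abs_diff <= delta, quadratic, linear))

# Toy feature maps with shape (channels, height, width).
rng = np.random.default_rng(0)
aerial_feat = rng.normal(size=(8, 16, 16))
street_feat = rng.normal(size=(8, 16, 16))  # "ground truth" from the street-view branch

# A transformation matrix that mixes the spatial positions of the aerial map.
# Random here for illustration; NaSNet learns such matrices adaptively.
T = rng.normal(size=(16 * 16, 16 * 16)) / (16 * 16)
flat = aerial_feat.reshape(8, -1)            # (C, H*W)
transformed = (flat @ T).reshape(8, 16, 16)  # mapped into the street-view domain

loss = huber_loss(transformed, street_feat)
```

The Huber loss keeps gradients bounded for large residuals while staying smooth near zero, which makes the pixel-wise regression less sensitive to outlier pixels than a plain squared error.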

Network Architecture

(NaSNet architecture diagram)

Requirements

Experiment Dataset

  • CVUSA dataset: a cross-view dataset collected in the United States, consisting of pairs of ground-level images and satellite images. All ground-level images are panoramas.
    The dataset can be accessed at https://github.com/viibridges/crossnet
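Training on such a dataset requires pairing each aerial image with its matching ground-level panorama, typically from a split file listing one pair per line. The sketch below parses such records; the comma-separated two-column format and the directory names are assumptions for illustration, not a documented CVUSA layout:

```python
import csv

def parse_pairs(lines):
    """Parse 'aerial_path,street_path' records into (aerial, street) tuples.
    The two-column CSV format is an assumption, not an official split layout."""
    pairs = []
    for row in csv.reader(lines):
        if len(row) >= 2:
            pairs.append((row[0].strip(), row[1].strip()))
    return pairs

# Hypothetical example records:
pairs = parse_pairs([
    "bingmap/0000001.jpg,streetview/0000001.jpg",
    "bingmap/0000002.jpg,streetview/0000002.jpg",
])
```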

Running


Training:

$ python main.py

