Maps to Imagery using Spatially Adaptive Denormalization


In this project I built a deep learning generative model that takes a segmented land cover map and generates a corresponding imagery map. It uses the recent SPADE normalization layer in the generator, which preserves the spatial context of the input land cover map.

Project status: Published/In Market

Artificial Intelligence

Intel Technologies
AI DevCloud / Xeon, Intel Python


Overview / Usage

The problem of converting a land cover map to imagery is important in urban planning: it lets an urban planner see how the imagery would look if they made certain changes to the land cover map. The problem also has commercial use cases and has been in demand from various GIS companies. The model uses a recent technique called SPADE (https://arxiv.org/abs/1903.07291), a normalization layer that conditions the affine (scale and shift) parameters applied after parameter-free normalization on the input segmentation map. Generation is done with a Generative Adversarial Network (GAN): the generator and discriminator are trained simultaneously until both become proficient at their respective tasks. The weights of both the generator and the discriminator are spectrally normalized, and the two networks are trained with a two time-scale update rule (TTUR). A perceptual loss computed from VGG19 features is also optimized, and a feature matching loss is added as described in the SPADE paper.
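To make the SPADE idea concrete, here is a minimal PyTorch sketch of such a layer: the segmentation map is resized to the feature resolution and passed through small convolutions that predict per-pixel scale (gamma) and shift (beta) for the normalized activations. Layer sizes and the hidden width are illustrative, not the repository's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Spatially-adaptive denormalization (sketch, following arXiv:1903.07291).

    The segmentation map modulates the normalized activations with
    per-pixel scale and shift predicted by small convnets, so the
    layout of the input land cover map is preserved.
    """
    def __init__(self, num_features, label_channels, hidden=128):
        super().__init__()
        # parameter-free normalization (batch norm without learned affine params)
        self.norm = nn.BatchNorm2d(num_features, affine=False)
        # shared embedding of the (resized) segmentation map
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.gamma = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)

    def forward(self, x, segmap):
        normalized = self.norm(x)
        # resize the label map to the current feature resolution
        segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(segmap)
        # spatially varying affine transform conditioned on the segmentation
        return normalized * (1 + self.gamma(h)) + self.beta(h)
```

One such block sits at every resolution of the generator, so the land cover layout steers the synthesis at all scales.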

Methodology / Approach

Methodologies used: SPADE, GANs, Perceptual Loss, Hinge Loss, TTUR, Feature Matching Loss
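The adversarial objectives in this list can be sketched as follows. The hinge loss and feature matching loss match the formulation in the SPADE paper; the stand-in generator/discriminator modules and the TTUR learning rates (1e-4 for G, 4e-4 for D) are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def d_hinge_loss(real_logits, fake_logits):
    # hinge loss for the discriminator: push real logits above +1, fake below -1
    return F.relu(1.0 - real_logits).mean() + F.relu(1.0 + fake_logits).mean()

def g_hinge_loss(fake_logits):
    # the generator tries to raise the discriminator's score on fakes
    return -fake_logits.mean()

def feature_matching_loss(real_feats, fake_feats):
    # L1 distance between discriminator features of real and generated images
    return sum(F.l1_loss(f, r.detach()) for f, r in zip(fake_feats, real_feats))

# Spectral normalization constrains each layer's Lipschitz constant;
# TTUR gives the discriminator a larger learning rate than the generator.
G = nn.Sequential(nn.utils.spectral_norm(nn.Linear(16, 16)))  # stand-in generator
D = nn.Sequential(nn.utils.spectral_norm(nn.Linear(16, 1)))   # stand-in discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_d = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.0, 0.9))
```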

Technologies Used

Intel MKL

Intel Python

Intel DevCloud

fastai

pytorch

torchvision

Repository

https://github.com/divyanshj16/SPADE
