forked from chuanli11/CNNMRF
CNNMRF

This is the Torch implementation for the paper "Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis".

This algorithm supports

  • un-guided image synthesis (for example, classical texture synthesis)
  • guided image synthesis (for example, transferring the style between different images)

Example

  • guided image synthesis

A photo (left) is transferred into a painting (right) using Picasso's 1907 self-portrait (middle) as the reference style. Notice that important facial features, such as the eyes and nose, faithfully follow those in Picasso's painting.

In this example, we first transfer a cartoon into a photo.

We then swap the two inputs and transfer the photo into the cartoon.

It is possible to balance the amount of content and style in the result: pictures in the second column preserve more content, and pictures in the third column take on more style.

Setup

This code is based on Torch. It has only been tested on macOS and Ubuntu.

Dependencies:

Pre-trained network: We use the original VGG-19 model. You can find the download script at Neural Style. The downloaded model and prototxt file MUST be saved in the folder "data/models".
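Before running either script, it can help to confirm the model files are where the code expects them. A minimal sanity check, assuming the file names used by Neural Style's download script (these names are an assumption and may differ):

```shell
# check that the VGG-19 files are in place under data/models
# (file names below are those used by Neural Style's download script; they may change)
missing=0
for f in data/models/VGG_ILSVRC_19_layers.caffemodel \
         data/models/VGG_ILSVRC_19_layers_deploy.prototxt; do
  if [ ! -f "$f" ]; then
    echo "missing: $f"
    missing=$((missing + 1))
  fi
done
echo "$missing model file(s) missing"
```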

Un-guided Synthesis

  • Run qlua run_syn.lua in a terminal. The algorithm will synthesize an image twice the size of the style input image.
  • The content/style images are located in the folders "data/content" and "data/style" respectively. Note that by default the content image is the same as the style image; the content image is only used for initialization (optional).
  • Results are located in the folder "data/result/freesyn/MRF".
  • Parameters are defined & explained in "run_syn.lua".
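A typical un-guided run might look like the following sketch (the file name `texture.jpg` is hypothetical; all algorithm parameters live inside `run_syn.lua` itself):

```shell
# un-guided synthesis: the style image doubles as the (optional) content image
style=data/style/texture.jpg   # hypothetical exemplar; use your own file
if [ -f "$style" ]; then
  qlua run_syn.lua             # results appear in data/result/freesyn/MRF
else
  echo "place a texture exemplar in data/style first"
fi
```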

Guided Synthesis

  • Run qlua run_trans.lua in a terminal. The algorithm will synthesize an image using the texture of the style image and the structure of the content image.
  • The content/style images are located in the folders "data/content" and "data/style" respectively.
  • Results are located in the folder "data/result/trans/MRF"
  • Parameters are defined & explained in "run_trans.lua".
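A typical guided run might look like the following sketch (the file names `photo.jpg` and `picasso.jpg` are hypothetical; parameters are set inside `run_trans.lua`):

```shell
# guided synthesis: structure from the content image, texture from the style image
content=data/content/photo.jpg   # hypothetical file names; use your own images
style=data/style/picasso.jpg
if [ -f "$content" ] && [ -f "$style" ]; then
  qlua run_trans.lua             # results appear in data/result/trans/MRF
else
  echo "place your images in data/content and data/style first"
fi
```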

Hardware

  • Our algorithm requires a significant amount of GPU memory. A Titan X (12G memory) can complete the above examples with the default settings. For GPUs with 4G or 2G memory, please use the reference parameter settings in "run_trans.lua" and "run_syn.lua".

Acknowledgement

  • This work is inspired by and closely related to the paper A Neural Algorithm of Artistic Style by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. The key difference between their method and ours is the "style" constraint: Gatys et al. use a global constraint for non-photorealistic synthesis, while we use a local constraint that works for both non-photorealistic and photorealistic synthesis. See our paper for more details.
  • Our implementation is based on Justin Johnson's implementation of Neural Style.
