Colorization Project Details
This page describes the process and results of building a colorization model that takes in a black-and-white image and outputs a full-color version. For this project, I use TensorFlow via the Keras API, with a generator model and a discriminator model that compete to produce the best-colorized output.
Here are a few examples of real black-and-white images colorized by my GAN model.
Quick Summary of Model Creation
To train the model, the pixel values of each black-and-white image were first mapped to the pixel values of its full-color counterpart. One complication here is the dimensionality of the images: black-and-white pixels have a single channel, while full-color pixels have three (in the RGB color space). To keep the inputs consistent for the model, any grayscale images and any four-channel CMYK images were removed from the original web-scraped full-color dataset.
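As a rough illustration of that cleanup step, the sketch below scans a folder and deletes any image whose mode is not three-channel RGB (catching both grayscale and CMYK files). This is a minimal sketch using Pillow; the folder name `training_images` and the function name are assumptions, not the project's actual script.

```python
# Hypothetical dataset cleanup: keep only RGB images, dropping
# grayscale ("L") and CMYK files so every sample has 3 channels.
# Folder name is an assumed placeholder.
import os
from PIL import Image

def filter_to_rgb(folder="training_images"):
    removed = []
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        try:
            with Image.open(path) as img:
                mode = img.mode  # "RGB", "L", "CMYK", ...
        except OSError:
            mode = None  # unreadable file: remove it too
        if mode != "RGB":
            os.remove(path)
            removed.append(name)
    return removed
```

Filtering by `Image.mode` up front is simpler than converting on the fly, since it guarantees every training example has exactly the same channel count.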
Read in a black-and-white image, breaking it down into individual pixel values
For each pixel, predict the appropriate amount of Red, Green, and Blue
Combine the RGB color data to output a unified full-color image from a black-and-white picture
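The three steps above can be sketched end to end. Here the trained GAN generator is replaced by a placeholder function so the shape handling is clear; `fake_generator` and `colorize` are illustrative names, not the project's actual code.

```python
# Minimal sketch of the colorization pipeline, with the trained
# generator swapped for a stand-in so the example is self-contained.
import numpy as np

def fake_generator(gray):
    """Stand-in for the trained generator: maps an (H, W, 1)
    grayscale array in [0, 1] to an (H, W, 3) RGB array in [0, 1]."""
    return np.repeat(gray, 3, axis=-1)  # placeholder: no real color

def colorize(gray_image):
    # 1. Break the image into pixel values with a channel axis.
    gray = gray_image.astype("float32")[..., np.newaxis] / 255.0
    # 2. Predict R, G, B values for each pixel.
    rgb = fake_generator(gray)
    # 3. Combine the channels into a unified full-color image.
    return (np.clip(rgb, 0.0, 1.0) * 255).astype("uint8")
```

In the real model, `fake_generator` would be a call like `generator.predict(...)` on the loaded Keras model, but the surrounding shape bookkeeping is the same.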
Zipped File Layout
This script creates another folder, taking in full color images and saving them as grayscale
This script loads in the saved model created from the earlier script and outputs predictions
This text file dives deeper into the technicalities of the code
Here, we analyze the picture data and train a TensorFlow model, saving it at the end
This folder contains all the full-color training images used in the model
This folder contains the TensorFlow model weights for the discriminator model (provided)
This folder contains the TensorFlow model weights for the generator model (provided)
Information on how to use and run code in README.md
Areas of Model Improvement
Due to time limitations and the computational power required to train a GAN, I could only train this model with limited layer depth and structure, images downscaled to 120 by 120 px, and just five epochs; even so, training took over six hours. Given a more powerful system, significant improvements in model accuracy can be expected.
BONUS: Modulating Color Temperature or Mood
An image's color temperature, or mood, depends on the balance of red and blue light. Increasing the intensity of these pixel values can make the image feel warmer or cooler: more red means warmer, and more blue means cooler.
A full-color image has pixel values for Red, Green, and Blue light
Increasing the magnitude of the red light values makes the image warmer
Increasing the magnitude of the blue light values makes the image cooler
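The warming/cooling adjustment above amounts to scaling a channel and clipping back into the valid 0 to 255 range. A minimal NumPy sketch, where the function name and gain parameters are illustrative assumptions:

```python
# Sketch of color-temperature modulation on a uint8 RGB image:
# scaling the red channel warms the image, scaling blue cools it.
import numpy as np

def adjust_temperature(rgb, red_gain=1.0, blue_gain=1.0):
    """Scale the red and blue channels of a uint8 RGB image.
    red_gain > 1 warms the image; blue_gain > 1 cools it."""
    out = rgb.astype("float32")
    out[..., 0] *= red_gain   # red channel
    out[..., 2] *= blue_gain  # blue channel
    return np.clip(out, 0, 255).astype("uint8")

# Example on a flat gray image (all channels = 100):
img = np.full((2, 2, 3), 100, dtype=np.uint8)
warm = adjust_temperature(img, red_gain=1.2)   # warmer mood
cool = adjust_temperature(img, blue_gain=1.2)  # cooler mood
```

Doing the math in float and clipping at the end avoids the wrap-around you would get from multiplying uint8 values directly.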
I had so much fun taking on this project and learning more about digital image representation! I hope you enjoyed reading these findings. Please let me know if you have any questions :)