Computer Vision Nanodegree

Project: Image Captioning


In this notebook, you will train your CNN-RNN model.

You are welcome and encouraged to try out many different architectures and hyperparameters when searching for a good model.

This does have the potential to make the project quite messy! Before submitting your project, make sure that you clean up:

  • the code you write in this notebook. The notebook should describe how to train a single CNN-RNN architecture, corresponding to your final choice of hyperparameters. You should structure the notebook so that the reviewer can replicate your results by running the code in this notebook.
  • the output of the code cell in Step 2. The output should show the results obtained when training the model from scratch.

This notebook will be graded.

Feel free to use the links below to navigate the notebook:

Step 1: Training Setup

In this step of the notebook, you will customize the training of your CNN-RNN model by specifying hyperparameters and setting other options that are important to the training procedure. The values you set now will be used when training your model in Step 2 below.

You should only amend blocks of code that are preceded by a TODO statement. Any code blocks that are not preceded by a TODO statement should not be modified.

Task #1

Begin by setting the following variables:

  • batch_size - the batch size of each training batch. It is the number of image-caption pairs used to amend the model weights in each training step.
  • vocab_threshold - the minimum word count threshold. Note that a larger threshold will result in a smaller vocabulary, whereas a smaller threshold will include rarer words and result in a larger vocabulary.
  • vocab_from_file - a Boolean that decides whether to load the vocabulary from file.
  • embed_size - the dimensionality of the image and word embeddings.
  • hidden_size - the number of features in the hidden state of the RNN decoder.
  • num_epochs - the number of epochs to train the model. We recommend that you set num_epochs=3, but feel free to increase or decrease this number as you wish. This paper trained a captioning model on a single state-of-the-art GPU for 3 days, but you'll soon see that you can get reasonable results in a matter of a few hours! (But of course, if you want your model to compete with current research, you will have to train for much longer.)
  • save_every - determines how often to save the model weights. We recommend that you set save_every=1, to save the model weights after each epoch. This way, after the ith epoch, the encoder and decoder weights will be saved in the models/ folder as encoder-i.pkl and decoder-i.pkl, respectively.
  • print_every - determines how often to print the batch loss to the Jupyter notebook while training. Note that you will not observe a monotonic decrease in the loss function while training - this is perfectly fine and completely expected! You are encouraged to keep this at its default value of 100 to avoid clogging the notebook, but feel free to change it.
  • log_file - the name of the text file containing - for every step - how the loss and perplexity evolved during training.

If you're not sure where to begin to set some of the values above, you can peruse this paper and this paper for useful guidance! To avoid spending too long on this notebook, you are encouraged to consult these suggested research papers to obtain a strong initial guess for which hyperparameters are likely to work best. Then, train a single model, and proceed to the next notebook (3_Inference.ipynb). If you are unhappy with your performance, you can return to this notebook to tweak the hyperparameters (and/or the architecture in model.py) and re-train your model.

Question 1

Question: Describe your CNN-RNN architecture in detail. With this architecture in mind, how did you select the values of the variables in Task 1? If you consulted a research paper detailing a successful implementation of an image captioning model, please provide the reference.

Answer:

  • The encoder consists of a pretrained ResNet model, with its final dense (linear) layer replaced by a new linear layer sized to match what the decoder expects.
  • The decoder consists of a word embedding layer, a single-layer Long Short-Term Memory (LSTM), and a final dense (linear) layer that produces scores over the vocabulary.
  • The encoder produces the image context vector, which the decoder then uses to generate the caption.
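For reference, here is a minimal sketch of this encoder/decoder pairing. It assumes a ResNet-50 backbone and the standard pattern described above; the actual definitions live in model.py and may differ in detail.

import torch
import torch.nn as nn
import torchvision.models as models

class EncoderCNN(nn.Module):
    def __init__(self, embed_size):
        super(EncoderCNN, self).__init__()
        resnet = models.resnet50(pretrained=True)
        for param in resnet.parameters():            # keep the pretrained backbone fixed
            param.requires_grad_(False)
        modules = list(resnet.children())[:-1]       # drop ResNet's final fc layer
        self.resnet = nn.Sequential(*modules)
        self.embed = nn.Linear(resnet.fc.in_features, embed_size)  # new trainable layer

    def forward(self, images):
        features = self.resnet(images)
        features = features.view(features.size(0), -1)
        return self.embed(features)                  # image "context" vector

class DecoderRNN(nn.Module):
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers=1):
        super(DecoderRNN, self).__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.linear = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # Teacher forcing: feed the image feature, then the caption tokens (minus <end>).
        embeddings = self.embed(captions[:, :-1])
        inputs = torch.cat((features.unsqueeze(1), embeddings), dim=1)
        hiddens, _ = self.lstm(inputs)
        return self.linear(hiddens)                  # vocabulary scores at each time step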

(Optional) Task #2

Note that we have provided a recommended image transform transform_train for pre-processing the training images, but you are welcome (and encouraged!) to modify it as you wish. When modifying this transform, keep in mind that:

  • the images in the dataset have varying heights and widths, and
  • if using a pre-trained model, you must perform the corresponding appropriate normalization.

Question 2

Question: How did you select the transform in transform_train? If you left the transform at its provided value, why do you think that it is a good choice for your CNN architecture?

Answer: I left the transform at its provided value. I think that RandomCrop and RandomHorizontalFlip provide sufficient image augmentation, and the normalization values are the same as those specified for the pretrained ResNet. That is why I left the transform as it is.

Task #3

Next, you will specify a Python list containing the learnable parameters of the model. For instance, if you decide to make all weights in the decoder trainable, but only want to train the weights in the embedding layer of the encoder, then you should set params to something like:

params = list(decoder.parameters()) + list(encoder.embed.parameters())

Question 3

Question: How did you select the trainable parameters of your architecture? Why do you think this is a good choice?

Answer:

  • Batch size: I set the batch size to 128. Since I am training the model on Udacity's VM, I judged that the VM could handle that much memory, and a larger batch lets the model train faster.
  • vocab_threshold: Based on the previous notebook and some training examples, I thought 5 would be a good choice for the vocabulary threshold.
  • embed_size: I set embed_size to 512. I think a context vector of length 512 is sufficient for good predictions.
  • hidden_size: I also set hidden_size to 512. The more features in the hidden state feeding the final linear layer, the more complex the patterns the decoder can capture, so the results should be more accurate.
  • Trainable parameters: As shown in the code below, I train all of the decoder's parameters but only the new embedding (linear) layer of the encoder, since the ResNet backbone is already pretrained and its weights do not need to be updated.

Task #4

Finally, you will select an optimizer.

Question 4

Question: How did you select the optimizer used to train your model?

Answer: For optimization, I use the Adam optimizer. To the best of my knowledge, Adam works well for training CNN-RNN models because of its adaptive learning-rate method, so for large models it tends to train better and more quickly than other optimizers such as SGD or RMSProp.

In [1]:
import torch
import torch.nn as nn
from torchvision import transforms
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from model import EncoderCNN, DecoderRNN
import math


## TODO #1: Select appropriate values for the Python variables below.
batch_size = 128          # batch size
vocab_threshold = 5        # minimum word count threshold
vocab_from_file = True    # if True, load existing vocab file
embed_size = 512           # dimensionality of image and word embeddings
hidden_size = 512          # number of features in hidden state of the RNN decoder
num_epochs = 3             # number of training epochs
save_every = 1             # determines frequency of saving model weights
print_every = 100          # determines window for printing average loss
log_file = 'training_log.txt'       # name of file with saved training loss and perplexity

# (Optional) TODO #2: Amend the image transform below.
transform_train = transforms.Compose([ 
    transforms.Resize(256),                          # smaller edge of image resized to 256
    transforms.RandomCrop(224),                      # get 224x224 crop from random location
    transforms.RandomHorizontalFlip(),               # horizontally flip image with probability=0.5
    transforms.ToTensor(),                           # convert the PIL Image to a tensor
    transforms.Normalize((0.485, 0.456, 0.406),      # normalize image for pre-trained model
                         (0.229, 0.224, 0.225))])

# Build data loader.
data_loader = get_loader(transform=transform_train,
                         mode='train',
                         batch_size=batch_size,
                         vocab_threshold=vocab_threshold,
                         vocab_from_file=vocab_from_file)

# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)

# Initialize the encoder and decoder. 
encoder = EncoderCNN(embed_size)
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)

# Move models to GPU if CUDA is available. 
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
encoder.to(device)
decoder.to(device)

# Define the loss function. 
criterion = nn.CrossEntropyLoss().cuda() if torch.cuda.is_available() else nn.CrossEntropyLoss()

# TODO #3: Specify the learnable parameters of the model.
params = list(decoder.parameters()) + list(encoder.embed.parameters()) 

# TODO #4: Define the optimizer.
optimizer = torch.optim.Adam(params=params, lr = 0.001)

# Set the total number of training steps per epoch.
total_step = math.ceil(len(data_loader.dataset.caption_lengths) / data_loader.batch_sampler.batch_size)
Vocabulary successfully loaded from vocab.pkl file!
loading annotations into memory...
Done (t=0.91s)
creating index...
  0%|          | 772/414113 [00:00<01:54, 3616.72it/s]
index created!
Obtaining caption lengths...
100%|██████████| 414113/414113 [01:34<00:00, 4396.91it/s]
In [2]:
import os
encoder_file = 'encoder-4.pkl'
decoder_file = 'decoder-4.pkl'
optim_file = 'optim-4.pkl'

# Load pre-trained weights before resuming training.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
optimizer.load_state_dict(torch.load(os.path.join('./models', optim_file)))

Step 2: Train your Model

Once you have executed the code cell in Step 1, the training procedure below should run without issue.

It is completely fine to leave the code cell below as-is without modifications to train your model. However, if you would like to modify the code used to train the model below, you must ensure that your changes are easily parsed by your reviewer. In other words, make sure to provide appropriate comments to describe how your code works!

You may find it useful to load saved weights to resume training. In that case, note the names of the files containing the encoder and decoder weights that you'd like to load (encoder_file and decoder_file). Then you can load the weights by using the lines below:

# Load pre-trained weights before resuming training.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))

While trying out parameters, make sure to take extensive notes and record the settings that you used in your various training runs. In particular, you don't want to encounter a situation where you've trained a model for several hours but can't remember what settings you used :).
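One lightweight way to do this is to dump the settings of each run to a small JSON file next to the training log, as in the hypothetical helper below (the file name run_settings.json is just an example, not part of the starter code):

import json

# Record the settings of this run alongside the training log.
run_settings = {
    'batch_size': batch_size,
    'vocab_threshold': vocab_threshold,
    'embed_size': embed_size,
    'hidden_size': hidden_size,
    'num_epochs': num_epochs,
    'optimizer': 'Adam',
    'learning_rate': 0.001,
    'log_file': log_file,
}
with open('run_settings.json', 'w') as settings_file:
    json.dump(run_settings, settings_file, indent=2)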

A Note on Tuning Hyperparameters

To figure out how well your model is doing, you can look at how the training loss and perplexity evolve during training - and for the purposes of this project, you are encouraged to amend the hyperparameters based on this information.

However, this will not tell you if your model is overfitting to the training data, and, unfortunately, overfitting is a problem that is commonly encountered when training image captioning models.

For this project, you need not worry about overfitting. This project does not have strict requirements regarding the performance of your model, and you just need to demonstrate that your model has learned something when you generate captions on the test data. For now, we strongly encourage you to train your model for the suggested 3 epochs without worrying about performance; then, you should immediately transition to the next notebook in the sequence (3_Inference.ipynb) to see how your model performs on the test data. If your model needs to be changed, you can come back to this notebook, amend hyperparameters (if necessary), and re-train the model.

That said, if you would like to go above and beyond in this project, you can read about some approaches to minimizing overfitting in section 4.3.1 of this paper. In the next (optional) step of this notebook, we provide some guidance for assessing the performance on the validation dataset.

In [3]:
import torch.utils.data as data
import numpy as np
import os
import requests
import time

# Resume from a previous run: epochs 1-4 are already trained, so this run trains epoch 5 only.
previous_epoch = 4
num_epochs = 5


# Open the training log file.
f = open(log_file, 'w')

old_time = time.time()
response = requests.request("GET", 
                            "http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token", 
                            headers={"Metadata-Flavor":"Google"})

for epoch in range(1+previous_epoch, num_epochs+1):
    
    for i_step in range(1, total_step+1):
        
        if time.time() - old_time > 60:
            old_time = time.time()
            requests.request("POST", 
                             "https://nebula.udacity.com/api/v1/remote/keep-alive", 
                             headers={'Authorization': "STAR " + response.text})
        
        # Randomly sample a caption length, and sample indices with that length.
        indices = data_loader.dataset.get_train_indices()
        # Create and assign a batch sampler to retrieve a batch with the sampled indices.
        new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
        data_loader.batch_sampler.sampler = new_sampler
        
        # Obtain the batch.
        images, captions = next(iter(data_loader))

        # Move batch of images and captions to GPU if CUDA is available.
        images = images.to(device)
        captions = captions.to(device)
        
        # Zero the gradients.
        decoder.zero_grad()
        encoder.zero_grad()
        
        # Pass the inputs through the CNN-RNN model.
        features = encoder(images)
        outputs = decoder(features, captions)
        
        # Calculate the batch loss.
        loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))
        
        # Backward pass.
        loss.backward()
        
        # Update the parameters in the optimizer.
        optimizer.step()
            
        # Get training statistics.
        stats = 'Epoch [%d/%d], Step [%d/%d], Loss: %.4f, Perplexity: %5.4f' % (epoch, num_epochs, i_step, total_step, loss.item(), np.exp(loss.item()))
        
        # Print training statistics (on same line).
        print('\r' + stats, end="")
        sys.stdout.flush()
        
        # Print training statistics to file.
        f.write(stats + '\n')
        f.flush()
        
        # Print training statistics (on different line).
        if i_step % print_every == 0:
            print('\r' + stats)
            
    # Save the weights.
    if epoch % save_every == 0:
        torch.save(decoder.state_dict(), os.path.join('./models', 'decoder-%d.pkl' % epoch))
        torch.save(encoder.state_dict(), os.path.join('./models', 'encoder-%d.pkl' % epoch))
        # saving the optimizer for future use
        torch.save(optimizer.state_dict(), os.path.join('./models', 'optim-%d.pkl' % epoch))

# Close the training log file.
f.close()
Epoch [5/5], Step [100/3236], Loss: 1.9175, Perplexity: 6.8037
Epoch [5/5], Step [200/3236], Loss: 1.8869, Perplexity: 6.59908
Epoch [5/5], Step [300/3236], Loss: 1.9144, Perplexity: 6.78322
Epoch [5/5], Step [400/3236], Loss: 1.7649, Perplexity: 5.84111
Epoch [5/5], Step [500/3236], Loss: 2.0280, Perplexity: 7.598939
Epoch [5/5], Step [600/3236], Loss: 1.9238, Perplexity: 6.84693
Epoch [5/5], Step [700/3236], Loss: 1.8280, Perplexity: 6.22158
Epoch [5/5], Step [800/3236], Loss: 2.7296, Perplexity: 15.3260
Epoch [5/5], Step [900/3236], Loss: 1.8622, Perplexity: 6.43770
Epoch [5/5], Step [1000/3236], Loss: 1.8301, Perplexity: 6.2344
Epoch [5/5], Step [1100/3236], Loss: 1.8939, Perplexity: 6.64526
Epoch [5/5], Step [1200/3236], Loss: 1.8077, Perplexity: 6.09668
Epoch [5/5], Step [1300/3236], Loss: 1.7351, Perplexity: 5.66947
Epoch [5/5], Step [1400/3236], Loss: 2.1690, Perplexity: 8.74993
Epoch [5/5], Step [1500/3236], Loss: 1.6517, Perplexity: 5.21595
Epoch [5/5], Step [1600/3236], Loss: 1.9635, Perplexity: 7.12422
Epoch [5/5], Step [1700/3236], Loss: 1.8442, Perplexity: 6.32334
Epoch [5/5], Step [1800/3236], Loss: 1.7295, Perplexity: 5.63811
Epoch [5/5], Step [1900/3236], Loss: 1.8184, Perplexity: 6.16213
Epoch [5/5], Step [2000/3236], Loss: 1.8993, Perplexity: 6.68157
Epoch [5/5], Step [2100/3236], Loss: 1.9258, Perplexity: 6.86080
Epoch [5/5], Step [2200/3236], Loss: 1.8723, Perplexity: 6.50303
Epoch [5/5], Step [2300/3236], Loss: 1.8289, Perplexity: 6.22722
Epoch [5/5], Step [2400/3236], Loss: 2.2848, Perplexity: 9.82331
Epoch [5/5], Step [2500/3236], Loss: 1.7412, Perplexity: 5.70409
Epoch [5/5], Step [2600/3236], Loss: 1.8765, Perplexity: 6.53046
Epoch [5/5], Step [2700/3236], Loss: 2.1098, Perplexity: 8.24642
Epoch [5/5], Step [2800/3236], Loss: 1.8094, Perplexity: 6.10699
Epoch [5/5], Step [2900/3236], Loss: 2.1542, Perplexity: 8.62083
Epoch [5/5], Step [3000/3236], Loss: 1.8256, Perplexity: 6.20669
Epoch [5/5], Step [3100/3236], Loss: 1.6877, Perplexity: 5.40711
Epoch [5/5], Step [3200/3236], Loss: 1.8843, Perplexity: 6.58205
Epoch [5/5], Step [3236/3236], Loss: 1.8023, Perplexity: 6.06347

Step 3: (Optional) Validate your Model

To assess potential overfitting, one approach is to assess performance on a validation set. If you decide to do this optional task, you are required to first complete all of the steps in the next notebook in the sequence (3_Inference.ipynb); as part of that notebook, you will write and test code (specifically, the sample method in the DecoderRNN class) that uses your RNN decoder to generate captions. That code will prove incredibly useful here.

If you decide to validate your model, please do not edit the data loader in data_loader.py. Instead, create a new file named data_loader_val.py containing the code for obtaining the data loader for the validation data. You can access:

  • the validation images at filepath '/opt/cocoapi/images/val2014/', and
  • the validation image caption annotation file at filepath '/opt/cocoapi/annotations/captions_val2014.json'.

The suggested approach to validating your model involves creating a JSON file such as this one containing your model's predicted captions for the validation images. Then, you can write your own script or use one that you find online to calculate the BLEU score of your model. You can read more about the BLEU score, along with other evaluation metrics (such as METEOR and CIDEr), in section 4.1 of this paper. For more information about how to use the annotation file, check out the website for the COCO dataset.
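As a starting point, here is a minimal sketch of that approach. For brevity it queries the COCO validation annotations directly rather than through a data_loader_val.py, and it assumes that the sample() method you write in 3_Inference.ipynb returns a list of word indices and that the vocabulary exposes an idx2word mapping; adjust it to match your own implementation.

import json
import os
import torch
from nltk.translate.bleu_score import corpus_bleu
from PIL import Image
from pycocotools.coco import COCO
from torchvision import transforms

# Deterministic transform for validation: no random crop or flip.
transform_val = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406),
                         (0.229, 0.224, 0.225))])

val_img_dir = '/opt/cocoapi/images/val2014/'
coco_val = COCO('/opt/cocoapi/annotations/captions_val2014.json')

vocab = data_loader.dataset.vocab
encoder.eval()
decoder.eval()

def clean_sentence(word_ids):
    # Convert word indices to a caption string, dropping special tokens.
    words = [vocab.idx2word[i] for i in word_ids]
    return ' '.join(w for w in words if w not in ('<start>', '<end>', '<pad>'))

predictions, references = [], []
with torch.no_grad():
    for img_id in coco_val.getImgIds()[:500]:        # a small subset is enough for a sanity check
        img_info = coco_val.loadImgs(img_id)[0]
        image = Image.open(os.path.join(val_img_dir, img_info['file_name'])).convert('RGB')
        image = transform_val(image).unsqueeze(0).to(device)

        features = encoder(image).unsqueeze(1)       # reshape as expected by your sample() method
        caption = clean_sentence(decoder.sample(features))
        predictions.append({'image_id': img_id, 'caption': caption})

        # Ground-truth captions for this image, tokenized for BLEU.
        ann_ids = coco_val.getAnnIds(imgIds=img_id)
        references.append([ann['caption'].lower().split()
                           for ann in coco_val.loadAnns(ann_ids)])

# Save the predictions in the JSON format suggested above.
with open('captions_val2014_results.json', 'w') as results_file:
    json.dump(predictions, results_file)

# Corpus-level BLEU-4 score against all reference captions per image.
hypotheses = [p['caption'].split() for p in predictions]
print('BLEU-4:', corpus_bleu(references, hypotheses))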

In [ ]:
 
In [ ]:
# (Optional) TODO: Validate your model.