Abstract:
Inspired by recent work in language modeling, we investigate the effects of a set of regularization techniques on the performance of a recurrent neural network based image captioning model. Using these techniques, we achieve an improvement of 13 BLEU-4 points over a model trained without regularization. We show that our model does not suffer from loss-evaluation mismatch, and we connect model performance to dataset properties through experiments on the MSCOCO dataset. Further, we propose two applications of our image captioning model: a human-in-the-loop system and zero-shot object detection. The former improves the CIDEr score of our best model by a further 30 points using only the first two tokens of a reference sentence for each image. In the latter, we train our image captioning model as an object detector that classifies the objects in an image without localizing them. The main advantage of this detector is that it does not require object locations during training.