TY - JOUR
ID - 108484
TI - Learning Document Image Features With SqueezeNet Convolutional Neural Network
JO - International Journal of Engineering
JA - IJE
LA - en
SN - 1025-2495
AU - Hassanpour, M.
AU - Malek, H.
AD - Department of Computer Science Engineering, Shahid Beheshti University, Tehran, Iran
Y1 - 2020
PY - 2020
VL - 33
IS - 7
SP - 1201
EP - 1207
KW - SqueezeNet
KW - convolutional neural network
KW - Document image classification
DO - 10.5829/ije.2020.33.07a.05
N2 - The classification of document images into various classes is an important step towards building a modern digital library or office automation system. Convolutional Neural Network (CNN) classifiers trained with backpropagation are the current state-of-the-art models for this task. However, these classifiers have two major drawbacks: the huge computational power demanded for training, and their very large number of weights. Previous successful attempts at learning document image features have relied on training very large CNNs. SqueezeNet is a CNN architecture that achieves accuracies comparable to other state-of-the-art CNNs while containing up to 50 times fewer weights, but it has not previously been applied to document image classification tasks. In this research we take a novel approach to learning document image features by training a very small CNN such as SqueezeNet. We show that an ImageNet-pretrained SqueezeNet achieves an accuracy of approximately 75 percent over 10 classes on the Tobacco-3482 dataset, which is comparable to other state-of-the-art CNNs. We then visualize saliency maps of the gradient of our trained SqueezeNet's output with respect to its input, which show that the network learns meaningful features useful for document classification. Previous work in this field has placed no emphasis on visualizing the learned document features. The prominence of features such as handwritten text, document titles, text alignment, and tabular structures in the extracted saliency maps demonstrates that the network does not overfit to redundant representations of the rather small Tobacco-3482 dataset, which contains only 3482 document images over 10 classes.
UR - https://www.ije.ir/article_108484.html
L1 - https://www.ije.ir/article_108484_b61cf4598ecfd2239b0d1a641922f04d.pdf
ER -