A Voting-Based Method for Combining Deep Neural Network Outputs for Layout Analysis of Printed Documents

Document Type: Research Paper

Authors

1 Computer Engineer, Shahrood University of Technology, Shahrood, Iran

2 Department of Computer Engineering, Shahrood University of Technology, Shahrood, Iran

Abstract

In the last few decades, a great deal of research has been devoted to optical character recognition (OCR), the automatic conversion of text images into editable text by recognizing letters and words. Identifying the textual and non-textual regions of a document, known as document layout analysis, is one of the key steps in converting a document image to editable text and one of the most effective preprocessing stages in an OCR system. The lack of a common template across pages, complex backgrounds, various kinds of noise, low image quality, image rotation, and the presence of multiple text columns all hinder the correct recognition of text regions. Failure to recognize these regions, and consequently to locate text-line coordinates, disrupts every subsequent stage of an OCR system. This research proposes a new method for recognizing the textual regions of an image: several methods are applied and their outputs are combined through a voting scheme to extract the textual regions. The proposed method was trained and tested on a dataset of more than 950 images and achieved 97.94% accuracy. The dataset presented in this article is openly available.
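As an illustration only (the abstract does not specify the paper's exact combination rule), the sketch below shows one common way such a voting scheme can be realized: pixel-wise majority voting over binary text/non-text masks produced by several models. The function name, the NumPy dependency, and the 0/1 mask representation are assumptions for this sketch, not the authors' implementation.

    import numpy as np

    def vote_text_mask(masks, threshold=0.5):
        # masks: list of HxW arrays with 1 = text pixel, 0 = non-text,
        # one mask per model (hypothetical representation).
        stacked = np.stack(masks, axis=0).astype(float)  # (n_models, H, W)
        votes = stacked.mean(axis=0)                     # fraction of models voting "text"
        return (votes >= threshold).astype(np.uint8)     # majority decision per pixel

    # Usage with three dummy model outputs on a 4x4 image:
    rng = np.random.default_rng(0)
    masks = [rng.integers(0, 2, size=(4, 4)) for _ in range(3)]
    combined = vote_text_mask(masks)

With threshold=0.5 this is simple majority voting; raising the threshold makes the combined mask more conservative, keeping only pixels that most models agree are text.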

Keywords