Euclidean Distance Based Classifier for Recognition and Generating Kannada Text Description from Live Sign Language Video

— Sign language recognition has emerged as one of the vital areas of research in Computer Vision. The problem faced by researchers is that instances of signs vary with motion and appearance. Thus, in this paper a novel approach for recognizing various alphabets of Kannada Sign Language is proposed, in which continuous video sequences of the signs are considered. The system comprises three stages: a preprocessing stage, feature extraction and classification. The preprocessing stage includes skin filtering and histogram matching. Eigen values and Eigen vectors were considered in the feature extraction stage, and finally an Eigen value weighted Euclidean distance is employed to recognize the sign. The system deals with bare hands, thus allowing the user to interact with it in a natural manner. We have considered different alphabets in the video sequences and achieved a success rate of 95.25%.


Introduction
A sign language is a language in which communication between individuals is carried out by visually transmitting sign patterns to convey meaning. It serves as a replacement for speech for hearing and speech impaired people, and for this reason it has long attracted many researchers to this field. Several researchers have worked on different sign languages such as American Sign Language, British Sign Language, Taiwanese Sign Language, etc., but little progress has been made on Kannada Sign Language. Hearing impaired individuals become neglected by society because normal people rarely try to learn ISL or to interact with them. This becomes a curse for them, and so they mostly remain uneducated and isolated. Therefore sign language recognition was introduced, which is important not only from an engineering point of view but also for its impact on society.
The paper aims to bridge the gap between us and the hearing impaired people by introducing an inexpensive Sign Language Recognition technique which enables the user to understand the meaning of a sign without the help of any expert translator. Computers are used in the communication path, helping in capturing the signs, processing them and finally recognizing the sign. Several techniques have been used by different researchers for recognizing sign languages and other hand gestures. Some researchers worked with static hand gestures, whereas others worked with video and in real time. Singha, J. and Das, K. [1] worked with static images; in their paper the Karhunen-Loeve transform was used for recognition of different signs, but it was restricted to gestures of only a single hand. The accuracy rate obtained was 96%. Bhuyan achieved a success rate of 93% in a paper where homogeneous Texture Descriptors were used to compute the flexed positions of fingers, and abduction angle variations were also considered as features. Bhuyan, M. et al. [2] used the CamShift algorithm for tracking the hand; Hausdorff Distance and Fourier Descriptors were considered as features, and recognition was achieved using a Genetic Algorithm. Admasu, Y. F. and Raimond, K. [3] extracted features using Gabor filters and PCA, and an ANN was used for recognition of Ethiopian sign language with a high success rate of 98.5%. In [11], Kannada sign language was recognized using an Eigen value weighted Euclidean distance based classifier with an accuracy rate of 97%, which removed the difficulty faced with gestures using both hands.
Many research works [4,5] have been done with video and in real time. Chou used HMM for recognition of hand gestures involving both hands with an accuracy rate of 94%. Neural Network based features and Hidden Markov Models were employed in [6] for recognizing various hand gestures in video. Starner, T. and Pentland, A. [7] used Hidden Markov Models for recognition of American Sign Language and achieved a success rate of 99%, but their work was restricted to colored gloves. Paulraj, M. P. et al. [8] used skin filtering and moment invariant based features along with an ANN for recognition of different gestures with a success rate of 92.85%. Works have also been done to recognize Taiwanese Sign Language. Liang, R. H. and Ouhyoung, M. [9] used Hidden Markov Models in real time; however, their work was restricted to the use of data gloves and recognition of single-hand gestures with a low accuracy rate of 84%. In Tsai, B. L. and Huang, C. L. [10], the same issue of using colored gloves was present, but both static and dynamic hand gestures could be recognized using Support Vector Machines and Hidden Markov Models.
Thus we propose a special-purpose image processing algorithm based on Eigenvectors to recognize various signs of Kannada Sign Language in live video sequences with high accuracy. The various difficulties faced by different researchers are minimized with our approach. A recognition rate of 96.25% was achieved. The experiment was carried out with bare hands, thus removing the difficulty faced when using gloves. In this paper we have extended our work [11] to video sequences.
Ramesh M. Kagalkar and Dr. S.V. Gumaste [12] review the extensive state of the art in automatic recognition of continuous signs from different languages, based on the data sets used, features computed, techniques applied, and recognition rates achieved. The paper finds that, in the past, most work was done on fingerspelled words and isolated sign recognition, but that recently there has been significant progress in the recognition of signs embedded in short continuous sentences. The authors also find that researchers are beginning to address the important problem of extracting and integrating the non-manual information present in face and head movement, and they present results from experiments integrating non-manual features.
Amit Kumar and Ramesh Kagalkar [13] construct a framework and techniques for the automatic recognition of Marathi sign language, providing teaching classes for the purpose of training the deaf sign user in Marathi. The system requires the hand to be properly aligned to the camera but does not require any wearable sensors. A substantial set of samples has been used in the system to recognize isolated words from the standard Marathi sign language, taken in front of the camera with different deaf sign users. The system recognizes some very basic elements of sign language and translates them to text and vice versa. It uses 46 Marathi letters for recognition.
Amit Kumar and Ramesh Kagalkar [14] address how a hand gesture recognition system can give deaf persons an opportunity to communicate with normal people without the need for an interpreter or intermediary. The work builds systems and methods for the automatic recognition of Marathi sign language, through which teaching classes are provided for the purpose of training the deaf sign user in Marathi. The system requires the hand to be properly aligned to the camera but does not need any special color markers, gloves or wearable sensors. A large set of samples has been used in the proposed system to recognize isolated words from the standard Marathi sign language, taken in front of the camera by different deaf sign users. We intend to recognize some very basic elements of sign language and to translate them to text.
Amit Kumar and Ramesh Kagalkar [15] present an automatic translation system for gestures of manual alphabets in Marathi sign language. It deals with images of bare hands, which allows the user to interact with the system in a natural way, and it gives deaf persons an opportunity to communicate with normal people without the need for an interpreter. The first step of this system is to create a database of Marathi sign language. Hand segmentation is the most crucial step in every hand gesture recognition system, since better segmented output leads to better recognition rates. The work therefore includes an efficient and robust hand segmentation and tracking algorithm. A large set of samples has been used to recognize 43 isolated words from the standard Marathi sign language.

System Overview

KSL Alphabets
The Kannada sign language was developed so that deaf individuals in society can interact with normal individuals without any difficulties [19].
Here in this paper, we have considered the alphabets of KSL that involve the use of either a single hand or both hands. A total of 554 alphabets were considered, as shown in Fig. 1. The Kannada script shares a large number of structural features with other Indian language scripts. The writing system of the Kannada script encompasses the principles governing phonetics, a syllabic writing system and a phonemic writing system (alphabet). The effective unit of writing Kannada is the orthographic syllable, consisting of a consonant and vowel (CV) core and, optionally, one or more preceding consonants, with a canonical structure of (((C)C)C)V. The orthographic syllable need not correspond exactly to a phonological syllable, especially when a consonant cluster is involved, but the writing system is built on phonological principles and tends to correspond quite closely to pronunciation [21]. The orthographic syllable is built up of alphabetic pieces, the actual letters of the Kannada script. These consist of distinct character types: consonant letters, independent vowels and the corresponding dependent vowel signs. In a text sequence, these characters are stored in logical phonetic order.
1. Vowels (Swaragalu/Swaras): Vowels are the independently existing letters, which are called Swaras.

The proposed system is shown in Fig. 4 and consists of three major stages: a preprocessing stage, which incorporates skin filtering and histogram matching to find the similarity between frames; a feature extraction stage, in which the Eigen values and Eigen vectors are considered as features; and finally an Eigen value weighted Euclidean distance based classification stage. The details of each stage are discussed in the following sections.

Preprocessing of Sign
Data Acquisition. The first step of our proposed system is capturing the video using a digital camera, where different alphabets were taken into consideration; 52 different alphabets were considered for testing, taken from twenty individuals. Some of the continuous video frames captured are given in Fig. 5.
Detection of Hand Gestures. Skin filtering was performed on the input video frames to detect hand gestures, so that the desired hand could be extracted from the background. Skin filtering is a technique used for separating the skin-colored regions from the non-skin-colored regions. The steps used in this skin filtering are shown in Fig. 6. First, the input frame was converted to the HSV color space. This step was taken because the HSV color space is less sensitive to illumination changes compared to RGB. The frame was then filtered and smoothed, and finally the biggest binary connected object was retained, so as to avoid considering skin-colored objects other than the hand. The resultant image is a binary image with hand regions in white and the background in black. The filtered hand is then obtained.
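The skin filtering step above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the hue/saturation thresholds (h_max, s_min, s_max) are assumed values that would need tuning for a real camera, and a simple BFS stands in for whatever connected-component routine was actually used.

```python
import numpy as np
from collections import deque

def rgb_to_hsv(img):
    """Vectorized RGB -> HSV; img is a float array in [0, 1] of shape (H, W, 3)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    diff = mx - mn
    safe = np.where(diff > 0, diff, 1.0)          # avoid division by zero
    h = np.zeros_like(mx)
    h = np.where(mx == r, ((g - b) / safe) % 6, h)
    h = np.where(mx == g, (b - r) / safe + 2, h)
    h = np.where(mx == b, (r - g) / safe + 4, h)
    h = (h / 6.0) % 1.0
    s = np.where(mx > 0, diff / np.where(mx > 0, mx, 1.0), 0.0)
    return h, s, mx

def largest_component(mask):
    """Keep only the biggest binary connected object (4-connectivity BFS)."""
    H, W = mask.shape
    seen = np.zeros((H, W), dtype=bool)
    best = []
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                comp, q = [], deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = np.zeros_like(mask)
    for y, x in best:
        out[y, x] = True
    return out

def skin_filter(img, h_max=0.1, s_min=0.2, s_max=0.7):
    """Binary mask with hand region white (True), background black (False).

    The HSV thresholds are illustrative assumptions, not the paper's values.
    """
    h, s, v = rgb_to_hsv(img)
    mask = (h <= h_max) & (s >= s_min) & (s <= s_max)
    return largest_component(mask)
```

In practice a smoothing step (e.g. morphological opening) would precede the connected-component pass, as the text describes.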
Histogram Matching. After extracting the skin-colored regions from the background, histogram matching is done in the next step. The following steps describe the process of histogram matching: Step 1: The histograms of all the frames of the video are obtained.
Step 2: The similarities of consecutive frames are checked by finding the difference of their histograms:

Difference(n) = |Hist(n) - Hist(n-1)|

where Hist represents the histogram and n represents the current frame.
Step 3: If the difference is found to be below a certain threshold, the frames are considered similar. This difference is checked over 'n' consecutive frames. We have chosen 'n' to be 17.
Step 4: If all the 'n' frames show similarity, the content is considered to be an unidentified sign, and the further steps of feature extraction and classification are carried out.
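Steps 1 to 4 can be sketched as below; this is a hedged reconstruction in which the histogram bin count and the similarity threshold are assumptions, not values taken from the paper:

```python
import numpy as np

def frame_histogram(gray, bins=32):
    """Normalized intensity histogram of one grayscale frame (2-D uint8 array)."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    return hist / hist.sum()  # normalize so frame size does not matter

def is_sign_present(frames, n=17, threshold=0.1):
    """True if the last n consecutive frames are mutually similar.

    Steps 2-4: each consecutive pair is similar when the summed absolute
    difference of their histograms is below `threshold`; if all n frames
    are similar, a (still unidentified) sign is assumed to be present.
    """
    if len(frames) < n:
        return False
    hists = [frame_histogram(f) for f in frames[-n:]]
    return all(np.abs(cur - prev).sum() < threshold
               for prev, cur in zip(hists, hists[1:]))
```

When `is_sign_present` fires, the last frame is handed on to the feature extraction stage described next.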
Feature Extraction. The feature extraction stage is important because the extracted features must be distinctive for each gesture or sign. Once the decision is made that a sign is present, the last frame is taken into consideration and features, namely the Eigen values and Eigen vectors, are extracted from that frame. The procedure to calculate the Eigen values and Eigen vectors is as follows: Step 1: Frame resizing: let us assume the last frame is 'X'. 'X' is resized to 70 by 70.
Step 2: Mean and covariance calculation: the mean 'M' and the covariance 'C' are calculated as given in [1].

M = E{X}                                (1)

C = E{(X - M)(X - M)^T}                 (2)

Step 3: The Eigen values and Eigen vectors are calculated from the above covariance 'C', and the Eigen vectors are arranged so that the Eigen values are in decreasing order.
Step 4: Information compression: out of the seventy Eigen vectors, only the first five principal vectors were considered, thus reducing the dimension of the matrix.
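Steps 1 to 4 of the feature extraction can be sketched as follows. This is a minimal interpretation under assumptions: the resize uses nearest-neighbour sampling as a stand-in for whatever interpolation the authors used, and the covariance is taken across the columns of the resized frame.

```python
import numpy as np

def resize(frame, size):
    """Nearest-neighbour resize of a 2-D array to size x size
    (a hypothetical stand-in for the paper's resizing method)."""
    H, W = frame.shape
    ys = np.arange(size) * H // size
    xs = np.arange(size) * W // size
    return frame[np.ix_(ys, xs)].astype(float)

def extract_features(frame, size=70, k=5):
    """Eigen features of the last frame of a detected sign.

    Step 1: resize the frame X to 70x70.
    Step 2: compute mean M and covariance C.
    Step 3: eigen-decompose C; sort by decreasing eigenvalue.
    Step 4: keep only the first k = 5 principal vectors.
    """
    X = resize(frame, size)
    M = X.mean(axis=0)                 # M = E{X}
    C = np.cov(X, rowvar=False)        # covariance 'C' (size x size)
    vals, vecs = np.linalg.eigh(C)     # eigh returns ascending order
    order = np.argsort(vals)[::-1]     # rearrange to decreasing eigenvalues
    vals, vecs = vals[order], vecs[:, order]
    return vals[:k], vecs[:, :k]
```

The returned pair (five eigenvalues, five 70-dimensional eigenvectors) is what the classification stage compares against the database entries.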
Classification. After the features, i.e. the Eigen values and Eigen vectors, are extracted from the last frame, the next stage is to compare them with the features of the signs already present in the database for classification. This was achieved using the Eigen value weighted Euclidean distance based classification technique. The steps of our classification technique are described as follows: Step 1: Calculation of Euclidean distance: the Euclidean distance (ED) was computed between the Eigen vectors calculated from the test frame of the video and the Eigen vectors of the database images.

ED = sqrt( sum_i (VT_i - VD_i)^2 )      (4)

where VT is the Eigen vector of the test frame and VD is the Eigen vector of the database image.
Step 2: Calculation of the Eigen value difference: the difference between the Eigen values of the database images and the Eigen values of the current video frame was computed.
Step 3: The above difference was then multiplied by the Euclidean distance obtained.
Step 4: After the above operation was carried out, the results obtained for each image were added up. After the addition, the minimum over all database images was found; the minimum represented the recognized image.
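The four classification steps can be sketched as below; this assumes the database stores, per sign label, the same (eigenvalues, eigenvectors) pair produced by the feature extraction stage, and the exact weighting formula is a reconstruction of the text rather than the authors' code:

```python
import numpy as np

def classify(test_vals, test_vecs, database):
    """Eigenvalue-weighted Euclidean distance classifier.

    database maps sign label -> (eigenvalues, eigenvectors). For each
    entry, the Euclidean distance between corresponding eigenvectors
    (step 1) is weighted by the eigenvalue difference (steps 2-3), the
    weighted distances are summed (step 4), and the label with the
    minimum score is returned as the recognized sign.
    """
    best_label, best_score = None, np.inf
    for label, (db_vals, db_vecs) in database.items():
        score = 0.0
        for i in range(len(test_vals)):
            ed = np.linalg.norm(test_vecs[:, i] - db_vecs[:, i])  # step 1
            weight = abs(test_vals[i] - db_vals[i])               # step 2
            score += weight * ed                                  # steps 3-4
        if score < best_score:
            best_label, best_score = label, score
    return best_label
```

One caveat worth noting: eigenvectors are only defined up to sign, so a practical implementation would normalize their orientation before taking distances.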

Experimental Results and Analysis
The Kannada sign language recognition approach was implemented using Java on an Intel® Pentium® B950 @ 2.10 GHz machine with Windows 7 Home Basic (64-bit), 4 GB RAM and a web cam with a resolution of 320x240.

Data set and Parameters Considered
The data set used for training the recognition system consisted of 24 signs of KSL from 20 individuals, so a total of 480 images was stored in the database. We tested our system with 20 videos and achieved good success. One parameter was considered in the system, namely the threshold 'n', which is the number of frames to check for similarity in order to determine whether a sign was present or not. Table 1 describes one of the video frames and the results obtained using the Eigen value weighted Euclidean distance based classification technique for a few images. A similar procedure is carried out for the other video frames. The overall recognition rate was calculated and found to be 96.25%.

Conclusion and Future Work
A fast, novel and robust system was proposed for recognition of different alphabets of Kannada Sign Language in video sequences. Skin filtering was used for detection of hands, Eigen vectors and Eigen values were considered as the features, and finally effective classification was achieved using an Eigen value weighted Euclidean distance based classifier. Compared to other related works, our system achieves good accuracy, uses bare hands, recognizes both single-hand and two-hand gestures, and works with video. Table 3 compares our work with the other related works. We have extended our work from static image recognition of ISL to live video recognition. In future, we will try to extend our work to real time with better accuracy, and attempts will be made to extend the work towards more words and sentences.