Sign language paves the way for speech-impaired people to communicate with each other. However, only those who have undergone special training can understand it, yet speech-impaired people need to communicate with the hearing population on a daily basis. To bridge this communication gap, our project aims to develop a system for recognizing sign language. Sign language uses intricate hand gestures to convey words and phrases. Our system translates these finger-spelled signs into voice in real time using flex-sensor-based gesture gloves. Gesture data from the sensors on the gloves is sent to a processing unit, where classification algorithms match the sensor input against pre-defined gestures to identify the word. Every sign gesture varies in time and space, and signing speed and hand position differ from person to person. The goal is to compensate for this variation in speed and position by training on a set of gesture samples for every sign.
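The matching step described above can be sketched as a minimal nearest-neighbour classifier over flex-sensor readings. This is an illustrative assumption, not the project's actual algorithm or data: the gesture names, template values, and five-sensor-per-glove layout below are all hypothetical.

```python
import math

# Hypothetical gesture templates: mean flex-sensor readings per finger
# (0.0 = finger straight, 1.0 = fully bent), averaged over training
# samples for each sign. Values and sign names are illustrative only.
TEMPLATES = {
    "hello": [0.1, 0.1, 0.1, 0.1, 0.1],
    "yes":   [0.9, 0.9, 0.9, 0.9, 0.2],
    "no":    [0.2, 0.1, 0.9, 0.9, 0.9],
}

def classify(reading, templates=TEMPLATES):
    """Return the sign whose template is nearest to the given
    5-sensor reading, using Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda sign: dist(reading, templates[sign]))

# A noisy glove reading that should land closest to the "yes" template:
print(classify([0.85, 0.95, 0.88, 0.90, 0.25]))  # prints "yes"
```

Averaging several training samples into each template is one simple way to absorb the per-person variation in speed and hand position that the abstract mentions; a real system would likely add temporal alignment (e.g. dynamic time warping) for gestures that unfold over time.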