In the era of data explosion, speech emotion recognition carries crucial commercial significance. Emotion recognition in speech encompasses a gamut of techniques, from the mechanical recording of the audio signal to the complex modeling of extracted patterns. The most challenging part of this research area is to classify the emotion of speech purely on the basis of the physical characteristics of the audio signal, independent of the language spoken. This paper focuses on the predictive modeling of audio speech data: extracting the most viable feature set and deploying these features to predict the emotion of unknown speech data. We use two of the most widely used classifiers, a variant of CART and Naïve Bayes, to model the interplay of crucial features such as the Root Mean Square (RMS) energy, Zero Crossing Rate (ZCR), pitch, and brightness of the audio signal in order to determine the emotion of the speech. To carry out a comparative analysis of the proposed classifiers, a set of experiments on real speech data is conducted. The results clearly indicate that the decision-tree-based classifier performs well on accuracy, whereas Naïve Bayes performs fairly well on generality.
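
As a concrete illustration of such a pipeline, the sketch below extracts the four named features and compares a CART-style decision tree against Gaussian Naïve Bayes. This is a minimal sketch under stated assumptions, not the paper's implementation: the paper does not name its extraction toolchain (librosa and scikit-learn are used here for illustration), brightness is approximated by the spectral centroid, the Naïve Bayes variant is assumed Gaussian for continuous features, and `paths`/`labels` are hypothetical placeholders for a labelled emotional-speech corpus.

import numpy as np
import librosa
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(path):
    # Summarise one utterance as a 4-vector: [RMS, ZCR, pitch, brightness].
    y, sr = librosa.load(path, sr=None)
    rms = float(librosa.feature.rms(y=y).mean())
    zcr = float(librosa.feature.zero_crossing_rate(y).mean())
    # Frame-wise fundamental frequency via the YIN estimator, averaged.
    f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                     fmax=librosa.note_to_hz("C7"), sr=sr)
    pitch = float(np.nanmean(f0))
    # Brightness proxy: spectral centroid (an assumption; the paper does
    # not state its exact brightness definition).
    brightness = float(librosa.feature.spectral_centroid(y=y, sr=sr).mean())
    return [rms, zcr, pitch, brightness]

# `paths` and `labels` are hypothetical placeholders for a labelled corpus
# of emotional speech recordings.
X = np.array([extract_features(p) for p in paths])
y_emotion = np.array(labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y_emotion,
                                          test_size=0.3, random_state=0)

for name, clf in [("CART-style tree", DecisionTreeClassifier(random_state=0)),
                  ("Gaussian Naive Bayes", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))

A per-utterance mean of each frame-wise feature is the simplest possible summary; richer statistics (variance, range, contour slopes) are common in practice, but the four-value vector suffices to show how the two classifiers consume the same feature set.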