Facial expression plays an important role in face-to-face communication, as it is a form of body language that conveys additional information during interaction. Recent surveys have found that feature extraction methods directly influence facial expression recognition performance, and that accuracy can improve when irrelevant features are eliminated. Consequently, this thesis proposes a facial expression recognition procedure based on graph-based features and an artificial neural network; the procedure is divided into two phases. In the first phase, fourteen facial points are manually located to construct a graph whose edges connect those points, and the Euclidean distance of each edge is computed to form the feature vector used for training in the next phase. In the second phase, a Multilayer Perceptron (MLP) trained with the back-propagation learning algorithm recognizes six basic emotions from the corresponding feature vectors. To evaluate performance, the Cohn-Kanade AU-Coded facial expression database is applied to the recognition system under various feature types and training techniques. The experimental results show that the MLP with graph-based features, both without validation and with cross-validation, achieves the highest recognition rate (95.24%). These results illustrate that combining graph-based features with an artificial neural network is an effective approach to facial expression recognition.
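
The graph-based feature extraction described above can be sketched as follows. This is a minimal illustration only: the particular landmark coordinates, the edge list, and the helper name `graph_features` are hypothetical, since the abstract does not specify which of the fourteen points are connected.

```python
import math

def graph_features(points, edges):
    """Compute the Euclidean distance of each graph edge as a feature vector.

    points: list of (x, y) landmark coordinates
            (e.g. the fourteen manually located facial points).
    edges:  list of (i, j) index pairs defining the graph's edges.
    """
    return [math.dist(points[i], points[j]) for i, j in edges]

# Hypothetical example: four of the fourteen points and three illustrative edges.
pts = [(30, 40), (70, 40), (50, 60), (50, 80)]   # e.g. eye corners, nose tip, mouth
edges = [(0, 1), (0, 2), (2, 3)]
features = graph_features(pts, edges)
print([round(f, 2) for f in features])  # [40.0, 28.28, 20.0]
```

The resulting distance vector would then serve as the input to the MLP classifier in the second phase.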