The animation industry is growing rapidly, which increases the demand for better performance and for reductions in process time and cost. One of the most important processes is lip synchronization, which is generally performed during the animation development process. In this research, we consider the problem of generating lip movement for an animated talking character. We focus on reducing the cost and workload of the animation development process, and we apply the technique to Thai speech. The main idea is to extract and capture visemes from a video of a human talking together with the phonemic script of that video. The approach first separates the talking video into two parts, the speech and the frame sequence; it then combines the speech with the phonemic script to extract the time stamp of each phoneme using forced-alignment techniques. Next, we build a visyllable database by mapping the end time of each selected phoneme to an image and capturing the region of interest from that image. Finally, we generate a talking-head animation video by synchronizing the time stamp of each phoneme with the concatenated visemes. The result of this research is an animation model in which the talking character's lips move synchronously with the speech. The reported experiments indicate good accuracy of the synchronized lip movement with respect to the speech, compared with an artist-animated talking character.
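The synchronization step described above can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's implementation): given phoneme time stamps of the kind produced by forced alignment, and an assumed phoneme-to-viseme lookup table, it selects the viseme to display on each output video frame. The phoneme labels, viseme names, and frame rate are illustrative placeholders.

```python
FPS = 25  # assumed output frame rate

# Assumed phoneme-to-viseme table; labels are illustrative only.
PHONEME_TO_VISEME = {
    "m": "closed",
    "a": "open",
    "sil": "rest",
}

def visemes_per_frame(alignment, fps=FPS):
    """alignment: list of (phoneme, start_sec, end_sec) tuples,
    as produced by forced alignment, sorted by start time.
    Returns one viseme label per output video frame."""
    total = alignment[-1][2]          # utterance length from last end time
    n_frames = int(round(total * fps))
    frames = []
    for i in range(n_frames):
        t = i / fps                   # time of this frame
        label = "sil"                 # default when no interval matches
        for ph, start, end in alignment:
            if start <= t < end:      # phoneme active at this frame time
                label = ph
                break
        frames.append(PHONEME_TO_VISEME.get(label, "rest"))
    return frames

# Example: /m/ from 0.0-0.12 s, /a/ from 0.12-0.4 s at 25 fps
# yields 10 frames: 3 "closed" followed by 7 "open".
print(visemes_per_frame([("m", 0.0, 0.12), ("a", 0.12, 0.4)]))
```

A real system would replace the viseme labels with the captured mouth images from the visyllable database and concatenate them into the output video.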