AN LVQ-BASED TEMPORAL TRACKING FOR SEMI-AUTOMATIC VIDEO OBJECT SEGMENTATION

This paper presents a Learning Vector Quantization (LVQ)-based temporal tracking method for semi-automatic video object segmentation. A semantic video object is initialized with user assistance in a reference frame to give an initial classification of the video object and its background regions. LVQ training approximates the video object and background classes and uses them to segment the video object automatically in the following frames, thus performing temporal tracking. As LVQ training input, we sample each pixel of a video frame as a 5-dimensional vector combining the 2-dimensional pixel position (x, y) and the 3-dimensional HSV color space. This paper also reports experiments on several MPEG-4 standard test video sequences to evaluate the accuracy of the proposed method.


INTRODUCTION
Nowadays, extracting the shape information of semantic objects from video sequences is a key operation for multimedia content description, content-based representation (Sikora 1997), image retrieval, multimedia databases, movie manipulation, etc. An example of video object segmentation is its application in movie special effects, where seamless integration of natural video objects with synthetic elements (like cartoons) requires flexible video representation schemes. However, a video sequence does not provide the shape information of its semantic objects. Recent developments in video object segmentation lead to two types of algorithms: automatic segmentation (Guo and Kim 1999) and semi-automatic segmentation (Gu and Lee 1998; Castagno and Kunt 1998). Automatic segmentation tracks an object using invariant parameters such as color, pattern and motion. The main problem of this approach is the difficulty of automatically segmenting a semantically meaningful object. Tekalp (Bovik 1998) noted that, until now, there is no guarantee that the result of any automatic segmentation will be semantically meaningful, since a semantically meaningful object may have multiple colors, patterns and multiple motions.
There is still some possibility of achieving fully automatic motion segmentation, even though the accuracy will be limited. Therefore, semantically meaningful video object segmentation generally requires user interaction to define the object of interest in at least one key frame. Due to the ill-posed definition of a semantically meaningful object itself, semi-automatic segmentation methods that incorporate the user's interaction have become more popular. In semi-automatic algorithms, human assistance is required to identify the semantic objects of interest for the segmentation system. These object-of-interest regions are then tracked temporally in each of the following frames based on the segmentation result obtained from the previous frame. This paper employs a semi-automatic segmentation method, which incorporates human assistance to define the semantic video objects. Until now, only humans know the meaning of "semantic"; there is no computer program that can fully understand it. Therefore, a human can give the information about semantic object regions to the computer program, which then tracks the object regions over consecutive frames.

Temporal Tracking Method for Video Object Segmentation

Due to the strong association between consecutive frames, it is possible to track the similarity of some visual properties using a reference frame. The frames within a single shot usually share similar visual properties, which are often semantically meaningful. Similarities within consecutive frames can be defined in terms of region and object similarities. The comparison may be carried out on the entire frame, or limited to a certain object or region of interest in each frame. In recent years, most temporal tracking methods use color homogeneity to separate the multiple regions included in a single frame. An example of such color homogeneity is skin color (Wu 1998; Kjeldsen 1996). Other possible characteristics include object shapes such as humans or cars, single-pattern objects such as animal skin, road patterns, etc. On the other hand, as mentioned before, a semantic object may contain multiple regions with different colors, textures and motions. Therefore, the use of a single color or texture for object segmentation cannot lead to a satisfactory result. Other research using motion segmentation provides only a coarse mask of moving objects, where the object boundary is too rough to lead to an accurate extraction.

Object Segmentation by 5-Dimensional Feature Vectors

Some constraints of video object segmentation come from complex and cluttered backgrounds, lighting perspective, object deformation, color similarities between object and background, video sequence image noise, etc. Therefore, this paper proposes a pixel-wise object-based temporal tracking method aiming for pixel-level accuracy. Here, each pixel of a video frame is considered as a 5-dimensional data vector combining the pixel position coordinates and the HSV color space components. There are several reasons for using the combination of pixel position and color information. If the approach used pixel color information only, background complexity, image noise and color similarity between the object class and the background class would make it difficult to determine the pixels of the object-of-interest class. If the approach used pixel position only, the deformation of the object class would cause problems. Almost all semantic objects are non-rigid. A rigid object preserves its shape and changes its position only by translation or rotation; block matching techniques can track rigid objects. A non-rigid object, on the other hand, tends to change its outline whether it is moving or not. The 5-dimensional vector components have different coordinate spaces. As the geometry of the HSV color space is a hexcone (or cylinder), we need to convert it into a Cartesian coordinate space. The combination acts as one single vector

$$\mathbf{x} = (x,\ y,\ S\cos H,\ S\sin H,\ V)^T \qquad (1)$$

This vector should be normalized to prevent any one feature from dominating (Hariadi et al. 2002).
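As an illustration, the following sketch builds the vector of eq. (1) for one pixel and normalizes a set of such vectors. It assumes an RGB input frame; the function names, the use of Python's `colorsys`, and the zero-mean/unit-variance scaling are our assumptions, since the paper does not specify its implementation or its exact normalization.

```python
import numpy as np
import colorsys

def feature_vector(frame_rgb, x, y):
    """Build the 5-D feature vector (x, y, S*cos(H), S*sin(H), V) of eq. (1)
    for one pixel. frame_rgb is assumed to be an (H, W, 3) uint8 RGB array."""
    r, g, b = frame_rgb[y, x] / 255.0
    h, s, v = colorsys.rgb_to_hsv(r, g, b)   # h, s, v all in [0, 1]
    theta = 2.0 * np.pi * h                  # map hue onto the color circle
    return np.array([x, y, s * np.cos(theta), s * np.sin(theta), v])

def normalize_features(vectors):
    """Scale each component to zero mean / unit variance so that no single
    feature dominates the Euclidean distance (cf. Hariadi et al. 2002; the
    exact normalization used in the paper is not specified)."""
    mean = vectors.mean(axis=0)
    std = vectors.std(axis=0) + 1e-8         # avoid division by zero
    return (vectors - mean) / std, mean, std
```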

LEARNING VECTOR QUANTIZATION (LVQ)
LVQ was developed by Teuvo Kohonen and is closely related to SOM (Self-Organizing Feature Maps) and classic VQ (Vector Quantization) (Kohonen 2001). While SOM and classic VQ are unsupervised clustering and learning methods, LVQ is a supervised clustering and learning method. Unlike SOM, LVQ has no topological structure: it only provides the information of each neuron instead of preserving a topological ordering. The aim of LVQ is to define class decision regions, i.e. statistical pattern classification, in the input data space.
Classical VQ approximates the input data space by forming a quantized approximation to the input data vectors using a finite number of codebook vectors $\mathbf{m}_i$. Assume that vector $\mathbf{m}_c$ is the nearest codebook vector to input vector $\mathbf{x}$. To find the codebook vector $\mathbf{m}_c$ that approximates $\mathbf{x}$ in the input space, we can use a Euclidean distance measurement:

$$c = \arg\min_i \lVert \mathbf{x} - \mathbf{m}_i \rVert \qquad (2)$$

where $c$ is the index of the codebook vector $\mathbf{m}_i$ closest to the input vector $\mathbf{x}$. Since LVQ is meant for statistical pattern classification, it constructs the VQ codebook vectors by classifying them into classes or categories. This paper employs the basic LVQ1 algorithm:

$$\mathbf{m}_c(t+1) = \begin{cases} \mathbf{m}_c(t) + \alpha(t)\,[\mathbf{x}(t) - \mathbf{m}_c(t)] & \text{if } \mathbf{x} \text{ and } \mathbf{m}_c \text{ belong to the same class} \\ \mathbf{m}_c(t) - \alpha(t)\,[\mathbf{x}(t) - \mathbf{m}_c(t)] & \text{otherwise} \end{cases} \qquad (3)$$

with $\mathbf{m}_i(t+1) = \mathbf{m}_i(t)$ for $i \neq c$. That is, if the class label of the codebook vector $\mathbf{m}_c$ matches the class label of the training sample $\mathbf{x}$, the codebook vector moves towards $\mathbf{x}$; otherwise it moves away from the input sample. The other codebook vectors remain unchanged, and $0 < \alpha(t) < 1$ is the corresponding learning rate. The basic LVQ1 algorithm can be extended in such a way that an individual learning rate $\alpha_i(t)$ is assigned to each $\mathbf{m}_i$. The learning process from eq. (3) can then be expressed as

$$\mathbf{m}_c(t+1) = \mathbf{m}_c(t) + s(t)\,\alpha_c(t)\,[\mathbf{x}(t) - \mathbf{m}_c(t)] \qquad (4)$$

where $s(t) = +1$ if $\mathbf{x}$ is classified correctly and $s(t) = -1$ otherwise. The optimized learning rate of OLVQ1 (Kohonen 2001) is determined recursively by

$$\alpha_c(t) = \frac{\alpha_c(t-1)}{1 + s(t)\,\alpha_c(t-1)} \qquad (5)$$

In this paper we employ LVQ1 with this optimized learning rate (OLVQ1).
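The following is a minimal sketch of eqs. (2)-(5) in Python. The class name, the initial rate `alpha0`, and the clipping of the OLVQ1 rate below 1 are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

class LVQ1:
    """Minimal sketch of LVQ1 with the OLVQ1 learning-rate recursion."""

    def __init__(self, codebook, labels, alpha0=0.3):
        self.m = codebook.astype(float)              # (K, 5) codebook vectors
        self.labels = np.asarray(labels)             # (K,) class of each vector
        self.alpha = np.full(len(codebook), alpha0)  # per-vector rates (OLVQ1)

    def nearest(self, x):
        # Eq. (2): index of the codebook vector closest to x.
        return np.argmin(np.linalg.norm(self.m - x, axis=1))

    def train_step(self, x, x_label):
        c = self.nearest(x)
        s = 1.0 if self.labels[c] == x_label else -1.0
        # Eq. (5): OLVQ1 recursion, clipped so the rate stays below 1.
        self.alpha[c] = min(self.alpha[c] / (1.0 + s * self.alpha[c]), 0.99)
        # Eqs. (3)-(4): move the winner towards (or away from) the sample.
        self.m[c] += s * self.alpha[c] * (x - self.m[c])
```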

VIDEO OBJECT SEGMENTATION SYSTEM
This section describes how the video object segmentation system works. Fig. 1 shows the flowchart of the video object segmentation system.

Semantic User Interaction
The semi-automatic type of video object segmentation incorporates user interaction to generate the semantically meaningful object of interest. The temporal tracking of the semantic object in consecutive frames uses this object of interest as the reference. In this paper, we aim to use only the first frame as the reference frame. To create the segmentation of the object of interest, we use Adobe Photoshop 6 to cut out the object of interest and separate it from its background.

Codebook Vectors Initialization and Classification
Codebook vector initialization and classification employs the object of interest created with human assistance as the reference frame, followed by creating a window, called the data sample window, surrounding the object of interest. The data sample window is created by finding the maximum and minimum pixel positions (boundary) of the object-of-interest class in (x, y) coordinates and adding a certain margin to those positions. This window saves computation time and increases the density of the codebook vector quantization. Codebook vector initialization begins by randomly distributing the codebook vectors in the pixel position domain inside the data sample window. The color components of the codebook vectors are the (H, S, V) color information at their pixel positions. After that, the nearest-neighbor region of each codebook vector is created using eq. (2). The classification of each codebook vector depends on the class label of the majority of pixels inside its nearest-neighbor region: if the number of pixels with the object class label is greater than the number of pixels with the background class label, the codebook vector is classified as an object-class codebook vector, and vice versa. The algorithm is given in Fig. 2; a sketch of this step follows below.
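The sketch below assumes the frame has been flattened to per-pixel arrays in raster order (`features` of shape (N, 5), `positions` of shape (N, 2) holding (x, y)), and that the codebook count and margin, which the paper does not state, take illustrative values.

```python
import numpy as np

def init_codebook(features, positions, object_mask, n_codebooks=256,
                  margin=10, rng=np.random.default_rng(0)):
    """Build the data sample window around the user-supplied object mask,
    scatter codebook vectors inside it, and label each one by majority vote
    of the pixels in its nearest-neighbor region (eq. 2)."""
    ys, xs = np.nonzero(object_mask)
    # Data sample window: object bounding box plus a margin.
    x0, x1 = xs.min() - margin, xs.max() + margin
    y0, y1 = ys.min() - margin, ys.max() + margin
    inside = ((positions[:, 0] >= x0) & (positions[:, 0] <= x1) &
              (positions[:, 1] >= y0) & (positions[:, 1] <= y1))
    feats = features[inside]                     # pixels inside the window
    pix_labels = object_mask.ravel()[inside]     # 1 = object, 0 = background
    # Random pixels inside the window; each codebook vector takes both the
    # position and the color of its pixel, as the paper describes.
    idx = rng.choice(len(feats), size=n_codebooks, replace=False)
    codebook = feats[idx].copy()
    # Winner index for every in-window pixel (eq. 2), then majority vote.
    win = np.argmin(((feats[:, None, :] - codebook[None, :, :]) ** 2).sum(-1),
                    axis=1)
    labels = np.zeros(n_codebooks, dtype=int)
    for k in range(n_codebooks):
        members = pix_labels[win == k]
        labels[k] = 1 if members.size and members.mean() > 0.5 else 0
    return codebook, labels, (x0, x1, y0, y1)
```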

Temporal Object Tracking
The codebook vectors are trained using the LVQ1 algorithm (eq. (3)) with the OLVQ1 learning-rate optimizer (eq. (5)). The key idea is to adjust the codebook vectors' values such that they give the optimal class decision regions between the object-of-interest class and the background class. LVQ training begins by randomly selecting a set of sample input pixels inside the data sample window. Using eq. (2), the codebook vector closest to each sample input vector is found sequentially. Each winning codebook vector is updated relative to its nearest input vector using the LVQ1 algorithm (eq. (3)).
For each iteration, the corresponding learning rate α is updated using OLVQ1 (eq. (5)). The LVQ learning process updates the codebook vectors for the succeeding frames. It is difficult to find the optimal number of iterations for convergence (Kohonen 2001), so this implementation employs an experimentally determined number. The codebook vector training algorithm is given in Fig. 3; a sketch of the training loop follows below.
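A sketch of this loop, built on the `LVQ1` class above. Labels for the sampled pixels are taken from the previous frame's segmentation result, following the paper's statement that regions are tracked "based on the segmentation result obtained from its previous frame"; this labeling choice and the iteration count (which stands in for the paper's experimentally determined number) are assumptions of the sketch.

```python
import numpy as np

def train_on_frame(lvq, feats, pix_labels, n_iters=5000,
                   rng=np.random.default_rng(1)):
    """Retrain the codebook on pixels randomly sampled from the current
    frame's data sample window. feats: (N, 5) in-window feature vectors;
    pix_labels: (N,) labels from the previous frame's segmentation."""
    for _ in range(n_iters):
        i = rng.integers(len(feats))          # random sample pixel
        lvq.train_step(feats[i], pix_labels[i])
    return lvq
```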

Pixel-Wise Classification
Pixel-wise classification creates the final video object segmentation result. Classification is done by finding the nearest-neighbor regions of the object-class and background-class codebook vectors respectively. In other words, the object segmentation is created by computing the quantization region of each codebook vector and labeling every pixel with the class of its nearest codebook vector (Fig. 4). Pixels outside the data sample window are considered background-class pixels. A sketch of this step follows below.
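The sketch reuses the names of the earlier blocks; they are our assumptions, not the paper's code.

```python
import numpy as np

def classify_pixels(lvq, features, positions, window, frame_shape):
    """Label every pixel inside the data sample window with the class of
    its nearest codebook vector (eq. 2); pixels outside the window are
    background (0)."""
    h, w = frame_shape
    mask = np.zeros((h, w), dtype=np.uint8)
    x0, x1, y0, y1 = window
    inside = ((positions[:, 0] >= x0) & (positions[:, 0] <= x1) &
              (positions[:, 1] >= y0) & (positions[:, 1] <= y1))
    feats = features[inside]
    # Nearest codebook vector for every in-window pixel at once.
    d = ((feats[:, None, :] - lvq.m[None, :, :]) ** 2).sum(axis=-1)
    winners = np.argmin(d, axis=1)
    xy = positions[inside].astype(int)
    mask[xy[:, 1], xy[:, 0]] = lvq.labels[winners]   # 1 = object class
    return mask
```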

EXPERIMENT
The experiment demonstrates the implementation of video object segmentation on MPEG-4 standard test video sequences. We choose 100 frames of each video sequence, whose content has specific characteristics of semantic objects. We also test the segmentation program using only one reference frame (the 1st frame of the video sequence) and using multiple reference frames. The LVQ algorithm uses LVQ1 with OLVQ1 as the learning-rate optimizer, and the experiments are run with a common set of parameters. Until now, it has been very difficult to evaluate the accuracy of video object segmentation, since it can only be judged by human eyes. Nevertheless, an accuracy evaluation is required to test whether the algorithm works. Therefore, in this experiment, segmentation accuracy is evaluated by comparing the segmentation result of the proposed method to an object segmentation produced with human assistance.
We run the experiment in two ways: first with only a single reference frame, and second with multiple reference frames, where the segmentation system is given user assistance every 10 consecutive frames. The accuracy of the segmentation system differs for each MPEG-4 test video sequence. The error measure can be computed as sketched below.
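The paper does not give its exact error formula, so this pixel-wise disagreement ratio against the human-assisted reference is an assumption of the sketch.

```python
import numpy as np

def error_percentage(result_mask, reference_mask):
    """Percentage of pixels whose label disagrees with the human-assisted
    reference segmentation."""
    assert result_mask.shape == reference_mask.shape
    return 100.0 * np.mean(result_mask != reference_mask)
```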
For video sequences with a still background, like Foreman (Fig. 9), the video object is well segmented, and the difference in error percentage between single and multiple reference frames is not significant (Fig. 5). The same holds for the Coastguard video sequence, where the object is rigid and only the background is moving; see the result in Fig. 11 and the error percentage in Fig. 6. In the Mobile video sequence (Fig. 12), whose background is moving, some errors occur due to the very similar colors of the moving ball and the nearby calendar. The error percentage also increases in the last frames of the Mobile sequence due to occlusion (another object covering the moving ball); therefore the difference in error percentage between single and multiple reference frames is significant at the last frames (Fig. 7). Color similarity between object and background also occurs in Hall Monitor; see the segmentation result in Fig. 10 and the error percentage in Fig. 8.

CONCLUSION AND FUTURE PLANS
In this paper, a video object segmentation method based on Learning Vector Quantization (LVQ) is developed. The video object segmentation system uses a semi-automatic method; hence it requires human assistance to classify the object of interest. The data input vectors are generated from the pixel values of the video sequence frames: each pixel of a video frame is considered as a 5-dimensional vector whose components combine the pixel position in (x, y) coordinates and the HSV color space (H, S, V). The experimental results demonstrate satisfactory segmentation even when the background color is similar to the object of interest. Nevertheless, for some cases in the MPEG-4 test video sequences, the results are less satisfactory due to color similarity between object-of-interest and background pixels in very close positions.
To avoid misclassification of the object class due to color similarity and pattern similarity between objects, in future research we will improve the 5-dimensional combination of pixel position and color information by adding the motion vector as an additional vector component.