Document Type : Research Paper
Authors
^{1} Professor, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, India.
^{2} M.Tech. Scholar, Dept of ECE, Sreyas Institute of Engineering and Technology, Hyderabad, India.
^{3} Associate Professor, Faculty of Engineering & Computing Sciences, Teerthanker Mahaveer University, Moradabad, U.P, India.
^{4} Professor, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, India
Abstract
Keywords
Object detection is the task of identifying and locating all known objects in digital images and videos in the field of computer vision. Object tracking, on the other hand, monitors one or more moving objects in real time. Computer vision has many applications, e.g., human activity recognition, medical imaging, traffic monitoring, surveillance, and human-computer interaction. One such application is counting and monitoring human activities in shopping malls. The technique is also required in criminal investigations by agencies such as the FBI, CBI, and police to track a suspect. Initially, images/video are captured from the camera at regular intervals, converted into frames, and stored in the system using a one-bit compression method. Afterward, noise is detected in the video sequence by the frame differencing method (Alex & Wahi, 2014) (N. Alt et al., 2010). After removing the noise, a linear Kalman filter is used to predict the object from the input videos. The frame differencing method is followed by the background subtraction method. Many applications use the simplest implementation of background subtraction, because it can be implemented at low cost and without any a priori knowledge of the target objects, even though it has a few disadvantages (J. Cheng & S. Kang, 2007). The background subtraction method (Chien et al., 2013) faces many challenges, such as sudden illumination variations due to weather conditions, shadows, and camera shake, and it needs more time to detect the object from the video sequence (D. Ghai et al., 2020). The remarkable aspect of the linear Kalman filter algorithm is that it is linearization-based (K. Granstrom et al., 2016) and very effective at predicting the object.
A novel method is used for detecting and tracking the object in real time. In the proposed work, the moving object is detected from the difference between the current frame and a reference frame. First, the frames are collected from the background subtraction method, and the merge sort (MS) algorithm is used for both the sorting and merging processes. Second, a binary search (BS) algorithm is applied to detect the noise in the sorted frames. In this process, the mid-position of the sorted frames is located. In the best case, binary search identifies the noise at the mid-position of the frames; if it is not found there, the search continues in the left half and then the right half of the sorted frames, until the target noise is found. Finally, an extended Kalman filter (EKF) is used to predict the object's position and correct it. The proposed work is compared with traditional models, namely the Background Subtraction Frame Differencing (BSFD) method and the Binary Search (BS) algorithm, on various parameters, i.e., Absolute Error (AE) and Object Tracking Error (OTE).
The remainder of this paper is organized as follows. Section 2 reports a literature review. Section 3 covers the background methodology. Section 4 describes the proposed methodology. Section 5 explains the methods in detail. Section 6 presents the experimental results and discussion, and Section 7 concludes the paper.
The traditional methods of detecting and tracking objects are not ideal in an outdoor environment: existing methods fail to detect objects due to camera and other environmental conditions. For outdoor environments, Shadow Detection (SD) and Adaptive Threshold (AT) methods have been used, while for indoor environments (D. Gutchess et al., 2001) Mixture of Gaussians (MoG) and Median Filter (MF) techniques have been used so far. Other methods are based on an iterative checking process (D. Hongde et al., 2012). The Median Filter (MF) is a time-consuming technique, and an adaptive Gaussian mixture (MoG) was developed for background subtraction; it reduces processing time, but segmentation cost increases marginally (T. Hosaka et al., 2011). An enhanced Gaussian model was proposed to reduce space and time, but it is used only in some surveillance applications (M. H. Hung et al., 2014). That methodology focused on several parameters, i.e., the background threshold (T_{s}), scaling factors, standard deviation (D), user-defined rate (α), the total number of Gaussian components (K), and the maximum number of components (M). It shows better results in terms of efficiency but takes more time to identify the object (A. Jain et al., 2021). Methods combining frame differencing with Canny edge detection detect the edges of two consecutive frames and take the difference between the two images; this technique has low computational time but gives false results in complicated cases (A. Jain et al., 2018). A video object tracking framework for smart cameras in surveillance networks was presented with two methods, threshold decision (TD) and diffusion distance (DD); these methods can be applied only on the smart-camera side (A. Jain et al., 2015).
Background model initialization takes a video sequence as input and outputs the static parts of the scene; the results of that work were evaluated only in indoor environments (I. Kartika et al., 2011). In one object tracking work, a grouping algorithm handles objects of various sizes and gives better results, but it does not scale to large datasets (Kim & Zu Whan, 2008). A real-time people tracker was built on a POM detector using a Kalman filter; it computes the appearance difference frame by frame, which makes it less computationally efficient (K. Srikrishnaswetha et al., 2019). Multiple objects can be tracked simultaneously using multiple-object tracking techniques, which resolve problems such as the appearance or disappearance of objects and missing objects, but correction of the object is not addressed (S. Kumar et al., 2020). With an adaptive Kalman filter, moving objects can be tracked in real time. It overcomes several challenges of real-time environments, for example limited occlusion, fast-moving objects, long-lasting occlusion, direction changes, lighting changes, and changes in the orientation and velocity of moving objects, yet performance bounds for extended object tracking are still needed and tracking time is high (S. Kumar et al., 2018). An elaborate overview of extended object tracking introduced the basics of the extended Kalman filter approach, the random-matrix approach, and the linear Kalman filter-based approach (S. Kumar et al., 2021). The standard Kalman filtering framework has been adapted to extended object tracking, with an efficient closed-form measurement update developed through compact formulas; the object tracking problem is increasingly relevant to autonomous driving applications (S. Kumar et al., 2018).
Object interpretation is a major technique in any visual observation method and has been addressed in many projects (S. Kumar et al., 2019). A robust visual tracking method formulated object tracking as a multi-task sparse learning problem (S. Kumar et al., 2018) (X. Li et al., 2013). A ball-tracking method via the bubble sort algorithm tracks the ball in both normal and low light, but its computation time is high, it is inefficient for large datasets, and the bubble sort algorithm has a high time complexity (Z. Michalis et al., 2012). Similar reviews also compare the performance of the EKF, UKF, and CKF for tracking objects (S. Mohamed et al., 2010).
Various methods have been presented in the literature to overcome these challenges, but object tracking remains a major challenge for the computer vision community. The foreground is the portion of the frame that is closest to the camera. A novel method for detecting an object is proposed here.
A novel state-of-the-art method is proposed for background subtraction that incorporates the merge sort and binary search algorithms, as shown in Fig. 1. In the proposed work, a video is treated as a series of frames, and N frames are collected (S. Mohamed et al., 2010) (M. Murshed et al., 2010). The collected frames are then sorted using the merge sort algorithm. Merge sort follows a divide-and-conquer strategy, so it is faster than other sorting methods: the frames are first divided and sorted, then merged in ascending order (H. Nenavath et al., 2018). Next, a binary search is used to detect the target noise in the sorted frames. The binary search method takes less time to trace objects and can easily find the noise position. For the next level of object detection, an extended Kalman filter algorithm is used to predict and correct the noise position (H. Patel et al., 2013) (R. Raja et al., 2020). Initially, it predicts the moving object's position, and then it corrects that position. The extended Kalman filter (EKF) is a nonlinear version of the Kalman filter that is linearization-based and maintains an estimate of the current mean and covariance.
Fig 1. Flow Chart of Proposed Work
Background subtraction is a commonly used method for tracing moving objects in videos through the difference between the background and the input image (C. Santiago et al., 2010). This technique separates foreground elements from the background. In the video, a moving object appears as noise relative to the background signal (S. Shaik et al., 2014). Consequently, each frame of the video can be modeled as noise plus background. Hence, the following model can be used:
v_{i} = m_{i} + b + n_{i} (1)

where
n_{i} = noise measurement of the background signal,
m_{i} = moving object of the background signal,
b = background signal.

Here, n_{i} is the noise realization of each frame. We refer to the ith frame of the video as v_{i} and assume that the video has N frames. Considering b to be constant (the background signal), the N frames of the video follow

v_{i} = m_{i} + b + n_{i}, i = 1, ..., N. (2)

We then define the column vector

v = [v_{1}, v_{2}, ..., v_{N}]^{T}. (3)

The background does not change across frames, so we subtract it from each frame, giving b'_{i} = v_{i} - b. A one-bit compression method is used to store the noise and improve the speed of tracking:

q_{i} = sign(b'_{i} - τ_{i}) (4)

where
q_{i} = qualified (one-bit) measurement of b'_{i},
τ_{i} = measurement threshold.
Now the center of the moving object can easily be found from the binary image of each frame. We choose a large area of the object and store the position of the moving object in each frame.
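The background-subtraction and one-bit compression steps of equations (1)-(4) can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the frame shapes and threshold value are assumptions made here for the example.

```python
import numpy as np

def one_bit_subtract(frames, background, tau=25):
    """Subtract a static background b from each frame v_i and keep a
    one-bit (binary) mask per pixel, as in equations (1)-(4)."""
    masks = []
    for v in frames:
        # b'_i = v_i - b (signed arithmetic to avoid uint8 wrap-around)
        b_prime = v.astype(np.int16) - background.astype(np.int16)
        # One-bit quantization against the threshold tau
        q = (np.abs(b_prime) > tau).astype(np.uint8)
        masks.append(q)
    return masks

# Toy example: a static background with one bright moving "object".
background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1, 2] = 200                        # the moving object
mask = one_bit_subtract([frame], background)[0]
```

The binary mask is what later stages operate on: only pixels that differ from the background by more than the threshold survive.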
The sorting and merging process is a recursive, divide-and-conquer procedure (S. Sharma et al., 2019) (N. Singla et al., 2014). The divide-and-conquer strategy says that if a problem is large, it is broken into subproblems; once the subproblems become small enough they are solved directly, and the solutions of the subproblems are combined to obtain the solution of the main problem. A single element constitutes the smallest subproblem, and each single element is already sorted. Applied in our setting, the method assigns a position to each frame and arranges the frames in ascending order; this takes little time. Finally, all frames are merged in sequential order. As Fig. 2 shows, an image is captured from the video sequence and a value is assigned to each frame. The N frames are then collected and the mid-position of the frames is found. Next, the frames are divided into two parts, a left half and a right half; both parts are sorted and combined in ascending order. After combination, we obtain the sorted frames. This technique is better than others because the time complexity of the merge sort algorithm is O(n log n).
Fig. 2. Block Diagram of Merge sort.
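The merge-sort step described above can be sketched as follows; treating the per-frame values as plain integers is an illustrative assumption, not the authors' implementation.

```python
def merge_sort(items):
    """Recursive merge sort: split in half, sort each half, merge.
    Runs in O(n log n) time."""
    if len(items) <= 1:                  # a single element is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])       # sort the left half
    right = merge_sort(items[mid:])      # sort the right half
    # Merge the two sorted halves into one ascending sequence.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

frame_values = [7, 2, 9, 4, 1, 8]        # e.g. per-frame values to order
sorted_values = merge_sort(frame_values)
```

The sorted output is exactly what the binary-search stage of the pipeline expects as input.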
The time complexity is calculated as follows.

Merge Sort T(n)
The time complexity of merge sort is

T(n) = 2T(n/2) + n (5)

Substituting (n/2) in place of n in equation (5),

T(n/2) = 2T(n/4) + n/2 (6)

Substituting equation (6) in equation (5),

T(n) = 4T(n/4) + 2n (7)

Substituting (n/4) in place of n in equation (5),

T(n/4) = 2T(n/8) + n/4 (8)

Substituting equation (8) in equation (7),

T(n) = 8T(n/8) + 3n (9)
T(n) = 2^{3}T(n/2^{3}) + 3n (10)

And so on,

T(n) = 2^{k}T(n/2^{k}) + kn (11)

Let

n/2^{k} = 1, i.e., n = 2^{k} (12)

Taking log on both sides,

log n = k log 2, i.e., k = log_{2} n (13)

Then

T(n) = nT(1) + n log_{2} n = O(n log n) (14)
Binary search is also known as half-interval search or logarithmic search (N. Singla et al., 2014) (Soumya & S. Kumar, 2018). It finds the position of a target item in a sorted list by comparing the target to the item at the mid-position. It is an efficient algorithm: it repeatedly halves the portion of the sorted list that can contain the target until only one possible location remains (Z. Tang & Z. Miao, 2007). We apply this algorithm to find the noise in the sorted frames (refer to Fig. 3).
Fig. 3. Block Diagram of Binary Search.
First, the sorted frames are checked, the mid-position of the sorted frames is found, and the search starts with the middle element (S. K. Weng et al., 2006), checking whether the noise is present at the mid-position. If the noise is found there, the time complexity of the binary search is at its best. Otherwise, the search proceeds to the next case, examining either the left half or the right half of the sorted frames. This process continues until the noise is found at a mid-position; eventually, the target noise is located at the mid-position of a sorted sub-array. Binary search is therefore a fast and effective searching technique. Its best-case time complexity T(n) is O(1), i.e., when the noise lies at the initial mid-position.
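The search procedure just described can be sketched as follows; the sample values are illustrative, not from the paper's data.

```python
def binary_search(sorted_values, target):
    """Return the index of target in sorted_values, or -1 if absent.
    Repeatedly halves the search interval, so it runs in O(log n)."""
    low, high = 0, len(sorted_values) - 1
    while low <= high:
        mid = (low + high) // 2          # best case: target sits at mid (O(1))
        if sorted_values[mid] == target:
            return mid
        elif sorted_values[mid] < target:
            low = mid + 1                # continue in the right half
        else:
            high = mid - 1               # continue in the left half
    return -1

noise_levels = [1, 2, 4, 7, 8, 9]        # sorted per-frame values (illustrative)
pos = binary_search(noise_levels, 7)
```

Each iteration discards half of the remaining candidates, which is the source of the O(log n) bound derived next.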
The time complexity of the binary search is calculated as follows.

Binary Search T(n)
Let the time complexity of binary search be

T(n) = T(n/2) + 1 (15)

Substituting (n/2) in place of n in equation (15),

T(n/2) = T(n/4) + 1 (16)

Substituting equation (16) in equation (15),

T(n) = T(n/4) + 2 (17)

Substituting (n/4) in place of n in equation (15),

T(n/4) = T(n/8) + 1 (18)

Substituting equation (18) in equation (17),

T(n) = T(n/8) + 3 (19)

And so on. At some point, we reach only one element:

T(n) = T(n/2^{k}) + k, with n/2^{k} = 1 (20)

Taking log on both sides,

n = 2^{k}, i.e., k = log_{2} n (21)

Substituting (21) in (20),

T(n) = T(1) + log_{2} n (22)

Removing all constants,

T(n) = log_{2} n (23)
T(n) = O(log n) (24)
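As a quick sanity check on the O(log n) bound in equation (24), the following sketch counts the halvings binary search performs in its worst case (target absent, always continuing right); the step-counting function is illustrative, not part of the paper's method.

```python
import math

def binary_search_steps(n):
    """Count the halvings binary search needs on a sorted range of n items
    when the target is larger than every element (a worst case)."""
    low, high, steps = 0, n - 1, 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        low = mid + 1                    # target > all elements: go right
    return steps

# The worst-case step count matches the floor(log2 n) + 1 bound.
for n in (1, 2, 8, 1000, 4096):
    assert binary_search_steps(n) == math.floor(math.log2(n)) + 1
```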
The nonlinear version of the Kalman filter is called the extended Kalman filter (EKF); it is linearization-based and maintains an estimate of the current mean and covariance, as shown in Fig. 4. The Kalman filter produces a weighted prediction from the prior estimate and the new measurements (S. Yang & Baum, 2017), performed recursively, so that the prediction incorporates all relevant measurement data. The Kalman filter is designed for linear discrete-time dynamical systems, while the extended Kalman filter is designed for nonlinear discrete-time dynamical systems (C. Zhan et al., 2007); this is the main difference between the LKF and the EKF. The EKF continuously linearizes about the current state estimate before applying the estimation computations.
Fig. 4. Block Diagram of an Extended Kalman Filter.
The discrete-time system and measurement equations are

a_{k} = f_{k-1}(a_{k-1}, v_{k-1}) (25)
b_{k} = l_{k}(a_{k}, u_{k}) (26)

where v_{k} and u_{k} are uncorrelated zero-mean noise processes.

The nonlinear EKF recursion proceeds as follows (Fig. 4), starting from the initial estimates â_{k/k-1} and Q_{k/k-1}.

Prediction (Time Update):
Project the state ahead,
â_{k+1/k} = f_{k}(â_{k/k}, 0)
Project the error covariance ahead,
Q_{k+1/k} = F_{k}Q_{k/k}F_{k}^{T} + B_{k}R_{k}^{v}B_{k}^{T}

Measurement Update:
Calculate the Kalman gain,
L_{k} = Q_{k/k-1}H_{k}^{T}(H_{k}Q_{k/k-1}H_{k}^{T} + R_{k}^{u})^{-1}
Update the estimate with measurement b_{k},
â_{k/k} = â_{k/k-1} + L_{k}[b_{k} - l_{k}(â_{k/k-1}, 0)]
Update the error covariance,
Q_{k/k} = (I - L_{k}H_{k})Q_{k/k-1}

Here F_{k} and B_{k} are the Jacobians of f_{k} with respect to the state and the process noise, H_{k} is the Jacobian of l_{k} with respect to the state, and R_{k}^{v}, R_{k}^{u} are the process- and measurement-noise covariances.
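The predict/correct cycle above can be sketched numerically as follows, using the paper's symbols (state a, measurement b, covariance Q, gain L). The constant-velocity model and all numeric values are illustrative assumptions; with a linear model the Jacobians are exact, so this sketch reduces to the ordinary Kalman filter.

```python
import numpy as np

def ekf_step(a_est, Q_est, b_meas, f, F, l, H, R_v, R_u):
    """One EKF predict/correct cycle: a = state estimate, Q = covariance,
    L = Kalman gain, b = measurement, F/H = Jacobians of f and l."""
    # Time update (prediction)
    a_pred = f(a_est)
    Q_pred = F @ Q_est @ F.T + R_v
    # Measurement update (correction)
    S = H @ Q_pred @ H.T + R_u
    L = Q_pred @ H.T @ np.linalg.inv(S)
    a_new = a_pred + L @ (b_meas - l(a_pred))
    Q_new = (np.eye(len(a_est)) - L @ H) @ Q_pred
    return a_new, Q_new

# Illustrative 1-D constant-velocity model: state = [position, velocity].
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # state-transition Jacobian
H = np.array([[1.0, 0.0]])               # we only measure position
f = lambda a: F @ a
l = lambda a: H @ a
R_v = 0.01 * np.eye(2)                   # process-noise covariance
R_u = np.array([[0.1]])                  # measurement-noise covariance

a, Q = np.array([0.0, 1.0]), np.eye(2)   # start at 0, moving 1 unit/frame
for k in range(1, 6):                    # object truly at position k each frame
    a, Q = ekf_step(a, Q, np.array([float(k)]), f, F, l, H, R_v, R_u)
```

Because the measurements here agree exactly with the motion model, the corrected position tracks the true position frame by frame.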
Case 1. Tracking small size ball
The experiment is carried out on a video sequence containing a small moving ball. Detection and tracking are performed by segmenting the video into numerous frames (Z. Zivkovic, 2004) (S. Kumar et al., 2021). To evaluate the performance of this technique, Object Tracking Error (OTE) and Absolute Error (AE) are calculated for each frame. The results obtained with the extended Kalman filter are compared with the linear Kalman filter results. The comparison of the LKF and EKF for the small ball is given in Table 1 and Fig. 5. Per frame, the LKF values of Absolute Error (AE) and Object Tracking Error (OTE) are 0.12 and 0.04, while the EKF values are 0.10 and 0.01 for the small ball. The graphs show the object tracking behavior and the difference between the linear and extended Kalman filters in terms of AE and OTE.
Table 1. Comparison of LKF and EKF values for different sizes of the ball.

Parameters   Small Size Ball    Medium Size Ball   Big Size Ball
             LKF      EKF       LKF      EKF       LKF      EKF
AE           0.12     0.10      0.21     0.15      0.63     0.50
OTE          0.04     0.01      0.05     0.03      0.29     0.18
Fig. 5. Comparison of LKF and EKF values for the small size of the ball
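The paper does not give closed forms for AE and OTE; a common convention, assumed here purely for illustration, is mean absolute per-coordinate error for AE and mean Euclidean distance between tracked and ground-truth centroids for OTE. Both helper functions and the sample centroids below are hypothetical.

```python
import math

def absolute_error(tracked, truth):
    """Mean absolute per-coordinate error between tracked and true
    centroids. (Assumed definition; the paper does not state a formula.)"""
    total = sum(abs(tx - gx) + abs(ty - gy)
                for (tx, ty), (gx, gy) in zip(tracked, truth))
    return total / (2 * len(tracked))

def object_tracking_error(tracked, truth):
    """Mean Euclidean distance between tracked and true centroids.
    (A common OTE definition; assumed here.)"""
    total = sum(math.hypot(tx - gx, ty - gy)
                for (tx, ty), (gx, gy) in zip(tracked, truth))
    return total / len(tracked)

tracked = [(10.1, 20.0), (12.0, 22.2)]   # illustrative tracker outputs
truth = [(10.0, 20.0), (12.0, 22.0)]     # illustrative ground truth
ae = absolute_error(tracked, truth)
ote = object_tracking_error(tracked, truth)
```

Lower values of both metrics indicate tighter tracking, which is how Table 1 should be read.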
Case 2. Tracking medium size ball
The experiment is carried out on a video sequence containing a medium-sized moving ball. For performance analysis, Object Tracking Error (OTE) and Absolute Error (AE) are calculated at each frame. The comparison of the LKF and EKF for the medium ball is given in Table 1, and the per-frame AE and OTE results are shown in Fig. 6. Per frame, the LKF values of AE and OTE are 0.21 and 0.05, while the EKF values are 0.15 and 0.03 for the medium ball. Fig. 6 shows the difference between the LKF and EKF in AE and OTE.
Fig. 6. Comparison of LKF and EKF values for medium size ball
Case 3. Tracking big size ball
The experiment is carried out on a video sequence containing a big moving ball. For performance analysis, Object Tracking Error (OTE) and Absolute Error (AE) are calculated for each frame. The comparison of the LKF and EKF for the big ball is given in Table 1, and the results are shown in Fig. 7. Per frame, the LKF values of AE and OTE are 0.63 and 0.29, while the EKF values are 0.50 and 0.18 for the big ball. The graph shows the difference between the LKF and EKF in AE and OTE.
Fig 7. Comparison of LKF and EKF values for large size ball
Fig. 8 shows the test results of moving object tracking; a few images were captured from the video during a run of the project. The detected noise is marked by a white mask (a). As tracking continues, a blue circle marks the corrected noise position (b) and a green dot marks the prediction. Panel (c) shows the corrected result, so the final image illustrates both the prediction and tracking functions.
Fig. 8: Object Tracking Frame Results
Fig. 9 shows the running time of all the algorithms, namely the Linear Kalman Filter (LKF), Linear Kalman Filter via Bubble Sort (LKF-BS), Extended Kalman Filter (EKF), and Extended Kalman Filter via Binary Search (EKF-BS). The bar chart shows that the EKF and EKF-BS are more time-efficient than the LKF and LKF-BS. The approximate running time of the EKF-BS in the proposed work is 0.2. In comparison, this method takes less time for tracking moving objects than the other algorithms. The x-axis denotes the time taken by the process and the y-axis denotes the algorithms.
In this paper, we introduced a novel technique for identifying and tracking a moving object in a video sequence using an Extended Kalman Filter via the Binary Search algorithm. In previous work, a moving object was detected using the background subtraction method and tracked with a linear Kalman filter, but the computation time for detecting a moving object is high and the system is linearized. In our proposed method, object detection builds on the binary search algorithm and object tracking is performed with an extended Kalman filter. As the experimental results of cases 1, 2, and 3 show for different ball sizes, the EKF is more effective than the LKF in the nonlinear setting, and the time complexity of object detection and tracking is much more efficient in this project.
Conflict of interest
The authors declare no potential conflict of interest regarding the publication of this work. In addition, the ethical issues, including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, and redundancy, have been fully observed by the authors.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.