US8503725B2 - Vehicle tracking system and tracking method thereof - Google Patents
- Publication number
- US8503725B2 (application US12/905,576)
- Authority
- US
- United States
- Prior art keywords
- objects
- lamp
- vehicle
- threshold value
- bright
- Prior art date
- Legal status
- Expired - Fee Related, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30236—Traffic on road, railway or crossing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Definitions
- the present invention generally relates to a vehicle tracking system and a vehicle tracking method, and more particularly to a vehicle tracking system and method used for tracking a vehicle at nighttime.
- FIG. 1 shows typical nighttime traffic scenes from an urban road and highway under different environmental illumination conditions.
- in FIG. 2, the difference between two successive images is used for obtaining a moving profile, and the moving profile is still primarily based on the lamp.
- in FIG. 3, the background of an image can be obtained by background convergence, and the difference between the original image and the background is used for detecting a foreground object, wherein the difference image, thresholded at a higher setting, is used to extract the characteristics of an object which is basically a lamp.
- a lamp is a major characteristic of the road environment at night, regardless of which method is used for extracting the object. Therefore, it is very important to provide a vehicle tracking system that uses a lamp as a basis to overcome the technical issue that conventional vehicle detection technology cannot operate effectively at nighttime.
- the present invention provides a vehicle tracking method, comprising the steps of: capturing a plurality of bright objects from an image by the bright object segmentation; labeling coordinates of the plurality of bright objects by a connected component object labeling method to form a plurality of connected component objects; identifying, analyzing and combining characteristics of the plurality of connected component objects by the bright object recognition to form a plurality of lamp objects; and identifying the type of a vehicle having the plurality of lamp objects by the vehicle detection/recognition, and counting the number of various vehicles.
- the image is a grey-scale image
- the bright object segmentation determines a plurality of threshold values by a grey scale statistical chart of the grey-scale image.
- the bright object segmentation further segments the image to form the plurality of bright objects after objects with the same nature and similar characteristics in the grey-scale image are determined according to the plurality of threshold values.
- the plurality of lanes in the image defines a detection area
- the connected component object labeling method includes a coarse scan and a fine scan for labeling the plurality of adjacent bright objects as the same object by a connected component labeling method to form the plurality of connected component objects.
- the bright object recognition respectively compares values of aspect ratio, area and density of the plurality of connected component objects with a maximum aspect ratio threshold value, a minimum aspect ratio threshold value, a maximum area threshold value, a minimum area threshold value and a density critical threshold value to determine the characteristics of the plurality of connected component objects to capture the plurality of lamp objects. If the bright object recognition determines that any paired connected component objects have a horizontal distance and a vertical distance smaller than a horizontal distance threshold value and a vertical distance threshold value respectively, then the paired connected component objects are combined into a lamp object.
- the multi-vehicle tracking method tracks the trajectory of the plurality of lamp objects. If the values of area, width and horizontal distance of any paired lamp objects match a lamp area threshold value, a lamp width threshold value and a lamp horizontal distance threshold value respectively, the multi-vehicle tracking method combines the paired lamp objects into one lamp object and tracks its trajectory.
- the vehicle detection/recognition compares the aspect ratio of the plurality of lamp objects with the aspect ratio threshold value of a motorcycle and the aspect ratio threshold value of an automobile, and determines the type of a vehicle having the plurality of lamp objects according to a comparison result.
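As a rough illustration of this aspect-ratio comparison, the following sketch classifies a lamp object by its bounding-box aspect ratio. The threshold values are hypothetical assumptions for illustration only; the patent does not disclose concrete numbers.

```python
# Hypothetical sketch of vehicle-type classification by lamp-object aspect
# ratio. Threshold values below are illustrative assumptions, not values
# disclosed in the patent.

MOTORCYCLE_MAX_ASPECT = 1.5   # single lamp: near-square bounding box (assumed)
AUTOMOBILE_MIN_ASPECT = 2.0   # paired headlights: wide bounding box (assumed)

def classify_vehicle(width: float, height: float) -> str:
    """Return a vehicle-type label for a lamp object's bounding box."""
    aspect = width / height
    if aspect <= MOTORCYCLE_MAX_ASPECT:
        return "motorcycle"
    if aspect >= AUTOMOBILE_MIN_ASPECT:
        return "automobile"
    return "unknown"
```

A narrow, near-square lamp region suggests a single headlight (motorcycle), while a wide region suggests a merged headlight pair (automobile).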
- the present invention further provides a vehicle tracking system comprising an image capture module, a display module and a processing module.
- the image capture module is provided for capturing an image; the display module is provided for displaying the image.
- the processing module comprises a bright object segmentation unit, a connected component object labeling unit, a bright object identifying unit, a multi-vehicle tracking unit, and a vehicle identifying unit.
- the bright object segmentation unit is provided for capturing a plurality of bright objects from an image; the connected component object labeling unit is provided for labeling coordinates of the plurality of bright objects to form a plurality of connected component objects; the bright object identifying unit is provided for identifying, analyzing and combining characteristics of the plurality of connected component objects; the multi-vehicle tracking unit is provided for tracking the trajectory of the plurality of lamp objects; the vehicle identifying unit is provided for identifying the type of a vehicle having the plurality of lamp objects.
- the processing module further counts the number of various vehicles and controls the number of various vehicles displayed by the display module.
- the image is a grey-scale image
- the bright object segmentation unit determines a plurality of threshold values by a grey scale statistical chart of the grey-scale image.
- the bright object segmentation unit further segments the image to form the plurality of bright objects after objects of the same nature and similar characteristics in the grey-scale image are determined according to the plurality of threshold values.
- the plurality of lanes in the image defines a detection area
- the connected component object labeling unit performs a coarse scan and a fine scan for labeling the plurality of adjacent bright objects as the same object by a connected component labeling method to form the plurality of connected component objects.
- the bright object identifying unit respectively compares the values of aspect ratio, area and density of the plurality of connected component objects with a maximum aspect ratio threshold value, a minimum aspect ratio threshold value, a maximum area threshold value, a minimum area threshold value and a density critical threshold value to determine the characteristics of the plurality of connected component objects to capture the plurality of lamp objects.
- if the bright object identifying unit determines that any two of the connected component objects have a horizontal distance and a vertical distance smaller than a horizontal distance threshold value and a vertical distance threshold value respectively, then the two connected component objects are combined into a lamp object.
- the multi-vehicle tracking unit tracks the trajectory of the plurality of lamp objects; and if the values of area, width and horizontal distance of any paired lamp objects match a lamp area threshold value, a lamp width threshold value and a lamp horizontal distance threshold value respectively, the multi-vehicle tracking unit combines the paired lamp objects into one lamp object and tracks its trajectory.
- the vehicle identifying unit compares the aspect ratio of the plurality of lamp objects with a motorcycle aspect ratio threshold value and an automobile aspect ratio threshold value, and determines the type of a vehicle having the plurality of lamp objects according to a comparison result.
- the vehicle tracking system and method of the present invention have one or more of the following advantages:
- the vehicle tracking system and method can segment the lamp image for image processing in order to improve the accuracy of detecting vehicles at nighttime.
- the vehicle tracking system and method can track many lamps by using a single lamp as a basis. Additionally, the proposed method can overcome the difficulty of identifying vehicles at nighttime.
- FIG. 1 is a schematic view of a conventional way of detecting vehicles by edges
- FIG. 2 is a schematic view of a conventional way of detecting vehicles by moving profiles
- FIG. 3 is a schematic view of a conventional way of detecting vehicles by background convergence
- FIG. 4 is a flow chart of a vehicle tracking method of the present invention.
- FIG. 5 is a schematic view which illustrates the bright object segmentation of a vehicle tracking method in accordance with a first preferred embodiment of the present invention
- FIG. 6 is a schematic view which illustrates the bright object segmentation of a vehicle tracking method in accordance with a second preferred embodiment of the present invention.
- FIG. 7 is a schematic view of a detecting area in a vehicle tracking method of the present invention.
- FIG. 8 is a schematic view which implements a connected component object labeling method in a vehicle tracking method of the present invention.
- FIG. 9 is a schematic view which implements the bright object recognition in a vehicle tracking method of the present invention.
- FIG. 10 is a schematic view which combines connected component objects in the bright object recognition of a vehicle tracking method of the present invention.
- FIG. 11 is a schematic view which eliminates ground reflection in the bright object recognition of a vehicle tracking method in accordance with a preferred embodiment of the present invention.
- FIG. 12 is a schematic view of a multi-vehicle tracking method of a vehicle tracking method in accordance with a first preferred embodiment of the present invention.
- FIG. 13 is a schematic view of a multi-vehicle tracking method of a vehicle tracking method in accordance with a second preferred embodiment of the present invention.
- FIG. 14 is a schematic view which combines paired lamp objects into a single lamp object in a multi-vehicle tracking method of a vehicle tracking method in accordance with the present invention
- FIG. 15 is a schematic view of tracked potential vehicle components of moving cars with symmetric headlight pairs in a multi-vehicle tracking method of a vehicle tracking method in accordance with a first preferred embodiment of the present invention
- FIG. 16 is a schematic view of tracked potential vehicle components of moving cars with symmetric headlight pairs in a multi-vehicle tracking method of a vehicle tracking method in accordance with a second preferred embodiment of the present invention
- FIG. 17 is a schematic view of the error correction in a multi-vehicle tracking method of a vehicle tracking method in accordance with the present invention.
- FIG. 18A is a schematic view of tracking large-sized vehicles in a multi-vehicle tracking method of a vehicle tracking method in accordance with the present invention.
- FIG. 18B is a schematic view of tracking small-sized vehicles in a multi-vehicle tracking method of a vehicle tracking method in accordance with the present invention.
- FIG. 18C is a schematic view of tracking motorcycles in a multi-vehicle tracking method of a vehicle tracking method in accordance with the present invention.
- FIG. 19 is a block diagram of a vehicle tracking system of the present invention.
- FIG. 20 is a schematic view of a vehicle tracking system and method in accordance with the present invention.
- FIG. 21 is a schematic view which applies the vehicle tracking system and method at the junction of Ci Yun Road of Hsinchu in accordance with the present invention.
- FIG. 22 is a schematic view which applies the vehicle tracking system and method at the intersection of Chien Kuo South Road and Zhongxiao East Road flyover of Taipei in accordance with the present invention.
- FIG. 23 is a schematic view of applying the vehicle tracking system and method in the section of Kuang Fu Road of Hsinchu in accordance with the present invention.
- FIG. 24 illustrates the image coordinate system used for vehicle detection.
- FIG. 25 illustrates the motion-based grouping process on the vehicle component tracks.
- the vehicle tracking method comprises the steps of: (S10) capturing a plurality of bright objects from an image by the bright object segmentation; (S20) labeling coordinates of the plurality of bright objects by a connected component object labeling method to form a plurality of connected component objects; (S30) identifying, analyzing and combining the characteristics of the plurality of connected component objects by the bright object recognition to form a plurality of lamp objects; (S40) tracking a trajectory of the plurality of lamp objects by a multi-vehicle tracking method; and (S50) identifying the type of a vehicle having the plurality of lamp objects by the vehicle detection/recognition, and counting the number of various vehicles.
- the image is a grey-scale image (as shown on the left side of FIG. 5 ), and the bright object segmentation determines a plurality of threshold values through a grey scale statistical chart (as shown on the right side of FIG. 5 ) of the grey-scale image.
- the bright object segmentation further segments the image into bright objects (as shown on the right side of FIG. 6 ) after the objects (as shown on the left side of FIG. 6 ) with same nature and similar characteristics in the grey-scale image are determined according to the threshold values.
- the image includes a plurality of lanes, and the lanes define a detection area (as shown in FIG. 7 ).
- the connected component object labeling method includes a coarse scan and a fine scan for labeling a plurality of adjacent bright objects as the same object by a connected component labeling method to form a plurality of connected component objects (as shown in FIG. 8 ).
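The coarse-scan/fine-scan labeling described above can be sketched as a classic two-pass connected-component labeling: a first pass assigns provisional labels and records equivalences, and a second pass resolves them. This is an illustrative analogue, not the patent's exact procedure; 4-connectivity is an assumption.

```python
# Illustrative two-pass connected-component labeling on a binary image,
# analogous to the coarse scan (provisional labels) and fine scan (label
# resolution) described above. 4-connectivity is an assumption.

def label_components(binary):
    """binary: 2-D list of 0/1 values. Returns a label image (0 = background)."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}                      # union-find over provisional labels

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    next_label = 1
    # Coarse scan: assign provisional labels, record label equivalences.
    for y in range(h):
        for x in range(w):
            if not binary[y][x]:
                continue
            up = labels[y - 1][x] if y > 0 else 0
            left = labels[y][x - 1] if x > 0 else 0
            if up == 0 and left == 0:
                labels[y][x] = next_label
                parent[next_label] = next_label
                next_label += 1
            else:
                neighbors = [l for l in (up, left) if l]
                labels[y][x] = min(neighbors)
                if len(neighbors) == 2:
                    ra, rb = find(up), find(left)
                    parent[max(ra, rb)] = min(ra, rb)

    # Fine scan: replace provisional labels with their equivalence roots.
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

Each resulting label corresponds to one connected-component object whose bounding box can then be measured for the subsequent classification steps.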
- the present invention discloses a fast bright-object segmentation process based on automatic multilevel histogram thresholding.
- the proposed method extracts the bright object pixels of moving vehicles from image sequences of nighttime traffic scenes.
- the first step in the bright object extraction process is to extract bright objects from the road image to facilitate subsequent rule-based classification and tracking processes.
- the present invention first extracts the grayscale image, i.e. the Y-channel, of the grabbed image by performing a RGB to Y transformation.
- the pixels of bright objects must be separated from other object pixels of different illuminations.
- the present invention discloses a fast effective multilevel thresholding technique. In the preferred embodiments, this effective multilevel thresholding technique is applied to automatically determine the appropriate levels of segmentation for extracting bright object regions from traffic-scene image sequences.
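In the spirit of the automatic histogram thresholding described above, the following sketch computes a single threshold by Otsu's between-class-variance criterion; a multilevel version repeats the search over pairs or triples of thresholds. This is an assumption-based illustration, not the patent's exact multilevel procedure.

```python
# Minimal sketch of automatic histogram thresholding (Otsu's criterion),
# illustrating the kind of statistic a multilevel thresholding scheme
# optimizes. Not the patent's exact algorithm.

def otsu_threshold(hist):
    """hist: list of 256 pixel counts. Returns the grey level that maximizes
    between-class variance; bright pixels are those above the threshold."""
    total = sum(hist)
    sum_total = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0                 # weight and intensity sum of dark class
    for t in range(255):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue                  # one class empty: skip
        mu0 = sum0 / w0
        mu1 = (sum_total - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Applied to the grey-scale statistical chart of a nighttime frame, such a threshold separates lamp-bright pixels from the darker background classes.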
- the lighting object regions of moving vehicles can be efficiently and adaptively segmented under various environmental illumination conditions in different nighttime traffic scenes as shown on the left part of FIGS. 1 to 3 .
- lighting objects can be appropriately extracted from other objects contained in nighttime traffic scenes.
- performing this lighting object segmentation process successfully separates the lighting objects of interest on the left part of FIGS. 1 to 3 into thresholded object planes under different environmental illumination conditions in nighttime traffic scenes.
- the connected-component extraction process can be performed to label and locate the connected-components of the bright objects. Extracting the connected-components reveals the meaningful features of location, dimension, and pixel distribution associated with each connected-component.
- the location and dimension of a connected-component can be represented by the bounding box surrounding it.
- a detection area is applied for each traffic scene. This detection area lies in the middle of the traffic-scene image and is bounded by the leftmost and rightmost lanes, as shown in FIG. 7. These lane boundaries were determined by performing a lane detection process in the system initialization. The connected-component extraction and spatial classification processes are only performed on the bright objects located in the detection area, as shown in FIG. 6.
- FIG. 9 is a schematic view of implementing the bright object recognition in a vehicle tracking method according to the present invention.
- the bright object recognition compares the values of aspect ratio, area and density of the connected component object with a maximum aspect ratio threshold value, a minimum aspect ratio threshold value, a maximum area threshold value, a minimum area threshold value and a density critical threshold value to determine the characteristics of the connected component object to capture a plurality of lamp objects.
- the bright object recognition determines that the values of the horizontal distance and vertical distance of any paired connected component objects are smaller than a horizontal distance threshold value and a vertical distance threshold value respectively, then the paired connected component objects are combined to form a lamp object (as shown in FIG. 10 ).
- similarly, if the values of the horizontal distance and vertical distance of any paired connected component objects are smaller than a horizontal distance threshold value and a vertical distance threshold value respectively, but one of the pair is a ground reflection, then that connected component object is deleted (as shown in FIG. 11).
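The pairing rule above can be sketched as follows: two connected-component objects whose horizontal and vertical center distances both fall below the respective thresholds are merged into one lamp object. The coordinate convention and threshold values are illustrative assumptions.

```python
# Hedged sketch of the merge rule for paired connected-component objects.
# Threshold values and the component tuple layout are assumptions.

H_DIST_MAX = 20   # horizontal distance threshold in pixels (assumed)
V_DIST_MAX = 8    # vertical distance threshold in pixels (assumed)

def merge_if_close(a, b):
    """a, b: (cx, cy, left, top, right, bottom). Returns the merged bounding
    box (left, top, right, bottom) when the pair qualifies as one lamp
    object, else None."""
    if abs(a[0] - b[0]) < H_DIST_MAX and abs(a[1] - b[1]) < V_DIST_MAX:
        return (min(a[2], b[2]), min(a[3], b[3]),
                max(a[4], b[4]), max(a[5], b[5]))
    return None
```

The merged bounding box then represents a single lamp object (for example, a compound tail-light) in the subsequent tracking stages.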
- the bright connected components and their groups are firstly defined as follows:
- C_i denotes the ith lighting component to be processed.
- the locations of a certain component C_i employed in the spatial classification process are its top, bottom, left and right coordinates, denoted as t_Ci, b_Ci, l_Ci, and r_Ci, respectively.
- the width and height of a bright component C_i are denoted as W(C_i) and H(C_i), respectively.
- FIG. 24 illustrates the image coordinate system used for vehicle detection.
- the vehicles located at a relatively distant place on the road will appear in a higher location and become progressively smaller until converging into a vanishing point. Therefore, the driving lanes stretched from the vanishing point can be modeled by a set of line equations.
- the driving lanes are obtained by using our lane detection method in the system initialization process.
- a preliminary classification procedure can be applied to the obtained bright components to identify potential vehicle light components and filter out most non-vehicle illuminant light components, such as large ground reflectors and beams.
- a bright component C i is identified as a potential vehicle light component if it satisfies the following conditions:
- the enclosing bounding box of a potential vehicle light component should form a square shape, i.e. the size-ratio feature of C_i must satisfy the following condition: τ_RL ≤ W(C_i)/H(C_i) ≤ τ_RH (7)
- the thresholds τ_RL and τ_RH for the size-ratio condition are set as 0.8 and 1.2, respectively, to determine the circular-shaped appearance of a potential vehicle light.
- a vehicle light object should also have a reasonable area compared to the area of the lane.
- the area feature of C_i must satisfy the following condition: τ_AL ≤ A(C_i) ≤ τ_AH (8)
- the threshold T_hp is chosen as 0.6 to reflect the vertical alignment characteristics of compound vehicle lights.
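Conditions (7) and (8) above can be sketched as a simple rule-based filter. The size-ratio bounds 0.8 and 1.2 come from the description; the area bounds are illustrative assumptions, since the text relates them to lane area without giving numbers.

```python
# Sketch of the preliminary classification of bright components as potential
# vehicle lights, per conditions (7) and (8). Area bounds are assumptions.

TAU_RL, TAU_RH = 0.8, 1.2    # size-ratio bounds from the description
TAU_AL, TAU_AH = 20, 2000    # area bounds in pixels (assumed)

def is_potential_vehicle_light(width: float, height: float, area: float) -> bool:
    ratio_ok = TAU_RL <= width / height <= TAU_RH   # condition (7): square
    area_ok = TAU_AL <= area <= TAU_AH              # condition (8): sane area
    return ratio_ok and area_ok
```

Components failing either test (for example, elongated reflected beams on the ground) are filtered out before tracking.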
- FIGS. 9 to 11 illustrate the results of the spatial clustering process.
- This process yields several sets of potential vehicle components CSs in the detection area, and these are labeled as P in the following tracking processes.
- P potential vehicle components
- FIG. 10 shows that its meaningful light components are preliminarily refined and grouped into sets of potential vehicle components, in which the light components of the bottom-right car are grouped into two potential vehicle component sets.
- This stage also filters out some non-vehicle bright components, such as reflected beams on the ground.
- FIG. 11 illustrates another sample of the spatial clustering process of bright components, in which the reflections of the headlights of the bottom-right car are excluded from the resulting potential vehicle component sets.
- the current stage does not yet merge the vehicle light sets on the two sides of the vehicle body into paired groups. This is because vehicles, which have paired light sets, and motorbikes, which have single light sets, both exist in most nighttime road scenes. Therefore, without motion information in the subsequent frames, it is difficult to determine if the approaching light sets represent paired lights belonging to the same vehicle.
- the vehicle light tracking and identification process described in the following section is applied to these potential vehicle light sets to identify actual moving vehicles and motorbikes.
- FIG. 12 is a schematic view of a multi-vehicle tracking method of a vehicle tracking method in accordance with a first preferred embodiment of the present invention.
- the multi-vehicle tracking method tracks the trajectory of a lamp object.
- a single lamp object is used as a basis for tracking a vehicle, and a lamp object is labeled in the images of successive screens.
- the information of the lamp objects including the traveling direction and position are also tracked and detected to precisely determine the moving direction of each vehicle entering into the screen.
- the way of using a single lamp object as a basis to track a vehicle can be further used for detecting and tracking a motorcycle (as shown in the right side of FIG. 12 ) or a vehicle having a single lamp (as shown in FIG. 13 ).
- a tracker When a potential vehicle component is initially detected in the detection area, a tracker will be created to associate this potential vehicle component with those in subsequent frames based on spatial-temporal features.
- the features used in the tracking process are described and defined as follows:
- P_i^t denotes the ith potential vehicle component appearing in the detection zone in frame t.
- the location of P_i^t employed in the tracking process is represented by its central position, which can be expressed by,
- the overlapping score of the two potential vehicle components P_i^t and P_i^t′, detected at two different times t and t′, can be computed using their area of intersection:
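A plausible form of this overlapping score is the intersection area normalized by the smaller box's area; the normalization choice is an assumption, since the text only states that the score is computed from the area of intersection.

```python
# Hedged sketch of an overlapping score for associating potential vehicle
# components across frames. Normalizing by the smaller area is an assumption.

def overlap_score(box_a, box_b):
    """Boxes are (left, top, right, bottom) in image coordinates."""
    ix = max(0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / min(area_a, area_b)
```

A score near 1 indicates the same component re-detected with little displacement; a score near 0 indicates no spatial association.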
- a potential vehicle component might be in one of three possible tracking states.
- the component tracking process applies different relevant operations according to the given states of each tracked potential vehicle component in each frame.
- the tracking states and associated operations for the tracked potential vehicle components are as follows:
- τ_mp is a predefined threshold that represents the reasonable spatial-temporal coherence for P_i^t to be associated with TP_j^t-1.
- An existing tracker of potential vehicle component TP_j^t-1 ∈ TP^t-1 cannot be matched by any newly coming potential vehicle components P_i^t ∈ P^t.
- a tracked potential vehicle component may sometimes be temporarily sheltered or occluded in some frames, and will soon re-appear in subsequent frames.
- this vehicle component is retained for a span of 0.5 seconds, i.e. 0.5×FPS frames, where FPS denotes the grabbing frame rate (frames per second) of the CCD camera, to appropriately cope with vehicles leaving straight ahead or making turns.
- FIGS. 12 and 13 show that, after performing the component tracking process, the potential vehicle components entering the detection area, including cars and motorbikes with different amounts of vehicle lights, are tracked accordingly. These potential component tracks are then analyzed and associated by the following motion-based grouping process.
- FIG. 14 is a schematic view of combining paired lamp objects into one of the lamp objects in a multi-vehicle tracking method of a vehicle tracking method in accordance with the present invention.
- a process is performed to determine whether or not the lamp objects belong to the same vehicle before entering the combining process: if the values of area, width and horizontal distance of any two of the lamp objects match a lamp area threshold value, a lamp width threshold value and a lamp horizontal distance threshold value, then the multi-vehicle tracking method will combine the paired lamp objects into one lamp object and track its trajectory.
- the subsequent motion-based grouping process groups potential vehicle components belonging to the same vehicles. For this purpose, potential vehicle components with rigidly similar motions in successive frames are grouped into a single vehicle.
- the pairing tracks of nearby potential vehicle components TP i t and TP j t are determined to belong to the same vehicle if they continue to move coherently and reveal homogeneous features for a period of time.
- the coherent motion of vehicle components can be determined by the following coherent motion conditions:
- spatial motion coherence can be determined by the following spatial coherence criterion, including,
- for t = 0, …, n−1, where n is also determined to be the number of frames in a duration of 0.5 seconds (i.e. 0.5×FPS frames), to properly reflect the sufficiently sustained time of their coherent motion information in most traffic flow conditions, including free-flowing and congested cases.
- the threshold T_h is chosen to be 0.6.
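The grouping criterion above can be sketched as follows: two component tracks are grouped when their per-frame displacements stay coherent over a 0.5-second window, judged against the 0.6 threshold named in the text. The exact coherence measure and the displacement tolerance are assumptions.

```python
# Hedged sketch of the motion-based grouping criterion for two component
# tracks. The coherence measure and MAX_DIFF tolerance are assumptions;
# the 0.5 x FPS window and 0.6 threshold follow the description.

FPS = 30                           # grabbing frame rate (assumed)
WINDOW = max(1, int(0.5 * FPS))    # 0.5 x FPS frames, per the description
T_H = 0.6                          # coherence threshold from the text
MAX_DIFF = 3.0                     # per-frame displacement tolerance (assumed)

def moves_coherently(track_a, track_b):
    """Tracks are lists of (x, y) centers over recent frames. Returns True
    when the two tracks' frame-to-frame motions agree often enough."""
    n = min(len(track_a), len(track_b), WINDOW + 1)
    if n < 2:
        return False
    coherent = 0
    for t in range(1, n):
        dxa = track_a[t][0] - track_a[t - 1][0]
        dya = track_a[t][1] - track_a[t - 1][1]
        dxb = track_b[t][0] - track_b[t - 1][0]
        dyb = track_b[t][1] - track_b[t - 1][1]
        if abs(dxa - dxb) + abs(dya - dyb) <= MAX_DIFF:
            coherent += 1
    return coherent / (n - 1) >= T_H
```

Two headlight tracks moving rigidly together (e.g. the left and right lamps of one car) pass this test and are merged into a single vehicle group.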
- FIG. 25 illustrates the motion-based grouping process on the vehicle component tracks.
- two headlights of a white car are firstly detected as two potential vehicle components upon entering the detection area (as shown on the left side of FIG. 25).
- Two separate trackers for these two potential vehicle components are then created (as shown in the center of FIG. 25), and they are accordingly grouped after they continue to move coherently for a period of time (as shown on the right side of FIG. 25).
- one larger headlight of the following car on the same lane is just detected as a potential vehicle component and tracked.
- the headlight pair of this car will subsequently be detected, tracked, and grouped as the subsequent car (as depicted in FIG. 18A ).
- FIGS. 15 and 16 are schematic views of tracking a lamp object trajectory in a multi-vehicle tracking method of a vehicle tracking method in accordance with the first and second preferred embodiment of the present invention respectively.
- the car at the front may block the lamp of the car that follows.
- a critical line is created in the image at a y-axis coordinate of 200 pixels, with the origin at the upper left corner.
- No compensation is required if the coordinates of the paired connected component objects are smaller than the critical line, and the lamp object is deleted.
- another lamp object will be captured again when the following vehicle is moving (as shown in FIG. 15 ).
- the paired lamp object is determined to be tracked in a series of images for a period of time. Therefore, a paired connected component object can be used to update the coordinates of the paired lamp object, and the characteristics of the tracked paired lamp object are computed again (as shown in FIG. 16 ). In addition, if the paired lamp object is determined to be leaving the detection area soon, the paired lamp object will be deleted, and the number of vehicles will be counted.
- The reflection of a car body and the glare of a road surface may form a single tracked lamp object and cause a false detection.
- The present invention uses the shape of the lamp object to simulate a virtual frame of the car body.
- If a single lamp object exists within this virtual frame, it is treated as noise and deleted.
- FIGS. 18A, 18B, and 18C are schematic views of tracking various different vehicles in a multi-vehicle tracking method in accordance with the present invention, respectively.
- The present invention tracks a single lamp object, and then uses the trajectory of the single lamp to perform the process of combining lamp objects.
- The multi-vehicle tracking method then enters the process of tracking the lamp objects, so that this series of processes can detect both automobiles and motorcycles.
- The segmentation process and the motion-based grouping process can cause some occlusion problems: (1) two vehicles moving in parallel on the same lane may be too close to each other (especially large vehicles, such as buses, vans, or lorries, moving in parallel with nearby motorbikes) and may be occluded for a while, because this cannot be completely avoided by the spatial coherence criterion based on lane information during the motion-based grouping process; and (2) some large vehicles may have multiple light pairs, and therefore may not be immediately merged into single groups during the motion-based grouping process.
- the component group tracking process can update the position, motion, and dimensions of each potential vehicle. This process progressively refines the detection results of potential vehicles using spatial-temporal information in sequential frames. This subsection describes the tracking process for component groups of potential vehicles, which handles the above-mentioned occlusion problems.
- each tracked component group of a potential vehicle in the current frame t will be preliminarily estimated by an adaptive search window based on motion information from the previous frame.
- Δx_k^{t-1} and Δy_k^{t-1} denote the horizontal and vertical displacement of the tracked group between TG_k^{t-1} and TG_k^{t-2}, as defined in Eq. (19).
- The center of the search window of a tracked potential vehicle in the current frame can then be determined as (w_1·C_X(TG_k^{t-1}), w_2·C_Y(TG_k^{t-1})), and its width and height can be defined as 1.5·W(TG_k^{t-1}) and 3·H(TG_k^{t-1}), respectively.
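A minimal sketch of the adaptive search window is given below. The motion terms follow Eq. (19); extrapolating the center by the previous displacement is an assumption (the patent expresses the center through weights w1, w2), while the 1.5×W and 3×H window dimensions come from the text.

```python
# Sketch of the adaptive search window for a tracked component group.
# Group geometry is given as (center_x, center_y, width, height).

def search_window(prev, prev2):
    """prev = (cx, cy, w, h) of TG_k at frame t-1, prev2 = same at t-2.
    Returns (cx, cy, w, h) of the search window for frame t."""
    dx = prev[0] - prev2[0]   # Eq. (19): horizontal displacement of TG_k
    dy = prev[1] - prev2[1]   # Eq. (19): vertical displacement of TG_k
    cx = prev[0] + dx         # extrapolated center (an assumption here;
    cy = prev[1] + dy         # the patent weights the center as w1, w2)
    return (cx, cy, 1.5 * prev[2], 3.0 * prev[3])
```

The enlarged window (wider and much taller than the group) tolerates the abrupt vertical motion that perspective causes as vehicles approach the camera.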
- A tracked component group TG_k^t appearing in the search window may be in one of four possible states associated with its own component tracks TP_i^t, …, TP_{i+n}^t.
- This potential vehicle tracking process conducts different operations according to the current state of TG_k^t:
- The vehicle tracker then updates the component group TG_k^t of the potential vehicle to include the renewed group of TP_{i'}^t, …, TP_{i'+n}^t.
- The threshold τ_mg reflects a reasonable spatial-temporal coherence confirmation for TP_{i'}^t, …, TP_{i'+n}^t to be continuously associated with the same group as TG_k^{t-1}.
- τ_mg should be reasonably stricter than the tracker matching criterion parameter τ_mp in Eq. (15).
- The tracks of newly-appearing or split components are not matched with TG_k^{t-1}; the motion-based grouping process (Eqs. (16)-(18)) is then applied to these non-matched component tracks to determine whether they have coherent motion with TG_k^{t-1}.
- The component tracks that have coherent motion are assigned to the updated TG_k^t, and the others are detached as orphan component tracks.
- FIG. 18A presents examples of the potential vehicles analyzed by the component group tracking process.
- Two headlights of a bus are first detected and tracked as two separate potential vehicle components upon entering the detection area (as shown on the left side of FIG. 18A ). They are then merged into a component group by the motion-based grouping process (as shown in the center of FIG. 18A ), and this component group is accordingly tracked as a potential vehicle (as shown on the right side of FIG. 18A ).
- After the potential vehicles are tracked for a certain time, the following verification and classification process is performed on these tracked potential vehicles to identify the actual vehicles and their associated types.
- The vehicle detection/recognition compares the aspect ratio of a lamp object or of a set of paired lamp objects with the aspect ratio threshold values of an automobile and a motorcycle, and determines the type of the vehicle containing the lamp object according to the comparison result.
- the aspect ratio threshold value of an automobile and a motorcycle can be 2.0 and 0.8, respectively.
- A lamp must exist on the road in successive images; therefore, if the lamp object or the set of lamp objects is overlapped continuously for more than 10 frames, it is considered a candidate motorcycle or automobile. If its coordinates exceed the range of the image, the lamp object or set of lamp objects is identified as a motorcycle or an automobile, and the number of motorcycles or automobiles is counted.
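The aspect-ratio comparison can be sketched as follows, using the 2.0 and 0.8 threshold values quoted above. The function name and the dispatch logic are illustrative; the patent only specifies the thresholds and the comparison.

```python
# Sketch of the aspect-ratio test for lamp-object type classification.

CAR_RATIO = 2.0        # a paired lamp set is wide: W/H at or above this => car
MOTORBIKE_RATIO = 0.8  # a single lamp is narrow: W/H at or below this => motorbike

def classify_lamp_object(width, height, is_pair):
    """Classify a lamp object (single) or lamp-object pair by aspect ratio."""
    ratio = width / height
    if is_pair and ratio >= CAR_RATIO:
        return "automobile"
    if not is_pair and ratio <= MOTORBIKE_RATIO:
        return "motorcycle"
    return "unknown"
```

Objects returning "unknown" would remain tracked until further frames disambiguate them or they leave the detection area.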
- A motorbike usually appears as a single, nearly square or vertically rectangular lighting component in nighttime traffic scenes.
- A single tracked component TP_i^t which has not been associated with any component group and has been consistently tracked alone by the vehicle component tracking process for a significant span of more than 1 second (i.e., 1.0 × FPS frames) can be identified as a moving motorbike candidate.
- The size-ratio feature of its enclosing bounding box should reflect a square or vertically rectangular shape, and should satisfy the following discriminating rule: τ_m1 ≤ W(TP_i^t)/H(TP_i^t) ≤ τ_m2 (22)
- threshold ⁇ m1 and ⁇ m2 on the size-ratio condition are selected as 0.6 and 1.2, respectively, to suitably identify the shape appearance characteristic of the motorbikes, which are obviously different from those of the cars.
- the above-mentioned discriminating rules can be obtained by analyzing many experimental videos of real nighttime traffic environments, in which vehicle lights appear in different shapes and sizes, and move in different directions at different distances.
- The threshold values utilized for these discriminating rules were determined to yield good performance in most general cases of nighttime traffic scenes.
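The motorbike candidate rule above (tracked alone for more than one second, with the bounding box satisfying Eq. (22)) can be sketched as:

```python
# Minimal check for the motorbike candidate rule of Eq. (22).
# The FPS value and the argument names are assumptions for illustration;
# the thresholds 0.6 and 1.2 and the 1-second span come from the text.

TAU_M1, TAU_M2 = 0.6, 1.2   # size-ratio thresholds of Eq. (22)
FPS = 30

def is_motorbike_candidate(width, height, frames_alone, grouped):
    """True if a single tracked component qualifies as a motorbike candidate."""
    if grouped or frames_alone <= 1.0 * FPS:
        return False            # must be ungrouped and tracked alone > 1 s
    return TAU_M1 <= width / height <= TAU_M2   # Eq. (22)
```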
- a tracked component group or single potential component of a potential vehicle will be identified and classified as an actual car or a motorbike based on the above-mentioned vehicle classification rules.
- the count of its associated vehicle type is then incremented and recorded to update the traffic flow information.
- each detected vehicle is guaranteed to be counted once, and the redundant counting of vehicles can be efficiently avoided.
- the vehicle tracking system 1 (DSP-based real-time system) comprises an image capture module 10 , a display module 11 and a processing module 12 .
- the image capture module 10 is provided for capturing an image 2
- the display module 11 is provided for displaying the image 2 .
- the processing module 12 comprises a bright object segmentation unit 120 , a connected component object labeling unit 121 , a bright object identifying unit 122 , a multi-vehicle tracking unit 123 and a vehicle identifying unit 124 .
- the bright object segmentation unit 120 is provided for capturing a plurality of bright objects 20 from the image 2 .
- the connected component object labeling unit 121 is provided for labeling the coordinates of the bright object 20 to form a plurality of paired connected component objects 21 .
- the bright object identifying unit 122 is provided for identifying, analyzing and combining the characteristics of the connected component object 21 to form a plurality of lamp objects 22 .
- the multi-vehicle tracking unit 123 is provided for tracking the trajectory of the lamp objects 22 .
- the vehicle identifying unit 124 is provided for identifying the type of a vehicle having the lamp object 22 .
- The processing module 12 further counts the number of various vehicles, and then controls the display module 11 to display them. The operation of each element has been described in detail in the aforementioned vehicle tracking method, and thus will not be described here again.
- This section describes the implementation of the proposed vehicle detection, tracking and classification system on a DSP-based real-time system.
- the real-time vision system was implemented on a TI DM642 DSP-based embedded platform, operated at 600 MHz with 32 MB DRAM, and set up on elevated platforms near highways and urban roads.
- the detection area for each traffic scene was first determined using a lane detection process.
- The detection area was located along the midline of the traffic scene image, bounded by the leftmost and rightmost lane boundaries (as shown in FIG. 7 ), and divided into driving lanes (as shown in FIG. 24 ).
- The CCD camera should be set up on an elevated platform at a sufficient height to capture a region covering all the driving lanes to be monitored, and its view angles should be adjusted toward the monitored region to obtain reliable and clear features of the vehicle lights.
- the frame rate of this vision system is 30 true-color frames per second, and each frame in the grabbed image sequences measures 320 pixels by 240 pixels.
- the computation required to process one input frame depends on traffic scene complexity. Most of the computation time is spent on the connected-component analysis and the spatial clustering process of lighting objects. For an input video sequence with 320 ⁇ 240 pixels per frame, the proposed real-time system takes an average of 26.3 milliseconds to process each frame on the 600 MHz TI-DM642 DSP-based embedded platform. This minimal computation cost ensures that the proposed system can effectively satisfy the demand of real-time processing at more than 30 frames per second.
- FIG. 20 shows that the proposed system counts the numbers of detected cars and motorbikes appearing in each driving lane of the detection area, and displays the number of detected cars at the top-right of the screen and the number of detected motorbikes at the top-left.
- the present invention adopts the Jaccard coefficient, which is commonly used for evaluating performance in information retrieval. This measure is defined as:
- N is the total number of video frames.
- the ground-truth of detected vehicles was obtained by manual counting.
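The exact form of the Jaccard-based score is lost in this extraction; for detection evaluation it is commonly computed as TP / (TP + FP + FN), which is assumed in the sketch below.

```python
# Assumed form of the Jaccard detection score: the exact formula from the
# patent is missing here, so the common detection-evaluation definition
# J = TP / (TP + FP + FN) is used for illustration.

def jaccard_score(true_positives, false_positives, false_negatives):
    denom = true_positives + false_positives + false_negatives
    return true_positives / denom if denom else 0.0
```

With the manually counted ground truth, a lane with 90 correct detections, 5 false alarms, and 5 misses would score 0.9, matching the percentage-style scores reported in Tables 1-3.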
- The vehicle tracking system performed instant tests at the junction of Ci Yun Road and on Kuang Fu Road in Hsinchu, and was integrated directly into the existing closed-circuit television (CCTV) systems installed at road junctions and highways in Taipei. The results of these actual tests show that the present invention can be applied to CCTV images directly without any change while maintaining the same detection accuracy; the experimental results of each road section are elaborated sequentially as follows.
- CCTV: closed-circuit television
- The lanes in each image are numbered sequentially from left to right as the first lane, second lane, third lane, and fourth lane. The number at the upper left corner of an image indicates the number of motorcycles detected by the method of the present invention, and the number at the upper right corner indicates the number of detected automobiles.
- Referring to FIG. 21 for a schematic view of applying the vehicle tracking system and method at the junction of Ci Yun Road in Hsinchu in accordance with the present invention, the test was taken at a road junction where traffic signal lights control the traffic flow; thus a vehicle may be in a motion-to-still or still-to-motion state, and a good detection rate is achieved in both states. The images include large-size cars, small-size cars, and motorcycles.
- FIG. 21 shows a sample of a complicated traffic scene from a nighttime urban road at rush hour under a bright environmental illumination condition. Due to traffic signal changes, the vehicles, including large and small cars and motorbikes, stop and move intermittently. As shown in FIG. 21 , most of these cars and motorbikes are correctly detected, tracked, and classified, although many non-vehicle illuminating objects, such as street lamps, reflected beams, and road reflectors on the ground, appear very close to the lights of the detected vehicles. Moreover, as depicted at the upper right and lower right of FIG. 21 , most vehicles driving very close to nearby lanes are also successfully discriminated and detected. Table 1 shows the data of the approach for vehicle detection and tracking in the traffic scene of FIG. 21 .
- Referring to FIG. 22 for a schematic view of applying the vehicle tracking system and method at the intersection of Chien Kuo South Road and the Zhongxiao East Road flyover in Taipei in accordance with the present invention, traffic is jammed in this road section and the traffic flow is heavy. Since there are no traffic lights in this road section, almost all vehicles move very slowly, and the numbers of vehicles in the three lanes are almost the same.
- FIG. 22 discloses another experimental scene of a congested nighttime highway at rush hour under a light environmental illumination condition. These images were obtained by a closed-circuit television (CCTV) camera. Since motorbikes are not allowed on highways in Taiwan, only cars appear in this highway traffic scene. This figure shows that even though multiple vehicles are stopped or moving slowly close to each other in this congested traffic scene, the proposed method still successfully detects and tracks almost all vehicles. Table 2 shows the quantitative results of the proposed approach for vehicle detection on a nighttime highway. Due to the unsatisfactory view angle of the CCTV camera, the 1st lane is partially occluded. Thus, the vehicle light sets of a few detected cars may be occluded and misclassified as single-light motorbikes. However, this does not significantly influence the determination of typical traffic flow parameters, including congestion, throughput, and queue length.
- CCTV: closed-circuit television
- This road section has four lanes. Since the test was taken at night, the traffic flow is light, the traffic speed is fast, and vehicles move non-stop; these vehicles include large-size cars, small-size cars, and motorcycles. In this environment, good results are obtained regardless of the angle, height, or location at which the camera is installed.
- FIG. 23 shows a nighttime urban traffic scene with a dark environmental illumination condition and low traffic flow.
- the proposed system correctly detected and tracked nearly all moving cars and motorbikes on a free-flowing urban road by locating, grouping, and classifying their vehicle lights.
- a few detection errors occurred when some cars with broken (single) headlights were misclassified as motorbikes.
- Table 3 depicts the quantitative results of the proposed approach for vehicle detection and tracking on this urban road.
- The vehicle tracking system and method of the present invention can separate out lamp images for later image processing, and a single lamp is used as the basis for tracking multiple lamps.
- The present invention can thus improve the accuracy of detecting vehicles at nighttime and overcome the difficulty of identifying vehicles at night.
Description
D_h(C_i, C_j) = max(l_{C_i}, l_{C_j}) − min(r_{C_i}, r_{C_j}) (1)
D_v(C_i, C_j) = max(t_{C_i}, t_{C_j}) − min(b_{C_i}, b_{C_j}) (2)
P_h(C_i, C_j) = −D_h(C_i, C_j) / min[W(C_i), W(C_j)] (3)
P_v(C_i, C_j) = −D_v(C_i, C_j) / min[H(C_i), H(C_j)] (4)
LW(C_i) = |f_{l+1}(C_Y(C_i)) − f_l(C_Y(C_i))| (6)
τ_RL ≤ W(C_i)/H(C_i) ≤ τ_RH (7)
τ_AL < A(C_i) < τ_AH (8)
D_h(C_i, C_j) < min[W(C_i), W(C_j)] (9)
D_v(C_i, C_j) < 2.0 · min[H(C_i), H(C_j)] (10)
P_h(C_i, C_j) > T_hp (11)
TP_i^t = {P_i^1, P_i^2, …, P_i^t} (13)
S_o(P_i^t, TP_j^{t−1}) > τ_mp (15)
f_l(C_Y(TP_i^{t−τ})) < C_X(TP_i^{t−τ}) < f_{l+1}(C_Y(TP_i^{t−τ})), and
f_l(C_Y(TP_j^{t−τ})) < C_X(TP_j^{t−τ}) < f_{l+1}(C_Y(TP_j^{t−τ})) (17)
H(TP_S^{t−τ}) / H(TP_L^{t−τ}) > T_h (18)
Δx_k^{t−1} = C_X(TG_k^{t−1}) − C_X(TG_k^{t−2})
Δy_k^{t−1} = C_Y(TG_k^{t−1}) − C_Y(TG_k^{t−2}) (19)
where C_X(TG_k^t) and C_Y(TG_k^t) respectively represent the horizontal and vertical positions of the tracked component group TG_k^t in image coordinates, defined as the center of the group's bounding box, C_X(TG_k^t) = (l_{TG_k^t} + r_{TG_k^t})/2 and C_Y(TG_k^t) = (t_{TG_k^t} + b_{TG_k^t})/2.
S_o(TP_{i'}^t, TG_k^{t−1}) > τ_mg (21)
τ_m1 ≤ W(TP_i^t) / H(TP_i^t) ≤ τ_m2 (22)
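The spatial pairing rules on connected-component bounding boxes can be sketched as below. Boxes are given as (l, t, r, b); the forms of D_h and D_v (the gap between boxes, negative when they overlap) are reconstructions consistent with Eqs. (3) and (4), and the T_hp value is an illustrative assumption.

```python
# Sketch of the pairing rules of Eqs. (9)-(11) on bounding boxes (l, t, r, b).

T_HP = 0.5  # proximity threshold of Eq. (11); illustrative value

def d_h(a, b):  # horizontal distance between boxes, cf. Eqs. (1)/(3)
    return max(a[0], b[0]) - min(a[2], b[2])

def d_v(a, b):  # vertical distance between boxes, cf. Eqs. (2)/(4)
    return max(a[1], b[1]) - min(a[3], b[3])

def width(c):  return c[2] - c[0]
def height(c): return c[3] - c[1]

def may_pair(a, b):
    """True if two connected components satisfy the pairing conditions."""
    if d_h(a, b) >= min(width(a), width(b)):          # Eq. (9)
        return False
    if d_v(a, b) >= 2.0 * min(height(a), height(b)):  # Eq. (10)
        return False
    p_h = -d_h(a, b) / min(width(a), width(b))        # Eq. (3)
    return p_h > T_HP                                 # Eq. (11)
```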
TABLE 1
Experimental data of the vehicle detection and tracking of the traffic scene on the urban road in FIG. 21

| Lane | Detected Vehicles | Actual Vehicles |
| --- | --- | --- |
| Lane 1 | 921 | 969 |
| Lane 2 | 292 | 300 |
| Lane 3 | 228 | 233 |
| Total No. Cars | 887 | 909 |
| Total No. Motorbikes | 584 | 593 |
| Detection Score J of Cars | 97.58% | |
| Detection Score J of Motorbikes | 98.48% | |
| Time span of the video | 50 minutes | |
TABLE 2
Experimental data of the vehicle detection on a nighttime highway scene in FIG. 22

| Lane | Detected Vehicles | Actual Vehicles |
| --- | --- | --- |
| Lane 1 | 1392 | 1428 |
| Lane 2 | 1527 | 1535 |
| Lane 3 | 1495 | 1536 |
| Total No. Cars | 4397 | 4499 |
| Detection Rate J of Cars | 97.73% | |
| Time span of the video | 50 minutes | |
TABLE 3
Experimental data of the vehicle detection and tracking on the urban road scene in FIG. 23

| Lane | Detected Vehicles | Actual Vehicles |
| --- | --- | --- |
| Lane 1 | 131 | 137 |
| Lane 2 | 111 | 113 |
| Lane 3 | 67 | 69 |
| Lane 4 | 36 | 36 |
| Total No. Cars | 163 | 165 |
| Total No. Motorbikes | 184 | 190 |
| Detection Score J of Cars | 98.79% | |
| Detection Score J of Motorbikes | 96.84% | |
| Time span of the video | 20 minutes | |
Claims (10)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW099111933 | 2010-04-15 | ||
TW99111933A | 2010-04-15 | ||
TW099111933A TWI408625B (en) | 2010-04-15 | 2010-04-15 | Vehicle tracking system and tracking method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
US20110255740A1 US20110255740A1 (en) | 2011-10-20 |
US8503725B2 true US8503725B2 (en) | 2013-08-06 |
Family
ID=44788232
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/905,576 Expired - Fee Related US8503725B2 (en) | 2010-04-15 | 2010-10-15 | Vehicle tracking system and tracking method thereof |
Country Status (2)
Country | Link |
---|---|
US (1) | US8503725B2 (en) |
TW (1) | TWI408625B (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9031948B1 (en) | 2011-07-06 | 2015-05-12 | Shawn B. Smith | Vehicle prediction and association tool based on license plate recognition |
US8768009B1 (en) | 2011-07-26 | 2014-07-01 | Shawn B. Smith | Locating persons of interest based on license plate recognition information |
US10018703B2 (en) * | 2012-09-13 | 2018-07-10 | Conduent Business Services, Llc | Method for stop sign law enforcement using motion vectors in video streams |
TWI459332B (en) * | 2012-05-15 | 2014-11-01 | Ind Tech Res Inst | Method and system for integrating multiple camera images to track vehicle |
CN103150898B (en) * | 2013-01-25 | 2015-07-29 | 大唐移动通信设备有限公司 | A kind of vehicle detection at night method, tracking and device |
JP6094252B2 (en) * | 2013-02-20 | 2017-03-15 | 株式会社デンソー | Road sign recognition device |
TWI498527B (en) * | 2014-01-28 | 2015-09-01 | Chunghwa Telecom Co Ltd | Submarine Vehicle Surrounding System and Method |
TWI638569B (en) * | 2014-11-05 | 2018-10-11 | 晶睿通訊股份有限公司 | Surveillance system and surveillance method |
CN104809470B (en) * | 2015-04-23 | 2019-02-15 | 杭州中威电子股份有限公司 | A kind of vehicle based on SVM drives in the wrong direction detection device and detection method |
ITUB20154942A1 (en) * | 2015-10-23 | 2017-04-23 | Magneti Marelli Spa | Method to detect an incoming vehicle and its system |
US10789727B2 (en) * | 2017-05-18 | 2020-09-29 | Panasonic Intellectual Property Corporation Of America | Information processing apparatus and non-transitory recording medium storing thereon a computer program |
CN109934079A (en) * | 2017-12-18 | 2019-06-25 | 华创车电技术中心股份有限公司 | Low lighting environment object monitoring device and its monitoring method |
US10540554B2 (en) * | 2018-03-29 | 2020-01-21 | Toyota Jidosha Kabushiki Kaisha | Real-time detection of traffic situation |
CN109448052B (en) * | 2018-08-29 | 2022-01-28 | 浙江大丰实业股份有限公司 | Follow spot driving mechanism based on image analysis |
TWI686748B (en) * | 2018-12-07 | 2020-03-01 | 國立交通大學 | People-flow analysis system and people-flow analysis method |
CN109740595B (en) * | 2018-12-27 | 2022-12-30 | 武汉理工大学 | Oblique vehicle detection and tracking system and method based on machine vision |
CN110188631A (en) * | 2019-05-14 | 2019-08-30 | 重庆大学 | A kind of freeway tunnel car light dividing method |
CN111275981A (en) * | 2020-01-21 | 2020-06-12 | 长安大学 | Method for identifying starting brake lamp and double-flashing lamp of highway vehicle |
CN111723650A (en) * | 2020-05-09 | 2020-09-29 | 华南师范大学 | Night vehicle detection method, device, equipment and storage medium |
TWI773112B (en) * | 2021-01-29 | 2022-08-01 | 財團法人資訊工業策進會 | Road surveillance system, apparatus, and method |
CN113096051B (en) * | 2021-04-30 | 2023-08-15 | 上海零眸智能科技有限公司 | Map correction method based on vanishing point detection |
- 2010-04-15 TW TW099111933A patent/TWI408625B/en not_active IP Right Cessation
- 2010-10-15 US US12/905,576 patent/US8503725B2/en not_active Expired - Fee Related
Non-Patent Citations (1)
- Yen-Lin Chen, Bing-Fei Wu, and Chung-Jui Fan, "Real-time vision-based multiple vehicle detection and tracking for nighttime traffic surveillance," IEEE International Conference on Systems, Man and Cybernetics (SMC 2009), pp. 3352-3358, Oct. 11-14, 2009.
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140362230A1 (en) * | 2011-10-20 | 2014-12-11 | Xerox Corporation | Method and systems of classifying a vehicle using motion vectors |
US9286516B2 (en) * | 2011-10-20 | 2016-03-15 | Xerox Corporation | Method and systems of classifying a vehicle using motion vectors |
US9898672B2 (en) | 2016-02-02 | 2018-02-20 | Institute For Information Industry | System and method of detection, tracking and identification of evolutionary adaptation of vehicle lamp |
CN107578048A (en) * | 2017-08-02 | 2018-01-12 | 浙江工业大学 | A kind of long sight scene vehicle checking method based on vehicle rough sort |
US10997537B2 (en) * | 2017-12-28 | 2021-05-04 | Canon Kabushiki Kaisha | Information processing apparatus, system, method, and non-transitory computer-readable storage medium for adjusting a number of workers in a workshop |
Also Published As
Publication number | Publication date |
---|---|
TW201135680A (en) | 2011-10-16 |
US20110255740A1 (en) | 2011-10-20 |
TWI408625B (en) | 2013-09-11 |
Legal Events
| Code | Title | Description |
| --- | --- | --- |
| AS | Assignment | Owner name: NATIONAL CHIAO TUNG UNIVERSITY, TAIWAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: WU, BING-FEI; CHEN, YEN-LIN; HUANG, HAO-YU; AND OTHERS. Reel/Frame: 025148/0162. Effective date: 20100921 |
| STCF | Information on status: patent grant | PATENTED CASE |
| FPAY | Fee payment | Year of fee payment: 4 |
| FEPP | Fee payment procedure | MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| LAPS | Lapse for failure to pay maintenance fees | PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| STCH | Information on status: patent discontinuation | PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
| FP | Lapsed due to failure to pay maintenance fee | Effective date: 20210806 |