CN104252629A - Target Detection And Tracking Method And System - Google Patents


Info

Publication number
CN104252629A
CN104252629A (application CN201310349397.0A)
Authority
CN
China
Prior art keywords
frame
display area
feature point
tracking
extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310349397.0A
Other languages
Chinese (zh)
Inventor
范钦雄
赵修鼎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of CN104252629A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

A target detection and tracking method and system. In the method, a frame of a video is extracted as a first frame, and the first frame is scanned to detect the display area of a target object within it. The target object is then tracked according to the area ratio that the display area occupies in the first frame, together with at least one reference frame that follows the first frame in the video, and scale-invariant and rotation-invariant features of the tracked target object are recorded. These scale-invariant and rotation-invariant features are subsequently used to recognize whether the target object appears in a specific frame.

Description

Target detection and tracking method and system
Technical field
The present invention relates to a target detection and tracking method and system, and more particularly to a method and system for detecting and tracking one or more targets in real time in complex environments.
Background
In recent years, vision-based monitoring technology has advanced rapidly and is widely applied across many fields. Taking intelligent transportation systems (Intelligent Transportation System, ITS) as an example, vision monitoring is used for ranging and dynamic object detection, so as to warn drivers of improper driving behavior in advance and provide adaptive speed control, thereby avoiding collisions, preventing accidents, and ensuring that vehicle speed stays within a safe range.
Existing ranging technologies are mostly hardware-based, relying on radar or laser. Although radar-based techniques provide accurate depth information and a long detection range, their expensive hardware hinders widespread adoption. Existing dynamic object detection techniques, on the other hand, tend to produce high false-positive rates in environments with complex backgrounds. Most current object detection and tracking techniques achieve good results only against relatively simple backgrounds, and robust detection requires considerable computational cost, making it difficult to balance system efficiency and accuracy.
Summary of the invention
In view of this, the present invention provides a target detection and tracking method and system that offer a fast and robust object detection and tracking mechanism in a variety of environments, including scenes with large illumination variation, frequent dynamic changes, and highly complex backgrounds.
The target detection and tracking method of the present invention comprises the following steps. First, a frame of a video is extracted as a first frame, and the first frame is scanned to detect the display area of a target object in the first frame. Next, the target object is tracked according to the area ratio that the display area occupies in the first frame, together with at least one reference frame following the first frame in the video, and scale-invariant and rotation-invariant features of the tracked target object are recorded. Finally, the scale-invariant and rotation-invariant features are used to recognize whether the target object appears in a specific frame.
In an embodiment of the invention, the target detection and tracking method further comprises: obtaining a plurality of training positive samples and a plurality of training negative samples, extracting texture features from each training positive sample and each training negative sample, and training a target object classifier by using these texture features together with a classifier fusion algorithm.
In an embodiment of the invention, scanning the first frame to detect the display area of the target object comprises: composing a cascade classifier from n target object classifiers, where n is a positive integer, and using the cascade classifier to perform n scanning passes over the first frame to detect the display area of the target object. Each scanning pass comprises moving a detection window across the first frame in a specific direction until the first frame is completely scanned, and then reducing the size of the first frame.
In an embodiment of the invention, tracking the target object according to the area ratio that the display area occupies in the first frame and at least one reference frame following the first frame comprises: determining whether the area ratio exceeds a preset value. If the area ratio does not exceed the preset value, at least one feature point of the target object is extracted from the display area, and the target object is tracked by using the feature points extracted from the display area together with the reference frames. If the area ratio exceeds the preset value, at least one feature point of the target object is extracted from each of a plurality of sub-regions within the display area, and the target object is tracked by using the feature points extracted from the sub-regions together with the reference frames.
In an embodiment of the invention, the sub-regions are the upper-left corner region and the lower-right corner region of the display area.
In an embodiment of the invention, the sub-regions either do not overlap or partially overlap each other.
In an embodiment of the invention, the reference frames comprise a second frame adjacent to the first frame and a third frame adjacent to the second frame. Tracking the target object according to the area ratio and the reference frames comprises: extracting at least one feature point of the target object from the first frame, calculating a prediction range in the second frame according to a Kalman filter, and executing an optical flow tracking algorithm to estimate, within the prediction range of the second frame, the movement information of the feature points extracted from the first frame. Then, at least one feature point of the target object is extracted from the second frame, a prediction range in the third frame is calculated according to the Kalman filter, and the optical flow tracking algorithm is executed to estimate, within the prediction range of the third frame, the movement information of the feature points extracted from the second frame.
In an embodiment of the invention, the target detection and tracking method further comprises: establishing the scale-invariant and rotation-invariant features by using the Speeded-Up Robust Features (SURF) algorithm.
In an embodiment of the invention, the specific frame is any frame that follows all of the reference frames in time, and recognizing whether the target object appears in the specific frame comprises: extracting a feature to be recognized from the specific frame and comparing it with the scale-invariant and rotation-invariant features. When the feature to be recognized matches the scale-invariant and rotation-invariant features, the target object is determined to appear in the specific frame.
The target detection and tracking system of the present invention comprises a storage unit and a processing unit coupled to each other. The storage unit records a plurality of modules, and the processing unit accesses and executes the modules recorded in the storage unit. The modules comprise a detection module, a tracking module, and a recognition module. The detection module extracts a frame of a video as a first frame and scans the first frame to detect the display area of a target object in the first frame. The tracking module tracks the target object according to the area ratio that the display area occupies in the first frame and at least one reference frame following the first frame in the video, and records scale-invariant and rotation-invariant features of the tracked target object. The recognition module uses the scale-invariant and rotation-invariant features to recognize whether the target object appears in a specific frame.
Based on the above, the target detection and tracking method and system of the present invention detect target objects in complex dynamic scenes in real time by using a machine-learning algorithm with a cascade architecture, and then track the target objects by using a tracking mechanism that combines an optical flow tracking algorithm with a Kalman filter. The method and system can be widely applied in various environments and provide fast and robust detection and tracking.
To make the above features and advantages of the present invention more apparent, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a block diagram of a target detection and tracking system according to an embodiment of the invention.
Fig. 2 is a flowchart of a target detection and tracking method according to an embodiment of the invention.
Fig. 3 is a schematic diagram of training a target object classifier according to an embodiment of the invention.
Fig. 4 is a schematic diagram of a cascade classifier according to an embodiment of the invention.
Fig. 5 is a schematic diagram of a cascade classifier performing several scanning passes according to an embodiment of the invention.
Fig. 6A to 6C are schematic diagrams of extracting feature points of a target object according to several embodiments of the invention.
Fig. 7 is a schematic diagram of tracking a target object across adjacent frames according to an embodiment of the invention.
Fig. 8 is a flowchart of recognizing whether a target object appears in a specific frame according to an embodiment of the invention.
[Description of symbols]
100: target detection and tracking system
110: storage unit
111: training module
113: detection module
115: tracking module
117: recognition module
120: processing unit
S210 ~ S230: steps of the target detection and tracking method according to an embodiment of the invention
310: training positive samples
320: training negative samples
330: feature extraction unit
340: training unit
400: cascade classifier
C1, C2, C3, C4: target object classifiers
F1: first frame
NDR1, NDR2, NDR3, NDR4: non-target-object regions
DR: display area
W: detection window
60: target object
63: display area
63': shrunken region
65, 67: sub-regions
+: feature point
R1, R2, R3: prediction ranges
O1, O2, O3: optical flow operation regions
S810 ~ S850: steps of recognizing whether a target object appears in a specific frame according to an embodiment of the invention
Embodiment
Fig. 1 is a block diagram of a target detection and tracking system according to an embodiment of the invention. Referring to Fig. 1, the target detection and tracking system 100 of this embodiment may be implemented in a computer system, a workstation, a server, or any electronic device with computing and processing capability. The target detection and tracking system 100 comprises a storage unit 110 and a processing unit 120 coupled to each other, whose functions are described below.
The storage unit 110 is, for example, a random access memory (RAM), a read-only memory (ROM), a flash memory, a hard disk, another similar device, or a combination of these devices, and records a plurality of modules executable by the processing unit 120. These modules can be loaded into the processing unit 120 to perform the target object detection and tracking functions.
The processing unit 120 may be a central processing unit (CPU), another programmable general-purpose or special-purpose microprocessor, a digital signal processor (DSP), a programmable controller, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), another similar device, or a combination of these devices. The processing unit 120 accesses and executes the modules recorded in the storage unit 110, so that the target detection and tracking system 100 can detect and track one or more target objects in real time.
The modules recorded in the storage unit 110 comprise a training module 111, a detection module 113, a tracking module 115, and a recognition module 117. These modules may be computer programs that, when executed by the processing unit 120, realize the target object detection and tracking functions. The detailed operation of the target detection and tracking system 100 is described in the following embodiments.
Fig. 2 is a flowchart of a target detection and tracking method according to an embodiment of the invention; please refer to Fig. 1 and Fig. 2 together. In this embodiment, the target detection and tracking system 100 receives a video captured by a network camera (or another video capture device) in any of various environments (for example, a highway, a suburb, or a driving scene, but not limited thereto), and detects and tracks the target objects of a particular type in the video. The video comprises a plurality of static frames, and a target object may be a vehicle, a face, or an object of another type. First, in step S210, the detection module 113 extracts a frame of the video as a first frame and scans the first frame to detect the display area of a target object in the first frame.
In detail, in order for the detection module 113 to detect, in the online detection phase, whether a target object appears in a given frame and where its display area is, the training module 111 of this embodiment obtains a large number of training positive and negative samples in an offline training phase and trains, by a machine learning method, a target object classifier that distinguishes target objects from background. Fig. 3 is a schematic diagram of training a target object classifier according to an embodiment of the invention. As shown in Fig. 3, the training module 111 obtains a plurality of training positive samples 310 and a plurality of training negative samples 320, where a training positive sample 310 is an image containing an object of the same type as the target object, and a training negative sample 320 is an image containing no object of that type. The training module 111 uses the feature extraction unit 330 to extract texture features from all of the training positive samples 310 and training negative samples 320. For example, the feature extraction unit 330 may use the integral image technique to quickly obtain the salient texture features of each training positive sample 310 and each training negative sample 320; the texture features are, for example, Haar-like features, but the invention is not limited thereto. Thereafter, the training module 111 uses the training unit 340 to execute a classifier fusion algorithm on the texture features of the training positive and negative samples, thereby training a target object classifier. For example, the training unit 340 may use the Adaptive Boosting (AdaBoost) algorithm to train a target object classifier that distinguishes positive samples (that is, objects of the same type as the target object) from negative samples (that is, objects of a different type). This target object classifier is composed of multiple weak classifiers and serves as the basic element for detecting target objects.
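As an illustrative sketch (not part of the patent text itself): the integral image mentioned above lets the sum of any rectangular region, and hence a Haar-like feature, which is a difference of rectangle sums, be computed in constant time after one linear pass. The following minimal pure-Python example shows the idea; the two-rectangle feature layout is a hypothetical example.

```python
def integral_image(img):
    """Build a summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, using at most 4 lookups."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

def haar_two_rect(ii, top, left, bottom, right):
    """A two-rectangle Haar-like feature: left half minus right half."""
    mid = (left + right) // 2
    return rect_sum(ii, top, left, bottom, mid) - rect_sum(ii, top, mid + 1, bottom, right)
```

Because every rectangle sum costs four table lookups regardless of rectangle size, thousands of Haar-like features can be evaluated per detection window, which is what makes the cascade training and scanning below feasible.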
In the online detection phase, in order to balance detection speed and accuracy, the detection module 113 composes a cascade classifier from n target object classifiers (n being a positive integer) and uses the cascade classifier to perform n scanning passes over the first frame to detect the display area of the target object. Because the cascade classifier rejects, as early as possible, the regions of the first frame that cannot belong to a target object, the detection speed is significantly increased. Fig. 4 is a schematic diagram of a cascade classifier according to an embodiment of the invention. Referring to Fig. 4, in this example the detection module 113 composes a cascade classifier 400 from four target object classifiers C1 to C4. The first frame F1 passes through the classifiers C1 to C4 level by level; the classifier at each level excludes from the first frame F1 the non-target-object regions (that is, the regions NDR1 to NDR4 that do not belong to the target object), and the region that passes all of the classifiers C1 to C4 is the display area DR of the target object.
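The early-rejection behavior of such a cascade can be sketched in a few lines; the stage functions below are hypothetical placeholders (simple brightness/variance checks), not the trained AdaBoost stages of the patent:

```python
def cascade_classify(window, stages):
    """Return True only if the window passes every stage.

    `stages` is an ordered list of functions; each returns True
    (possibly a target) or False (definitely background). Cheap stages
    come first, so most background windows are rejected after very
    little work.
    """
    for stage in stages:
        if not stage(window):
            return False  # early rejection: stop evaluating this window
    return True

# Hypothetical stages operating on a window summarized as (mean, variance).
stages = [
    lambda w: w[0] > 10,   # stage 1: minimum brightness
    lambda w: w[1] > 5,    # stage 2: minimum texture variance
    lambda w: w[0] < 200,  # stage 3: not saturated
]
```

The design point the patent relies on is that the vast majority of scanned windows fail an early stage, so the expensive later stages run only on a small fraction of the frame.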
Fig. 5 is a schematic diagram of the cascade classifier 400 performing four scanning passes. Referring to Fig. 5, in order to locate the display area of the target object in the first frame F1, in each scanning pass the cascade classifier 400 moves a detection window W across the first frame F1 in a specific direction until the frame is completely scanned; the specific direction is, for example, left to right and then top to bottom. The cascade classifier 400 then reduces the size of the first frame F1 and performs the next scanning pass, so that display areas of target objects of different sizes can be found in the first frame F1. The scanning scheme shown in Fig. 5 is also known as image-pyramid scanning.
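The pyramid scan above can be sketched as follows; the window size, stride, and scale factor are illustrative assumptions, not values from the patent:

```python
def pyramid_windows(frame_w, frame_h, win=24, stride=8, scale=0.5, min_size=24):
    """Yield (scale_level, x, y) for a fixed window slid over shrinking frames.

    Instead of growing the window, the frame is repeatedly downscaled,
    so one fixed-size detection window matches targets of different
    sizes: large objects are caught at the coarser (smaller) levels.
    """
    level = 0
    w, h = frame_w, frame_h
    while w >= min_size and h >= min_size:
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                yield (level, x, y)
        w, h = int(w * scale), int(h * scale)
        level += 1

windows = list(pyramid_windows(64, 48))
```

Each yielded position would be handed to the cascade classifier; a window position at a coarser pyramid level corresponds to a proportionally larger region of the original frame.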
As described above, the detection module 113 can detect the display areas of all target objects of the particular type in the first frame; however, for ease of description it is assumed below that the first frame contains only one target object, so the detection module 113 detects only one display area.
After the detection module 113 detects the display area of the target object, the tracking module 115 of this embodiment uses a tracking mechanism that combines a Kalman filter with an optical flow tracking algorithm to achieve fast and stable tracking. In detail, as shown in step S220, the tracking module 115 tracks the target object according to the area ratio that the display area occupies in the first frame and at least one reference frame following the first frame in the video, and records scale-invariant and rotation-invariant features of the tracked target object. In this embodiment, the tracking module 115 executes the optical flow tracking algorithm only on a certain part (or certain parts) of a frame rather than on the whole frame. For example, after the detection module 113 detects the display area, the tracking module 115 applies the optical flow tracking algorithm only to the display area (or to one or more smaller regions within it): for the feature points extracted from the relevant region of the current frame, it finds the displaced positions of those feature points in the next frame, thereby tracking the target object. Several examples below illustrate how the tracking module 115 extracts feature points for the optical flow tracking algorithm.
In general, a video captures a target object moving through the environment, so the size of the same target object differs from frame to frame. When the tracked target object is very close to the camera capturing the video, the display area of the target object occupies a considerably large area ratio of the frame. Since the tracking module 115 would then need to process a large region, computational efficiency would suffer. To improve processing speed, for an oversized target object in a frame, the tracking module 115 uses the information of only a few local regions of the target object for tracking, which reduces the computation time spent on such objects.
Specifically, in this embodiment the tracking module 115 determines whether the area ratio that the display area of the target object occupies in the first frame exceeds a preset value; the preset value depends on the resolution of the video and the size of the frames, and a suitable value can be determined experimentally. If the area ratio does not exceed the preset value, the target object is not overly large relative to the first frame, so the tracking module 115 treats it as a whole: it extracts at least one feature point of the target object from the display area and uses those feature points together with the reference frames to track the target object. If the area ratio exceeds the preset value, the target object occupies a large area of the first frame, and the tracking module 115 divides it into several parts for tracking: it defines a plurality of sub-regions within the display area, extracts at least one feature point of the target object from each sub-region, and uses the feature points of all the sub-regions together with the reference frames to track the target object.
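The area-ratio decision can be sketched as follows; the threshold value and the corner sub-region geometry are illustrative assumptions rather than values taken from the patent:

```python
def regions_to_track(frame_w, frame_h, box, ratio_threshold=0.25):
    """Return the region(s) from which to extract feature points.

    `box` is the detected display area as (x, y, w, h). If it covers
    more than `ratio_threshold` of the frame, track two corner
    sub-regions instead of the whole box (here: the upper-left and
    lower-right quarters of the display area).
    """
    x, y, w, h = box
    ratio = (w * h) / (frame_w * frame_h)
    if ratio <= ratio_threshold:
        return [box]  # small enough: track the display area as a whole
    half_w, half_h = w // 2, h // 2
    return [
        (x, y, half_w, half_h),                            # upper-left corner
        (x + w - half_w, y + h - half_h, half_w, half_h),  # lower-right corner
    ]
```

With a quarter-of-box sub-region size the two corners do not overlap; choosing a larger sub-region size would make them partially overlap, matching the two variants shown in Fig. 6B and 6C.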
Fig. 6A to 6C are schematic diagrams of extracting feature points of a target object according to several embodiments of the invention. In Fig. 6A, it is assumed that the display area 63 of the target object 60 (a vehicle) in the first frame F1 occupies an area ratio of the first frame F1 that does not exceed the preset value. In one embodiment, the tracking module 115 extracts one or more feature points of the target object directly from the display area 63. In another embodiment, because the edges of the display area 63 may contain background that does not belong to the target object, the tracking module 115 may, to avoid extracting feature points that belong to the background, obtain a shrunken region 63' formed by contracting the display area 63 toward its center, and extract from this shrunken region one or more feature points of the target object 60 (shown as the symbol "+" in Fig. 6A). In either case, as long as the area ratio that the display area 63 occupies in the first frame F1 does not exceed the preset value, the tracking module 115 treats the target object as a whole for tracking.
In the example shown in Fig. 6B, it is assumed that the display area 63 of the target object 60 in the first frame F1 occupies an area ratio of the first frame F1 that exceeds the preset value. To divide the target object 60 into several parts for tracking, the tracking module 115 first defines two sub-regions 65 and 67 in the first frame F1 and extracts one or more feature points of the target object 60 from each of them (shown as the symbol "+" in Fig. 6B). In this embodiment, sub-region 65 is the upper-left corner region of the display area 63, sub-region 67 is the lower-right corner region, and the two sub-regions overlap. It should be noted that the sizes of the sub-regions 65 and 67 may be a preset fixed size or may be adjusted dynamically with the size of the display area 63. Moreover, because the edges of the display area 63 may contain background not belonging to the target object, the sub-regions 65 and 67 are not placed exactly at the edges of the display area 63 but are inset by a specific distance toward its center, so that the extracted feature points do not fall in the background.
In the embodiment shown in Fig. 6C, it is likewise assumed that the display area 63 of the target object 60 in the first frame F1 occupies an area ratio exceeding the preset value. The tracking module 115 defines two sub-regions 65 and 67 in the first frame F1 and extracts one or more feature points of the target object 60 from each of them (shown as the symbol "+" in Fig. 6C), thereby dividing the target object 60 into several parts for tracking. As shown in Fig. 6C, in this example the sub-regions 65 and 67 do not overlap.
Although in the embodiments of Fig. 6B and 6C the two sub-regions are defined as the upper-left and lower-right corner regions of the display area 63, the invention is not limited thereto. In other embodiments, the tracking module 115 may define more than two sub-regions, and the positions of the sub-regions are not limited to the upper-left and lower-right corner regions of the display area.
On the other hand, a conventional optical flow tracking algorithm extracts feature points once and then follows the movement of that same set of feature points throughout. Because this embodiment is applied in complex environments with large illumination variation, the tracking module 115 adapts to scene changes by using each extracted set of feature points for only one optical flow tracking pass. In detail, the tracking module 115 extracts a set of feature points for each frame, performs a single optical flow tracking pass with that set, and then immediately extracts a fresh set of feature points from the new frame. Since each set of feature points is used for only a very short time, this avoids the conventional optical flow constraint that brightness must remain constant. In this embodiment, the tracking module 115 also uses a Kalman filter to predict the range in which the feature points may appear in the next frame, balancing tracking speed and accuracy.
Specifically, suppose the reference frames used for tracking comprise a second frame adjacent to the first frame and a third frame adjacent to the second frame. First, the tracking module 115 extracts at least one feature point of the target object from the first frame (hereafter the first feature point set); the first feature point set is extracted from the display area of the target object in the first frame. The tracking module 115 then calculates, according to the Kalman filter, a prediction range in the second frame, which is the region where the first feature point set may be located in the second frame. Next, the tracking module 115 executes the optical flow tracking algorithm to estimate the movement information of the first feature point set within the prediction range of the second frame. After this optical flow tracking pass between the first and second frames is completed, the tracking module 115 discards the first feature point set and extracts at least one new feature point of the target object from the second frame (for example, from the display area of the target object in the second frame, or from one or more smaller regions within it; hereafter the second feature point set). It then calculates, according to the Kalman filter, a prediction range in the third frame (that is, the region where the second feature point set may be located in the third frame) and executes the optical flow tracking algorithm to estimate the movement information of the second feature point set within the prediction range of the third frame, completing the optical flow tracking pass between the second and third frames, and so on.
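The per-frame loop above (predict a search range with the Kalman filter, run one optical flow pass, then re-extract fresh points) can be sketched as follows. The constant-velocity predictor is a deliberately simplified stand-in for a full Kalman filter (no covariances or gain), and `extract_points` and `optical_flow` are hypothetical stand-ins for the real algorithms:

```python
class CVPredictor:
    """Minimal constant-velocity predictor for a 2-D point.

    Keeps a position and velocity estimate per axis and predicts where
    to search next; a full Kalman filter would additionally maintain
    covariances and blend prediction with measurement via a gain.
    """
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx = self.vy = 0.0

    def predict(self):
        return (self.x + self.vx, self.y + self.vy)

    def update(self, x, y):
        self.vx, self.vy = x - self.x, y - self.y
        self.x, self.y = x, y

def track(frames, start, extract_points, optical_flow):
    """One optical flow pass per frame pair, with fresh points each frame."""
    kf = CVPredictor(*start)
    trajectory = [start]
    for prev, curr in zip(frames, frames[1:]):
        center = kf.predict()           # predicted search range in `curr`
        pts = extract_points(prev)      # fresh feature set, used only once
        moved = optical_flow(pts, prev, curr, near=center)
        cx = sum(p[0] for p in moved) / len(moved)
        cy = sum(p[1] for p in moved) / len(moved)
        kf.update(cx, cy)               # correct with the new measurement
        trajectory.append((cx, cy))
    return trajectory
```

Discarding `pts` after each pass is the key departure from conventional optical flow tracking that the paragraph above describes: no feature set survives long enough for the brightness-constancy assumption to be violated by illumination change.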
Fig. 7 is a schematic diagram of the tracking module 115 using three adjacent frames of the video to track the target object. The prediction ranges R1, R2, R3 are calculated according to the Kalman filter and are the regions where the feature points extracted from the previous frame may appear. The smaller optical flow operation regions O1, O2, O3 are where the feature points extracted from the previous frame are actually located. Because the target object moves through the environment, its size differs from frame to frame, so, as shown in Fig. 7, the sizes of the prediction ranges R1, R2, R3 and of the optical flow operation regions O1, O2, O3 vary accordingly. The interaction of the optical flow tracking algorithm and the Kalman filter prevents this variation from producing tracking errors. It should be noted that although Fig. 7 illustrates tracking with three adjacent frames, the number of frames is not limited to three; any tracking that combines the optical flow tracking algorithm with the Kalman filter and performs only one optical flow tracking pass per set of feature points extracted from a frame falls within the scope of the tracking module 115 of the present invention.
In this embodiment, once a target object is tracked, the tracking module 115 uses the Speeded-Up Robust Features (SURF) algorithm to establish and store scale-invariant and rotation-invariant features of the target object. These features allow the target detection and tracking system 100 to track the target object over the long term: regardless of whether the target object has disappeared from the video, as long as it reappears, the system 100 can use its scale-invariant and rotation-invariant features to confirm its identity. As shown in step S230 of Fig. 2, the recognition module 117 uses the scale-invariant and rotation-invariant features to recognize whether the target object appears in a specific frame; the specific frame is, for example, any frame that appears after the reference frames in time.
Fig. 8 is a flowchart of recognizing whether a target object appears in a specific frame according to an embodiment of the invention. Referring to Fig. 8, first, in step S810, the recognition module 117 extracts a feature to be recognized from the specific frame. Then, in step S820, the recognition module 117 compares the feature to be recognized with the previously stored scale-invariant and rotation-invariant features, for example by using the k-nearest-neighbor (KNN) algorithm. Next, in step S830, the recognition module 117 determines, according to the comparison result, whether the feature to be recognized matches the scale-invariant and rotation-invariant features. If it does not match, then, as shown in step S840, the recognition module 117 determines that the target object does not appear in the specific frame. Otherwise, as shown in step S850, the recognition module 117 determines that the target object appears in the specific frame.
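A descriptor comparison in the spirit of steps S820 to S850 can be sketched as follows. This is a simplified 1-nearest-neighbor variant of the KNN comparison; the toy 2-D descriptors, the distance threshold, and the voting fraction are all illustrative assumptions (real SURF descriptors are 64- or 128-dimensional):

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nn_match(candidates, stored, max_dist=1.0, min_frac=0.5):
    """Decide whether candidate descriptors match the stored ones.

    Each candidate descriptor "votes" for a match if its nearest stored
    descriptor lies within `max_dist`; the object is declared present
    when at least `min_frac` of the candidates vote yes.
    """
    votes = sum(
        1 for c in candidates
        if min(euclidean(c, s) for s in stored) <= max_dist
    )
    return votes >= min_frac * len(candidates)

stored = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
```

Because SURF descriptors are scale- and rotation-invariant, the same vote rule works whether the reappearing object is nearer, farther, or rotated relative to when its features were stored.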
In the embodiments described above, detection and tracking are performed on a single object; it should be noted, however, that the target detection and tracking system 100 can detect and track several objects in the video simultaneously. When the detection module 113 detects the display area of an object in a new frame, the tracking module 115 first judges whether this object is an object that is already being tracked. The object is then tracked by the tracking mechanism that combines the optical-flow tracking algorithm and the Kalman filter, and the tracking result (for example, the position and size of the object) is stored.
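The patent does not spell out how the tracking module decides whether a new detection belongs to an already-tracked object. A common choice, sketched below under that assumption, is to associate a detection with the existing track whose stored box overlaps it most (intersection-over-union); the `match_detection` helper and its threshold are invented for the example.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def match_detection(detection, tracks, threshold=0.3):
    """Return the id of the existing track the new detection belongs to,
    or None if it should start a new track. `tracks` maps id -> (x, y, w, h)."""
    best_id, best_iou = None, threshold
    for track_id, box in tracks.items():
        overlap = iou(detection, box)
        if overlap > best_iou:
            best_id, best_iou = track_id, overlap
    return best_id
```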
In summary, the target detection and tracking method and system of the invention first train an object classifier using a multiple-classifier-fusion algorithm and texture features extracted from a large number of positive and negative training samples, so as to distinguish the object from the background in a frame; accordingly, the position of the object can be detected even in a dynamic environment. For a detected object, a tracking mechanism combining the Kalman filter and the optical-flow tracking algorithm is then used, achieving fast and stable tracking. Once the object is tracked, its anti-scaling and anti-rotation feature is immediately built and stored, so that even if the object temporarily disappears from the video, the anti-scaling and anti-rotation feature can be used to recognize its identity when it appears again. The target detection and tracking method and system of the invention thus achieve fast and accurate tracking in dynamic environments.
Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Those skilled in the art may make changes and modifications without departing from the spirit and scope of the invention; the protection scope of the invention shall therefore be determined by the appended claims.

Claims (10)

1. A target detection and tracking method, comprising:
extracting a frame in a video as a first frame, and scanning the first frame to detect a display area of an object in the first frame;
tracking the object according to an area ratio of the first frame occupied by the display area and at least one reference frame following the first frame in the video, and recording an anti-scaling and anti-rotation feature of the tracked object; and
identifying, by using the anti-scaling and anti-rotation feature, whether the object appears in a specific frame.
2. the method for claim 1, wherein extract this picture in this video as this first picture before, the method also comprises: obtain the positive sample of multiple training and multiple training negative sample; Each these of extraction train positive sample and respectively these train a textural characteristics of negative sample; And each these of utilization train positive sample and respectively these train this textural characteristics of negative sample, and a Multiple Classifier Fusion algorithm, train an object sorter,
And scan this first picture, to detect the step of this viewing area of this object in this first picture, comprising: with n this object sorter composition one-level connection formula sorter, wherein n is positive integer; And utilize this tandem type sorter to carry out n scanning sequence to this first picture, to detect this object this viewing area in this first picture, wherein respectively this n time scanning sequence comprises: in this first picture, move a detection form according to a specific direction, also reduce the size of this first picture with this first picture of complete scan subsequently.
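The window-sliding and frame-shrinking procedure of claim 2 can be sketched as a generator of candidate windows. The `window`, `step`, and `scale` values are illustrative, not from the patent; shrinking the frame while keeping the detection window fixed is equivalent to searching for larger objects at coarser pyramid levels.

```python
def scan_frame(frame_w, frame_h, window=24, step=8, scale=1.25):
    """Sketch of one scanning procedure from claim 2: slide a fixed-size
    detection window across the frame along a specific direction (row by
    row here), then shrink the frame and repeat. Yields (x, y, size)
    candidate windows mapped back into original-frame coordinates."""
    factor = 1.0
    w, h = frame_w, frame_h
    while w >= window and h >= window:
        for y in range(0, h - window + 1, step):
            for x in range(0, w - window + 1, step):
                # map the window back to original-frame coordinates
                yield (int(x * factor), int(y * factor), int(window * factor))
        factor *= scale
        w, h = int(frame_w / factor), int(frame_h / factor)
```

In a full detector, each yielded window would be passed through the cascade classifier, with early rejection at each of its n stages.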
3. the method for claim 1, wherein occupies this area ratio of this first picture according to this viewing area, and this at least one reference picture after this first picture that continues in this video, to the step that this object is followed the trail of, comprising:
Judge that whether this area ratio is more than a preset value;
If not, then extract at least one unique point about this object from this viewing area, and utilize this at least one unique point and this at least one reference picture of extracting from this viewing area, this object is followed the trail of; And
If, then from the multiple subregions in this viewing area, extract at least one unique point about this object respectively, and utilize this at least one unique point and this at least one reference picture of extracting from these subregions, this object is followed the trail of, wherein these subregions are a upper left corner area and a lower right field of this viewing area, and these subregions do not overlap each other or partly overlap.
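The area-ratio branch of claim 3 can be sketched as follows. The 0.25 preset value and the quarter-size sub-regions are illustrative assumptions; the patent fixes only that the sub-regions are an upper-left and a lower-right region of the display area.

```python
def feature_regions(box, frame_area, area_threshold=0.25):
    """Claim 3 sketch: when the display area occupies more than a preset
    fraction of the frame, feature points are extracted from two sub-regions
    (upper-left and lower-right parts of the box) rather than from the whole
    display area, which keeps feature extraction cheap for large objects."""
    x, y, w, h = box
    ratio = (w * h) / frame_area
    if ratio <= area_threshold:
        return [box]  # extract feature points from the whole display area
    # upper-left and lower-right sub-regions of the display area
    return [(x, y, w // 2, h // 2),
            (x + w // 2, y + h // 2, w - w // 2, h - h // 2)]
```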
4. the method for claim 1, wherein this at least one reference picture comprises: adjacent to one second picture of this first picture, and adjacent to one the 3rd picture of this second picture, and this area ratio of this first picture is occupied according to this viewing area, and this at least one reference picture continued in this video after this first picture, to the step that this object is followed the trail of, comprising:
At least one unique point about this object is extracted from this first picture;
According to a Kalman filter, calculate one in this second picture and estimate scope;
Perform a light stream tracing algorithm to estimate to extract from this at least one unique point of this first picture, estimate the mobile message in scope at this of this second picture;
At least one unique point about this object is extracted from this second picture;
According to this Kalman filter, calculate one in the 3rd picture and estimate scope; And
Perform this light stream tracing algorithm to estimate to extract from this at least one unique point of this second picture, estimate the mobile message in scope at this of the 3rd picture.
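The predict-then-measure loop of claim 4 can be sketched with a minimal constant-velocity Kalman filter: each frame, `predict()` gives the center of the estimated search range, the optical-flow step measures where the feature points actually moved inside that range, and `update()` folds the measurement back in. The motion model and noise values below are simplified illustrative assumptions, not the patent's parameters.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal constant-velocity Kalman filter over an (x, y) position.
    A simplified stand-in for the filter in claim 4."""

    def __init__(self, x, y):
        self.s = np.array([x, y, 0.0, 0.0])     # state: position and velocity
        self.P = np.eye(4) * 10.0               # state covariance (uncertain start)
        self.F = np.eye(4)                      # constant-velocity motion model
        self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.eye(2, 4)                   # we observe position only
        self.Q = np.eye(4) * 0.01               # process noise (illustrative)
        self.R = np.eye(2) * 1.0                # measurement noise (illustrative)

    def predict(self):
        """Project the state one frame ahead; returns the center of the
        estimated search range for the optical-flow step."""
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, measured_xy):
        """Fold in the position measured by optical flow; returns the
        corrected position estimate."""
        z = np.asarray(measured_xy, dtype=float)
        residual = z - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ residual
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]
```

Restricting the optical-flow search to the predicted range is what makes the combined mechanism both fast (less image to search) and stable (outlier flow vectors far from the prediction are excluded).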
5. the method for claim 1, wherein this specific picture is any picture in sequential after this at least one reference picture, this is nonshrink puts and anti-rotational feature utilizes an acceleration robust feature extraction algorithm to set up, and utilize this nonshrinkly to put and anti-rotational feature, whether this object of identification appears at the step in this specific picture, comprising:
A feature to be identified is extracted from this specific picture;
This feature to be identified of comparison and this nonshrinkly to put and anti-rotational feature; And
When this feature to be identified meet this nonshrink put and anti-rotational feature time, judge that this object comes across in this specific picture.
6. A target detection and tracking system, comprising:
a storage unit, recording a plurality of modules; and
a processing unit, coupled to the storage unit, for accessing and executing the modules in the storage unit, the modules comprising:
a detection module, extracting a frame in a video as a first frame, and scanning the first frame to detect a display area of an object in the first frame;
a tracking module, tracking the object according to an area ratio of the first frame occupied by the display area and at least one reference frame following the first frame in the video, and recording an anti-scaling and anti-rotation feature of the tracked object; and
a recognition module, identifying, by using the anti-scaling and anti-rotation feature, whether the object appears in a specific frame.
7. The target detection and tracking system of claim 6, wherein the modules further comprise:
a training module, obtaining a plurality of positive training samples and a plurality of negative training samples, extracting a texture feature of each of the positive training samples and each of the negative training samples, and training an object classifier by using the texture features of the positive and negative training samples and a multiple-classifier-fusion algorithm,
and the detection module forms a cascade classifier from n of the object classifiers and performs n scanning procedures on the first frame by using the cascade classifier to detect the display area of the object in the first frame, wherein each of the n scanning procedures comprises: moving a detection window in the first frame along a specific direction, and subsequently reducing the size of the first frame so as to completely scan the first frame, and n is a positive integer.
8. The target detection and tracking system of claim 6, wherein the tracking module judges whether the area ratio of the first frame occupied by the display area exceeds a preset value,
if not, extracts at least one feature point of the object from the display area, and tracks the object by using the at least one feature point extracted from the display area and the at least one reference frame,
and if so, extracts at least one feature point of the object from each of a plurality of sub-regions in the display area, and tracks the object by using the at least one feature point extracted from the sub-regions and the at least one reference frame, wherein the sub-regions are an upper-left region and a lower-right region of the display area, and the sub-regions do not overlap each other or partially overlap each other.
9. The target detection and tracking system of claim 6, wherein the at least one reference frame comprises a second frame adjacent to the first frame and a third frame adjacent to the second frame, and the tracking module extracts at least one feature point of the object from the first frame, calculates an estimated range in the second frame according to a Kalman filter, performs an optical-flow tracking algorithm to estimate movement information, within the estimated range of the second frame, of the at least one feature point extracted from the first frame, extracts at least one feature point of the object from the second frame, calculates an estimated range in the third frame according to the Kalman filter, and performs the optical-flow tracking algorithm to estimate movement information, within the estimated range of the third frame, of the at least one feature point extracted from the second frame.
10. The target detection and tracking system of claim 6, wherein the specific frame is any frame sequentially after the at least one reference frame, the tracking module builds the anti-scaling and anti-rotation feature by using a speeded-up robust features extraction algorithm, and the recognition module extracts a feature to be identified from the specific frame, compares the feature to be identified with the anti-scaling and anti-rotation feature, and judges that the object appears in the specific frame when the feature to be identified matches the anti-scaling and anti-rotation feature.
CN201310349397.0A 2013-06-26 2013-08-12 Target Detection And Tracking Method And System Pending CN104252629A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW102122760A TWI514327B (en) 2013-06-26 2013-06-26 Method and system for object detection and tracking
TW102122760 2013-06-26

Publications (1)

Publication Number Publication Date
CN104252629A true CN104252629A (en) 2014-12-31

Family

ID=52187508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310349397.0A Pending CN104252629A (en) 2013-06-26 2013-08-12 Target Detection And Tracking Method And System

Country Status (2)

Country Link
CN (1) CN104252629A (en)
TW (1) TWI514327B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105206055A (en) * 2015-09-24 2015-12-30 深圳市哈工大交通电子技术有限公司 Accident detection method for recognizing vehicle collision through traffic monitoring video
CN106961597A (en) * 2017-03-14 2017-07-18 深圳Tcl新技术有限公司 The target tracking display methods and device of panoramic video
CN108062531A (en) * 2017-12-25 2018-05-22 南京信息工程大学 A kind of video object detection method that convolutional neural networks are returned based on cascade
CN112396631A (en) * 2019-08-16 2021-02-23 纬创资通股份有限公司 Object tracking method and computer system thereof
CN116112782A (en) * 2022-05-25 2023-05-12 荣耀终端有限公司 Video recording method and related device

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
TWI636426B (en) * 2017-08-23 2018-09-21 財團法人國家實驗研究院 Method of tracking a person's face in an image
TWI831696B (en) * 2023-05-23 2024-02-01 晶睿通訊股份有限公司 Image analysis method and image analysis apparatus

Citations (4)

Publication number Priority date Publication date Assignee Title
CN1402551A (en) * 2001-08-07 2003-03-12 三星电子株式会社 Apparatus and method for automatically tracking mobile object
CN102542797A (en) * 2010-12-09 2012-07-04 财团法人工业技术研究院 Image-based traffic parameter detection system and method and computer program product
CN102903122A (en) * 2012-09-13 2013-01-30 西北工业大学 Video object tracking method based on feature optical flow and online ensemble learning
CN103116896A (en) * 2013-03-07 2013-05-22 中国科学院光电技术研究所 Automatic detection tracking method based on visual saliency model

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP2008237771A (en) * 2007-03-28 2008-10-09 Daito Giken:Kk Game stand

Non-Patent Citations (1)

Title
Ju Han, Kai-Kuang Ma: "Rotation-invariant and scale-invariant Gabor features for texture image retrieval", Image and Vision Computing *

Cited By (10)

Publication number Priority date Publication date Assignee Title
CN105206055A (en) * 2015-09-24 2015-12-30 深圳市哈工大交通电子技术有限公司 Accident detection method for recognizing vehicle collision through traffic monitoring video
CN105206055B (en) * 2015-09-24 2018-09-21 深圳市哈工大交通电子技术有限公司 A kind of accident detection method of Traffic Surveillance Video identification vehicle collision
CN106961597A (en) * 2017-03-14 2017-07-18 深圳Tcl新技术有限公司 The target tracking display methods and device of panoramic video
CN106961597B (en) * 2017-03-14 2019-07-26 深圳Tcl新技术有限公司 The target tracking display methods and device of panoramic video
CN108062531A (en) * 2017-12-25 2018-05-22 南京信息工程大学 A kind of video object detection method that convolutional neural networks are returned based on cascade
CN108062531B (en) * 2017-12-25 2021-10-19 南京信息工程大学 Video target detection method based on cascade regression convolutional neural network
CN112396631A (en) * 2019-08-16 2021-02-23 纬创资通股份有限公司 Object tracking method and computer system thereof
CN112396631B (en) * 2019-08-16 2023-08-25 纬创资通股份有限公司 Object tracking method and computer system thereof
CN116112782A (en) * 2022-05-25 2023-05-12 荣耀终端有限公司 Video recording method and related device
CN116112782B (en) * 2022-05-25 2024-04-02 荣耀终端有限公司 Video recording method and related device

Also Published As

Publication number Publication date
TWI514327B (en) 2015-12-21
TW201501080A (en) 2015-01-01

Similar Documents

Publication Publication Date Title
CN104252629A (en) Target Detection And Tracking Method And System
CN108960211B (en) Multi-target human body posture detection method and system
CN109815906B (en) Traffic sign detection method and system based on step-by-step deep learning
CN104951741A (en) Character recognition method and device thereof
CN111539907B (en) Image processing method and device for target detection
CN111191535B (en) Pedestrian detection model construction method based on deep learning and pedestrian detection method
CN102842036A (en) Intelligent multi-target detection method facing ship lock video monitoring
EP2813973A1 (en) Method and system for processing video image
US9947106B2 (en) Method and electronic device for object tracking in a light-field capture
CN103472907A (en) Method and system for determining operation area
CN114219829A (en) Vehicle tracking method, computer equipment and storage device
Min et al. Real‐time face recognition based on pre‐identification and multi‐scale classification
Li et al. License plate detection using convolutional neural network
Pirgazi et al. An End‐to‐End Deep Learning Approach for Plate Recognition in Intelligent Transportation Systems
Tsoukalas et al. Deep learning assisted visual tracking of evader-UAV
WO2018058573A1 (en) Object detection method, object detection apparatus and electronic device
WO2013017184A1 (en) Method and device for video surveillance
CN114495041A (en) Method, device, equipment and medium for measuring distance between vehicle and target object
CN106683113B (en) Feature point tracking method and device
CN114639159A (en) Moving pedestrian detection method, electronic device and robot
Morales Rosales et al. On-road obstacle detection video system for traffic accident prevention
CN113033551A (en) Object detection method, device, equipment and storage medium
Jin et al. A dedicated hardware architecture for real-time auto-focusing using an FPGA
CN110782425A (en) Image processing method, image processing device and electronic equipment
CN114550060A (en) Perimeter intrusion identification method and system and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20141231