CN101714256A - Omnibearing vision based method for identifying and positioning dynamic target - Google Patents
Omnibearing vision based method for identifying and positioning dynamic target
- Publication number
- CN101714256A (application number CN200910228580A)
- Authority
- CN
- China
- Prior art keywords
- particle
- image
- target
- omni
- delta
- Prior art date
- 2009-11-13
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The invention relates to an omnibearing (omnidirectional) vision based method for identifying and positioning a dynamic target, belonging to the technical field of dynamic image analysis. The method comprises the following steps: 1, acquiring an omnidirectional vision sequence image and preprocessing it to obtain a binary image that separates the moving target from the background area; 2, searching a local area by the optical flow method, matching feature points between adjacent frames of the image, and detecting the moving target of the image sequence; 3, estimating the motion state of the moving target by a particle filter algorithm and predicting the parameters of the moving target in subsequent frames to complete the tracking process. Identifying and positioning a dynamic target with this method significantly reduces the amount of calculation and improves accuracy.
Description
Technical field
The invention belongs to the technical field of dynamic image analysis, and relates to a target recognition and localization method based on omnidirectional vision.
Background technology
The basic task of dynamic image analysis is to detect motion information and to recognize and track moving targets in image sequences. It involves image processing, image analysis, artificial intelligence, pattern recognition, computer vision and other research fields, and is a very active branch of image processing and computer vision. It has been widely applied in industrial production, medical care, national defense construction and other fields, so research on it has important significance.
In order to recognize moving targets and track them, the optical flow field method is usually adopted: the optical flow field is extracted from image sequences containing moving targets collected in real time, the moving target regions with larger optical flow are filtered out, and the velocity of the moving target is calculated, thereby realizing tracking of the moving target.
Previous optical-flow-based object detection methods fall mainly into two classes: (1) differential optical flow techniques, which use the fundamental optical flow equation plus additional constraints to obtain a dense optical flow field from which the moving target is extracted; the deficiency of this method is its large computational load and weak real-time performance. (2) Feature optical flow techniques, which search for and match feature points in the image to obtain a sparse optical flow field; the real-time performance of target extraction is improved, but the insufficient amount of information easily causes targets to be missed. As for target tracking, previous approaches usually treat detection and tracking separately: detection is carried out first, and tracking then relies on target features. This increases the complexity of algorithm processing and complicates handling when targets enter or leave the scene.
Summary of the invention
The objective of the invention is to address the above deficiencies of the prior art. The invention proposes an effective method for recognizing and tracking a maneuvering target under omnidirectional vision. The method improves the real-time performance and robustness of recognition and tracking, and gives a mobile robot the combined capabilities of landmark-based autonomous navigation and maneuvering target tracking.
The technical solution used in the present invention is as follows:
A dynamic target recognition and localization method based on omnidirectional vision comprises the following steps:
Step 1: acquire an omnidirectional vision sequence image and preprocess it to obtain a binary image in which the moving target and the background are distinguished;
Step 2: perform a local area search with the optical flow method, match feature points between adjacent frames of the image, and detect the moving target of the image sequence;
Step 3: estimate the motion state of the target by a particle filter algorithm and predict the parameters of the moving target in subsequent frames to complete the tracking process.
As a preferred implementation, in the above dynamic target recognition and localization method based on omnidirectional vision, step 2 is carried out according to the following method. Let the moving image function f(x, y) be continuous in the variables x and y. At time t, the gray value of a point a = (x, y) on the image is f_t(x, y); at time t + Δt this point moves to a new position, its position on the image becoming (x + Δx, y + Δy), with gray value denoted f_{t+Δt}(x + Δx, y + Δy). The purpose of matching is to find the point corresponding to a such that f_t(x, y) = f_{t+Δt}(x + Δx, y + Δy) and such that, within a set M × N neighborhood of the point a = (x, y), the mean square error MSE(Δx, Δy) is minimal; the (Δx, Δy) that minimizes MSE(Δx, Δy) is the optimal matching point opt = (Δx, Δy). Writing the resulting least-squares conditions in matrix form as U · (Δx, Δy)ᵀ = V, where U and V are formed from the neighborhood gradient sums (given in the detailed description), yields the optimal matching point opt = (Δx, Δy) = U⁻¹V. By finding and matching feature points in the image in this way, the moving target of the image sequence is detected.
Step 3 is carried out according to the following method:
(1) According to the result of the second step, locate the initial target and obtain its initial motion parameters P_init = (P_init,x, P_init,y). Let each particle represent one possible motion state; take the particle count as N and the initial weight of each particle as w_i = 1, giving N possible motion state parameters P_i = (P_i,x, P_i,y), i ∈ {1, …, N}.
(2) Carry out a particle resampling process: eliminate particles with smaller weights and keep particles with larger weights.
(3) Enter the iterative process of the particle filter algorithm: from the second frame onward, apply the system state transition and the system observation to each particle, calculate the particle weights, and compute the weighted estimate over all particles to output the target state, completing the tracking process.
The state transition is carried out according to the following formulas: for particle N_i,
P_i,x,t = A_1 · P_i,x,t−1 + B_1 · w_i,t−1 and P_i,y,t = A_2 · P_i,y,t−1 + B_2 · w_i,t−1,
where A_1, A_2, B_1, B_2 are constants, A is taken as 1, B is the particle propagation radius, and w is a random number in [−1, 1];
The system observation is carried out according to the following method:
(1) after each particle's state transition, use the particle's new coordinates to compute a minimum mean absolute difference MAD_i;
(2) take the probability density function p to be a zero-mean Gaussian in MAD_i with standard deviation σ, where σ is a constant; the weight of each particle is then w_i = p(MAD_i);
(3) normalize the weight of each particle: w_i = w_i / Σ_{j=1…N} w_j;
(4) for further optimal estimation, assuming the posterior probability at time t is known, the tracking parameter P is expressed as the weighted sum P = Σ_{i=1…N} w_i · P_i.
Afterwards, set t = t + 1 and return to the resampling step.
The substantive distinguishing feature of the present invention is that the omnidirectional vision image is first preprocessed; feature points are then sought and matched in the image with the optical flow method to obtain a sparse optical flow field; finally, the parameters of the moving target in subsequent frames are predicted by the particle filter, the matching matrix between consecutive frames is established, and this matrix is analyzed to judge the moving target state, so that the moving target is tracked effectively. Compared with existing methods, the method proposed by the present invention significantly reduces the amount of computation and improves accuracy.
Description of drawings
Fig. 1 is the general flow chart of the optical flow-particle combined recognition and tracking method for the omnidirectional vision environment of the present invention.
Embodiment
Referring to Fig. 1, the dynamic target recognition and localization method based on omnidirectional vision of the present invention comprises the following steps:
Step 1: acquire the omnidirectional vision sequence image and preprocess it so that the target and the background are separated, in preparation for the subsequent optical flow field calculation. The image is first smoothed with a Gaussian low-pass filter, then gradient sharpening is applied to find the moving edges of the image target, and threshold segmentation is carried out to separate the target object from the background. A threshold is first selected directly from the histogram; for a sequence image the threshold is adjusted dynamically. Each pixel's gray value is then compared with this threshold: if it is greater than the threshold, the pixel's gray value is set to 255 (representing background), otherwise it is set to 0 (object), so that the moving target and the background are distinguished. After threshold segmentation the image becomes a binary image with only the two gray values 0 and 255.
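The following Python sketch illustrates this preprocessing chain (Gaussian smoothing, gradient sharpening, threshold segmentation). It is an illustration only: the kernel size, the Sobel operator, and the use of Otsu's method as a stand-in for the histogram-based, dynamically adjusted threshold are assumptions, not specified by the patent.

```python
import cv2
import numpy as np

def preprocess(gray_frame):
    """Step-1 sketch: smooth, sharpen, and threshold one grayscale frame."""
    # Gaussian low-pass pre-smoothing (kernel size and sigma are illustrative).
    smooth = cv2.GaussianBlur(gray_frame, (5, 5), 1.0)
    # Gradient sharpening to emphasise the moving edges; Sobel is one
    # common operator, the patent does not name a particular one.
    gx = cv2.Sobel(smooth, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(smooth, cv2.CV_32F, 0, 1)
    sharp = np.clip(smooth.astype(np.float32) + cv2.magnitude(gx, gy), 0, 255)
    sharp = sharp.astype(np.uint8)
    # Threshold segmentation: pixels above the threshold become 255
    # (background), the rest 0 (object). Otsu's histogram-based method
    # stands in for the dynamically adjusted threshold described above.
    _, binary = cv2.threshold(sharp, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```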
Step 2: perform a local area search with the optical flow method and match feature points between adjacent frames of the image.
For a sequence image the inter-frame time interval is very small, spatial points move little between two adjacent frames, and the spatial correlation of objects between the preceding and following frames is large.
Let the moving image function f(x, y) be continuous in the variables x and y. At time t, the gray value of a point a = (x, y) on the image is f_t(x, y); at time t + Δt the point moves to a new position, its position on the image becoming (x + Δx, y + Δy), with gray value denoted f_{t+Δt}(x + Δx, y + Δy). The purpose of matching is to find the point corresponding to a whose gray value equals f_t(x, y), i.e.
f_t(x, y) = f_{t+Δt}(x + Δx, y + Δy)    (1)
and to make the mean square error
MSE(Δx, Δy) = (1/(m·n)) · Σ [f_t(x, y) − f_{t+Δt}(x + Δx, y + Δy)]²
minimal within a set m × n neighborhood of the point a = (x, y), the sum running over that neighborhood.
The (Δx, Δy) that minimizes MSE(Δx, Δy) is the optimal matching point opt = (Δx, Δy).
Set the first derivative of MSE(Δx, Δy) with respect to (Δx, Δy) to zero:
∂MSE/∂Δx = 0,  ∂MSE/∂Δy = 0    (2)
Expanding f_{t+Δt}(x + Δx, y + Δy) by the Taylor formula about (x, y),
f_{t+Δt}(x + Δx, y + Δy) ≈ f_{t+Δt}(x, y) + Δx · f_x + Δy · f_y,
where f_x and f_y are the spatial gray-value gradients, and writing
f = f_t(x, y) − f_{t+Δt}(x, y),
equation (2) simplifies to the pair of linear equations
Σ (f − Δx · f_x − Δy · f_y) · f_x = 0,  Σ (f − Δx · f_x − Δy · f_y) · f_y = 0,
with the sums taken over the m × n neighborhood. Letting
U = Σ [ f_x²  f_x·f_y ; f_x·f_y  f_y² ],  V = Σ [ f·f_x ; f·f_y ],
the optimal matching point opt = (Δx, Δy) = U⁻¹V is obtained.
By finding and matching feature points in the image in this way, the moving target of the image sequence is detected. A numerical sketch of the matching step follows.
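A minimal sketch of this matching step, assuming grayscale frames stored as NumPy arrays; the window half-sizes are illustrative, and `np.gradient` stands in for whatever discrete gradient operator an implementation actually uses.

```python
import numpy as np

def match_point(f_t, f_t1, x, y, half_m=7, half_n=7):
    """Estimate the displacement (dx, dy) of feature point a = (x, y)
    between consecutive frames f_t and f_t1 via the closed form opt = U^-1 V."""
    win = np.s_[y - half_n:y + half_n + 1, x - half_m:x + half_m + 1]
    # Spatial gradients f_y, f_x of the second frame (rows are y).
    fy_all, fx_all = np.gradient(f_t1.astype(np.float64))
    fx, fy = fx_all[win], fy_all[win]
    # Inter-frame gray difference f = f_t(x, y) - f_{t+dt}(x, y).
    f = f_t.astype(np.float64)[win] - f_t1.astype(np.float64)[win]
    # Normal equations U * (dx, dy)^T = V, summed over the neighbourhood
    # (assumes the window contains non-degenerate gradients).
    U = np.array([[np.sum(fx * fx), np.sum(fx * fy)],
                  [np.sum(fx * fy), np.sum(fy * fy)]])
    V = np.array([np.sum(f * fx), np.sum(f * fy)])
    dx, dy = np.linalg.solve(U, V)   # opt = (dx, dy) = U^-1 V
    return dx, dy
```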
Step 3: using the effective features of the target, estimate the motion state of the target by the particle filter algorithm and predict the parameters of the moving target in subsequent frames to complete the tracking process.
First carry out particle initialization: locate the initial target block and obtain the target template (by manual initialization, automatic initialization, or the like), then obtain the initial state of the target, i.e. the state P_init = (P_init,x, P_init,y) at the moment it first appears. Take the particle count as N (each particle represents one possible motion state) and set the initial weight of each particle to w_i = 1, so that there are N possible motion state parameters P_i = (P_i,x, P_i,y), i ∈ {1, …, N}, where each P_i can be chosen as a point within a certain range around P_init, as in the sketch below.
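A sketch of this initialization, assuming a 2-D position state per particle; the particle count N and the spread radius around P_init are illustrative values.

```python
import numpy as np

def init_particles(p_init, n_particles=100, spread=10.0, rng=None):
    """Scatter N particles in a range around P_init, each with weight w_i = 1."""
    if rng is None:
        rng = np.random.default_rng()
    particles = np.asarray(p_init, dtype=float) + \
        rng.uniform(-spread, spread, size=(n_particles, 2))
    weights = np.ones(n_particles)   # initial weights w_i = 1
    return particles, weights
```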
Then carry out the particle resampling process: eliminate particles with smaller weights and keep particles with larger weights, for example as in the following sketch.
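One common way to realize this is multinomial resampling, sketched below; the patent does not fix a particular resampling scheme.

```python
import numpy as np

def resample(particles, weights, rng=None):
    """Redraw particles in proportion to their weights: low-weight
    particles die out, high-weight particles are duplicated."""
    if rng is None:
        rng = np.random.default_rng()
    probs = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=probs)
    return particles[idx].copy(), np.ones(len(particles))
```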
Finally, preset the number of iterations and enter the iterative process of the particle filter algorithm. From the second frame onward, apply the system state transition and the system observation to each particle, calculate the particle weights, and compute the weighted estimate over all particles to output the target state.
State transition: for particle N_i,
P_i,x,t = A_1 · P_i,x,t−1 + B_1 · w_i,t−1    (8)
P_i,y,t = A_2 · P_i,y,t−1 + B_2 · w_i,t−1    (9)
where A_1, A_2, B_1, B_2 are constants; A is generally taken as 1, B is the particle propagation radius (the range over which a particle can propagate during the system state transition), and w is a random number in [−1, 1].
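A sketch of equations (8) and (9), applied to all particles at once; B = 15 pixels is an illustrative choice of propagation radius.

```python
import numpy as np

def transition(particles, a=1.0, b=15.0, rng=None):
    """State transition: P_t = A * P_{t-1} + B * w, with A = 1, B the
    particle propagation radius, and w uniform in [-1, 1] per coordinate."""
    if rng is None:
        rng = np.random.default_rng()
    w = rng.uniform(-1.0, 1.0, size=particles.shape)
    return a * particles + b * w
```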
System observation: after each particle's state transition, the corresponding new coordinates are used to compute a minimum mean absolute difference MAD_i. Take the probability density function p to be a zero-mean Gaussian in MAD_i with standard deviation σ, where σ is a constant and MAD is the minimum mean absolute difference function; the weight of each particle is then w_i = p(MAD_i). Normalization: w_i = w_i / Σ_{j=1…N} w_j. For further optimal estimation, assuming the posterior probability at time t is known, the tracking parameter P can be expressed as the weighted sum P = Σ_{i=1…N} w_i · P_i. Afterwards, set t = t + 1 and return to the resampling step.
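The sketch below puts the observation and estimation step together, scoring each particle by the mean absolute difference (MAD) between the target template and the image patch at the particle's position. The unnormalized Gaussian form of the likelihood and σ = 10 are illustrative choices consistent with the description above, not values fixed by the patent.

```python
import numpy as np

def observe_and_estimate(particles, frame, template, sigma=10.0):
    """Weight particles by a Gaussian likelihood of their MAD score,
    normalise, and output the weighted mean as the tracked state."""
    h, w = template.shape
    tpl = template.astype(float)
    mads = np.full(len(particles), np.inf)
    for i, (px, py) in enumerate(particles.astype(int)):
        if px < 0 or py < 0:                 # particle left the image
            continue
        patch = frame[py:py + h, px:px + w]
        if patch.shape == tpl.shape:
            mads[i] = np.mean(np.abs(patch.astype(float) - tpl))
    weights = np.exp(-mads ** 2 / (2.0 * sigma ** 2))  # Gaussian in MAD_i
    weights /= weights.sum() + 1e-12                   # normalisation
    state = weights @ particles                        # P = sum_i w_i * P_i
    return state, weights
```

Each frame of the iteration then runs resample, transition, and observe_and_estimate in turn, setting t = t + 1 as described above.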
Claims (5)
1. A dynamic target recognition and localization method based on omnidirectional vision, comprising the following steps:
Step 1: acquire an omnidirectional vision sequence image and preprocess it to obtain a binary image in which the moving target and the background are distinguished;
Step 2: perform a local area search with the optical flow method, match feature points between adjacent frames of the image, and detect the moving target of the image sequence;
Step 3: estimate the motion state of the target by a particle filter algorithm and predict the parameters of the moving target in subsequent frames to complete the tracking process.
2. The dynamic target recognition and localization method based on omnidirectional vision according to claim 1, wherein step 2 is carried out according to the following method: let the moving image function f(x, y) be continuous in the variables x and y; at time t, the gray value of a point a = (x, y) on the image is f_t(x, y); at time t + Δt this point moves to a new position, its position on the image becoming (x + Δx, y + Δy), with gray value denoted f_{t+Δt}(x + Δx, y + Δy); the purpose of matching is to find the point corresponding to a such that f_t(x, y) = f_{t+Δt}(x + Δx, y + Δy) and such that, within a set M × N neighborhood of a = (x, y), the mean square error MSE(Δx, Δy) is minimal; the (Δx, Δy) minimizing MSE(Δx, Δy) is the optimal matching point opt = (Δx, Δy) = U⁻¹V, where U and V are formed from the neighborhood gradient sums as given in the description; by finding and matching feature points in the image in this way, the moving target of the image sequence is detected.
3. The dynamic target recognition and localization method based on omnidirectional vision according to claim 1, wherein step 3 is carried out according to the following method:
(1) according to the result of the second step, locate the initial target and obtain its initial motion parameters P_init = (P_init,x, P_init,y); let each particle represent one possible motion state, take the particle count as N and the initial weight of each particle as w_i = 1, giving N possible motion state parameters P_i = (P_i,x, P_i,y), i ∈ {1, …, N};
(2) carry out a particle resampling process: eliminate particles with smaller weights and keep particles with larger weights;
(3) enter the iterative process of the particle filter algorithm: from the second frame onward, apply the system state transition and the system observation to each particle, calculate the particle weights, and compute the weighted estimate over all particles to output the target state, completing the tracking process.
4. The dynamic target recognition and localization method based on omnidirectional vision according to claim 3, wherein the state transition is carried out according to the following formulas: for particle N_i,
P_i,x,t = A_1 · P_i,x,t−1 + B_1 · w_i,t−1 and P_i,y,t = A_2 · P_i,y,t−1 + B_2 · w_i,t−1,
wherein A_1, A_2, B_1, B_2 are constants, A is taken as 1, B is the particle propagation radius, and w is a random number in [−1, 1].
5. The dynamic target recognition and localization method based on omnidirectional vision according to claim 4, wherein the system observation is carried out according to the following method:
(1) after each particle's state transition, use the particle's new coordinates to compute a minimum mean absolute difference MAD_i;
(2) take the probability density function p to be a zero-mean Gaussian in MAD_i with standard deviation σ, where σ is a constant; the weight of each particle is then w_i = p(MAD_i);
(3) normalize the weight of each particle: w_i = w_i / Σ_{j=1…N} w_j.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009102285809A CN101714256B (en) | 2009-11-13 | 2009-11-13 | Omnibearing vision based method for identifying and positioning dynamic target |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009102285809A CN101714256B (en) | 2009-11-13 | 2009-11-13 | Omnibearing vision based method for identifying and positioning dynamic target |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101714256A (en) | 2010-05-26
CN101714256B CN101714256B (en) | 2011-12-14 |
Family
ID=42417873
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009102285809A Expired - Fee Related CN101714256B (en) | 2009-11-13 | 2009-11-13 | Omnibearing vision based method for identifying and positioning dynamic target |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101714256B (en) |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102110297B (en) * | 2011-03-02 | 2012-10-10 | 无锡慧眼电子科技有限公司 | Detection method based on accumulated light stream and double-background filtration |
CN102110297A (en) * | 2011-03-02 | 2011-06-29 | 无锡慧眼电子科技有限公司 | Detection method based on accumulated light stream and double-background filtration |
WO2015014111A1 (en) * | 2013-08-01 | 2015-02-05 | 华为技术有限公司 | Optical flow tracking method and apparatus |
US9536147B2 (en) | 2013-08-01 | 2017-01-03 | Huawei Technologies Co., Ltd. | Optical flow tracking method and apparatus |
CN104778677A (en) * | 2014-01-13 | 2015-07-15 | 联想(北京)有限公司 | Positioning method, device and equipment |
CN106462960A (en) * | 2014-04-23 | 2017-02-22 | 微软技术许可有限责任公司 | Collaborative alignment of images |
CN106483577A (en) * | 2015-09-01 | 2017-03-08 | 中国航天科工集团第四研究院指挥自动化技术研发与应用中心 | A kind of optical detecting gear |
CN105975911B (en) * | 2016-04-28 | 2019-04-19 | 大连民族大学 | Energy-aware based on filter moves well-marked target detection method |
CN105975911A (en) * | 2016-04-28 | 2016-09-28 | 大连民族大学 | Energy perception motion significance target detection algorithm based on filter |
CN106447696A (en) * | 2016-09-29 | 2017-02-22 | 郑州轻工业学院 | Bidirectional SIFT (scale invariant feature transformation) flow motion evaluation-based large-displacement target sparse tracking method |
CN106447696B (en) * | 2016-09-29 | 2017-08-25 | 郑州轻工业学院 | A kind of big displacement target sparse tracking that locomotion evaluation is flowed based on two-way SIFT |
CN106950985A (en) * | 2017-03-20 | 2017-07-14 | 成都通甲优博科技有限责任公司 | A kind of automatic delivery method and device |
CN107065866A (en) * | 2017-03-24 | 2017-08-18 | 北京工业大学 | A kind of Mobile Robotics Navigation method based on improvement optical flow algorithm |
CN107764271A (en) * | 2017-11-15 | 2018-03-06 | 华南理工大学 | A kind of photopic vision dynamic positioning method and system based on light stream |
CN107764271B (en) * | 2017-11-15 | 2023-09-26 | 华南理工大学 | Visible light visual dynamic positioning method and system based on optical flow |
CN108053446A (en) * | 2017-12-11 | 2018-05-18 | 北京奇虎科技有限公司 | Localization method, device and electronic equipment based on cloud |
CN108920997A (en) * | 2018-04-10 | 2018-11-30 | 国网浙江省电力有限公司信息通信分公司 | Judge that non-rigid targets whether there is the tracking blocked based on profile |
CN109255329A (en) * | 2018-09-07 | 2019-01-22 | 百度在线网络技术(北京)有限公司 | Determine method, apparatus, storage medium and the terminal device of head pose |
CN111147763A (en) * | 2019-12-29 | 2020-05-12 | 眸芯科技(上海)有限公司 | Image processing method based on gray value and application |
CN111951949A (en) * | 2020-01-21 | 2020-11-17 | 梅里医疗科技(洋浦)有限责任公司 | Intelligent nursing interaction system for intelligent ward |
CN111951949B (en) * | 2020-01-21 | 2021-11-09 | 武汉博科国泰信息技术有限公司 | Intelligent nursing interaction system for intelligent ward |
CN114347030A (en) * | 2022-01-13 | 2022-04-15 | 中通服创立信息科技有限责任公司 | Robot vision following method and vision following robot |
CN115962783A (en) * | 2023-03-16 | 2023-04-14 | 太原理工大学 | Positioning method of cutting head of heading machine and heading machine |
Also Published As
Publication number | Publication date |
---|---|
CN101714256B (en) | 2011-12-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101714256B (en) | Omnibearing vision based method for identifying and positioning dynamic target | |
CN108665481B (en) | Self-adaptive anti-blocking infrared target tracking method based on multi-layer depth feature fusion | |
EP3633615A1 (en) | Deep learning network and average drift-based automatic vessel tracking method and system | |
CN102385690B (en) | Target tracking method and system based on video image | |
Nieto et al. | Road environment modeling using robust perspective analysis and recursive Bayesian segmentation | |
CN102663429B (en) | Method for motion pattern classification and action recognition of moving target | |
CN112016445B (en) | Monitoring video-based remnant detection method | |
CN102903122B (en) | Video object tracking method based on feature optical flow and online ensemble learning | |
CN109191497A (en) | A kind of real-time online multi-object tracking method based on much information fusion | |
CN106600625A (en) | Image processing method and device for detecting small-sized living thing | |
CN105023278A (en) | Movable target tracking method and system based on optical flow approach | |
CN103971386A (en) | Method for foreground detection in dynamic background scenario | |
CN104036523A (en) | Improved mean shift target tracking method based on surf features | |
Elmezain et al. | Hand trajectory-based gesture spotting and recognition using HMM | |
CN106682573B (en) | A kind of pedestrian tracting method of single camera | |
CN110991397B (en) | Travel direction determining method and related equipment | |
CN103996292A (en) | Moving vehicle tracking method based on corner matching | |
Nayagam et al. | A survey on real time object detection and tracking algorithms | |
Qing et al. | A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation | |
CN110309729A (en) | Tracking and re-detection method based on anomaly peak detection and twin network | |
Kandil et al. | A comparative study between SIFT-particle and SURF-particle video tracking algorithms | |
CN109636834A (en) | Video frequency vehicle target tracking algorism based on TLD innovatory algorithm | |
CN117173792A (en) | Multi-person gait recognition system based on three-dimensional human skeleton | |
CN111862147A (en) | Method for tracking multiple vehicles and multiple human targets in video | |
CN115188081B (en) | Complex scene-oriented detection and tracking integrated method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20111214; Termination date: 20141113 |
EXPY | Termination of patent right or utility model |