CN109712170A - Environmental object tracking method, apparatus, computer device and storage medium - Google Patents
Environmental object tracking method, apparatus, computer device and storage medium
- Publication number
- CN109712170A (application number CN201811608710.7A)
- Authority
- CN
- China
- Prior art keywords
- feature
- point feature
- point
- line
- environmental objects
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
This application relates to an environmental object tracking method, apparatus, computer device and storage medium. The method includes: obtaining an environmental object image, and extracting object point features and object line features from the environmental object image; counting the point feature quantity and the line feature quantity; determining a point feature weight and a line feature weight according to the point feature quantity; weighting the point feature quantity and the line feature quantity according to the point feature weight and the line feature weight, to obtain a weighted point feature quantity and a weighted line feature quantity; extracting, from the object point features, target point features that satisfy the weighted point feature quantity, and extracting, from the object line features, target line features that satisfy the weighted line feature quantity; and tracking the environmental object according to the target point features and the target line features. This method can improve the accuracy of motion tracking.
Description
Technical field
This application relates to the technical field of motion tracking, and in particular to an environmental object tracking method, apparatus, computer device and storage medium.
Background art
Visual-inertial odometry (VIO) is a simultaneous localization and mapping (SLAM) algorithm that fuses visual data with inertial measurement unit (IMU) data. By fusing IMU data while performing motion tracking on visual data, visual-inertial odometry improves tracking performance at low cost, and is therefore widely used in fields such as autonomous driving, virtual reality, augmented reality and robot navigation.
However, existing visual-inertial odometry methods for tracking environmental objects use only the point features in the visual image when performing motion tracking. As a result, tracking performs poorly under illumination changes and in environments that lack texture or contain few point features. Conversely, when point features are abundant, the computational burden on the system becomes large, which also degrades tracking accuracy.
Existing environmental object tracking methods therefore suffer from low tracking accuracy.
Summary of the invention
In view of the above technical problems, it is necessary to provide an environmental object tracking method, apparatus, computer device and storage medium that can improve the accuracy of motion tracking.
An environmental object tracking method, the method comprising:
obtaining an environmental object image, and extracting object point features and object line features from the environmental object image;
counting the point feature quantity of the object point features, and counting the line feature quantity of the object line features;
determining, according to the point feature quantity, a point feature weight for the point feature quantity and a line feature weight for the line feature quantity;
weighting the point feature quantity and the line feature quantity according to the point feature weight and the line feature weight, to obtain a weighted point feature quantity and a weighted line feature quantity;
extracting, from the object point features, target point features that satisfy the weighted point feature quantity, and extracting, from the object line features, target line features that satisfy the weighted line feature quantity;
tracking the environmental object according to the target point features and the target line features, the environmental object being used by the system to perform motion tracking on a moving object and to determine the pose of the moving object.
In one embodiment, determining the point feature weight for the point feature quantity and the line feature weight for the line feature quantity according to the point feature quantity comprises:
obtaining a reference feature quantity for the object point features;
calculating the point feature ratio of the point feature quantity to the reference feature quantity;
determining the point feature weight according to the point feature ratio;
calculating the line feature weight according to the point feature weight.
In one embodiment, the method further comprises:
judging whether the environmental object image is a key frame;
when the environmental object image is a key frame, determining that the environmental object image is a key frame image;
judging whether a loop closure exists for the key frame image;
when a loop closure exists for the key frame image, generating an optimized pose for the previous key frame image.
In one embodiment, generating the optimized pose for the previous key frame image when a loop closure exists comprises:
obtaining the historical pose of the previous key frame image;
obtaining the inertial information of the previous key frame image, updating the historical pose according to the inertial information, and generating the optimized pose.
In one embodiment, the method further comprises:
adding the weighted point feature quantity and the weighted line feature quantity to obtain a feature total quantity;
when the feature total quantity is less than a preset quantity threshold, generating an optimized pose for the environmental object image.
In one embodiment, extracting the object point features comprises:
obtaining a preset segmentation scale;
dividing the environmental object image according to the segmentation scale to obtain sub-image areas;
extracting the point features of the object in each sub-image area to obtain sub-image point features;
generating the object point features from the sub-image point features.
In one embodiment, the inertial information includes at least one of acceleration, angular velocity, bias and a noise term.
An environmental object tracking apparatus, the apparatus comprising:
an obtaining module, configured to obtain an environmental object image and extract object point features and object line features from the environmental object image;
a statistics module, configured to count the point feature quantity of the object point features and the line feature quantity of the object line features;
a weight determining module, configured to determine, according to the point feature quantity, a point feature weight for the point feature quantity and a line feature weight for the line feature quantity;
a weighting adjustment module, configured to weight the point feature quantity and the line feature quantity according to the point feature weight and the line feature weight, to obtain a weighted point feature quantity and a weighted line feature quantity;
an extraction module, configured to extract, from the object point features, target point features that satisfy the weighted point feature quantity, and to extract, from the object line features, target line features that satisfy the weighted line feature quantity;
a tracking module, configured to track the environmental object according to the target point features and the target line features, the environmental object being used by the system to perform motion tracking on a moving object and to determine the pose of the moving object.
A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
obtaining an environmental object image, and extracting object point features and object line features from the environmental object image;
counting the point feature quantity of the object point features, and counting the line feature quantity of the object line features;
determining, according to the point feature quantity, a point feature weight for the point feature quantity and a line feature weight for the line feature quantity;
weighting the point feature quantity and the line feature quantity according to the point feature weight and the line feature weight, to obtain a weighted point feature quantity and a weighted line feature quantity;
extracting, from the object point features, target point features that satisfy the weighted point feature quantity, and extracting, from the object line features, target line features that satisfy the weighted line feature quantity;
tracking the environmental object according to the target point features and the target line features, the environmental object being used by the system to perform motion tracking on a moving object and to determine the pose of the moving object.
A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the following steps:
obtaining an environmental object image, and extracting object point features and object line features from the environmental object image;
counting the point feature quantity of the object point features, and counting the line feature quantity of the object line features;
determining, according to the point feature quantity, a point feature weight for the point feature quantity and a line feature weight for the line feature quantity;
weighting the point feature quantity and the line feature quantity according to the point feature weight and the line feature weight, to obtain a weighted point feature quantity and a weighted line feature quantity;
extracting, from the object point features, target point features that satisfy the weighted point feature quantity, and extracting, from the object line features, target line features that satisfy the weighted line feature quantity;
tracking the environmental object according to the target point features and the target line features, the environmental object being used by the system to perform motion tracking on a moving object and to determine the pose of the moving object.
With the above environmental object tracking method, apparatus, computer device and storage medium, an environmental object image is obtained and object point features and object line features are extracted from it; the point feature quantity and line feature quantity are weighted to obtain the target point features and target line features; and the environmental object is tracked using the target point features and target line features, so that the pose of the environmental object image is determined and motion tracking is realized. By tracking line features in addition to point features, the tracking effect under illumination changes and in environments that lack texture or contain few point features is improved. Meanwhile, when point features are abundant, the point feature quantity is reduced through weight adjustment, lowering the computational burden of the system and improving tracking accuracy. The low tracking accuracy of existing environmental object tracking methods is thereby addressed.
Brief description of the drawings
Fig. 1 is a flow diagram of an environmental object tracking method in one embodiment;
Fig. 2 is a structural block diagram of an environmental object tracking apparatus in one embodiment;
Fig. 3 is an internal structure diagram of a computer device in one embodiment.
Detailed description of the embodiments
In order to make the objects, technical solutions and advantages of the application clearer, the application is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the application, not to limit it.
In one embodiment, as shown in Fig. 1, an environmental object tracking method is provided, which includes the following steps:
Step 210: obtain an environmental object image, and extract object point features and object line features from the environmental object image.
Here, the environmental object image refers to an image of the external environment. In practical applications, a camera shoots the external environment to obtain the environmental object image, which contains the tracking object for motion tracking.
The object point features refer to corner features of the tracking object.
The object line features refer to line segment features of the tracking object.
In a specific implementation, the camera shoots the external environment to obtain at least one frame of environmental object image. One frame of environmental object image is then taken, placed into a sliding window, and subjected to feature recognition. Specifically, the object point features in the environmental object image are extracted with a FAST (Features from Accelerated Segment Test, a corner detection algorithm) detector, and the object line features are extracted with the LSD (Line Segment Detector) line segment detection algorithm. In addition, the object point features in the environmental object image may also be extracted with the scale-invariant feature transform (SIFT) algorithm.
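The FAST segment test named above can be illustrated with a minimal sketch (a simplification for illustration, not the patented extractor): a pixel counts as a corner if at least n contiguous pixels on a radius-3 circle around it are all brighter, or all darker, than the center by a threshold t. The threshold and n values below are conventional defaults, not taken from this filing.

```python
import numpy as np

# Bresenham circle of radius 3 around the candidate pixel (row, col offsets).
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, r, c, t=50, n=9):
    """Simplified FAST segment test: True if >= n contiguous circle pixels
    are all brighter than img[r,c]+t or all darker than img[r,c]-t."""
    center = int(img[r, c])
    ring = [int(img[r + dr, c + dc]) for dr, dc in CIRCLE]
    for sign in (1, -1):                  # brighter run, then darker run
        flags = [sign * (p - center) > t for p in ring]
        run, best = 0, 0
        for f in flags + flags:           # doubled list handles wraparound
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n:
            return True
    return False

# A bright square on a dark background: its corner should pass the test.
img = np.zeros((20, 20), dtype=np.uint8)
img[10:, 10:] = 255
print(is_fast_corner(img, 10, 10))  # corner of the square -> True
print(is_fast_corner(img, 5, 5))    # flat region -> False
```

In practice a library detector (e.g. OpenCV's FAST implementation) would be used instead; the sketch only shows the per-pixel decision the text refers to.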
Step 220: count the point feature quantity of the object point features, and count the line feature quantity of the object line features.
In a specific implementation, after the sliding window extracts the object point features and object line features, the point feature quantity of the object point features and the line feature quantity of the object line features are counted.
Step 230: determine, according to the point feature quantity, a point feature weight for the point feature quantity and a line feature weight for the line feature quantity.
In a specific implementation, after the point feature quantity and line feature quantity are obtained, the point feature weight is determined from the ratio of the point feature quantity to a prior (reference) feature quantity, and the line feature weight for the line feature quantity is then calculated from the point feature weight. The sum of the point feature weight and the line feature weight is 1.
Step 240: weight the point feature quantity and the line feature quantity according to the point feature weight and the line feature weight, to obtain a weighted point feature quantity and a weighted line feature quantity.
In a specific implementation, after the point feature weight and the line feature weight are obtained, the point feature quantity is weighted by the point feature weight to obtain the weighted point feature quantity, and the line feature quantity is weighted by the line feature weight to obtain the weighted line feature quantity. The weighting adjustment of the point feature quantity and line feature quantity is thereby realized.
Step 250: extract, from the object point features, target point features that satisfy the weighted point feature quantity, and extract, from the object line features, target line features that satisfy the weighted line feature quantity.
In a specific implementation, after the weighted point feature quantity and weighted line feature quantity are obtained, target point features satisfying the weighted point feature quantity are extracted from the object point features, and target line features satisfying the weighted line feature quantity are extracted from the object line features.
Step 260: track the environmental object according to the target point features and the target line features; the environmental object is used by the system to perform motion tracking on a moving object and to determine the pose of the moving object.
Here, the pose refers to the position and attitude of the moving object.
In a specific implementation, after the target point features and target line features are extracted, the environmental object is tracked according to them. Specifically, the target point features and target line features are tracked separately: the target point features are tracked with the KLT (Kanade-Lucas-Tomasi, an optical flow tracking method) algorithm, while the target line features are tracked by extracting the LBD (Line Band Descriptor) descriptors of the line features and matching with them. From the changes of the target point features and target line features in the time domain, and from the correspondences between the previous frame and the current frame, the motion information between two adjacent environmental object images is calculated, thereby determining the pose of the current environmental object image.
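The line-feature association just described rests on matching binary descriptors between frames. A minimal sketch of nearest-neighbor matching by Hamming distance follows; the descriptor contents are illustrative toy values, not actual LBD output, and the distance threshold is an assumption.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two equal-length binary descriptors."""
    return int(np.count_nonzero(a != b))

def match_descriptors(desc_prev, desc_curr, max_dist=2):
    """Greedy nearest-neighbor matching: for each previous-frame descriptor,
    pick the current-frame descriptor with the smallest Hamming distance,
    rejecting matches above max_dist. Returns (prev_idx, curr_idx) pairs."""
    matches = []
    for i, d in enumerate(desc_prev):
        dists = [hamming(d, c) for c in desc_curr]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j))
    return matches

# Illustrative 8-bit binary descriptors for three lines in two frames.
prev = np.array([[1, 0, 1, 1, 0, 0, 1, 0],
                 [0, 1, 0, 0, 1, 1, 0, 1],
                 [1, 1, 1, 0, 0, 0, 1, 1]])
curr = np.array([[0, 1, 0, 0, 1, 1, 0, 1],   # matches prev[1] exactly
                 [1, 0, 1, 1, 0, 0, 1, 1]])  # one bit away from prev[0]
print(match_descriptors(prev, curr, max_dist=1))  # [(0, 1), (1, 0)]
```

Real LBD descriptors are much longer and matching is usually mutual-nearest-neighbor checked; this sketch only shows the distance-and-threshold association step.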
In the above environmental object tracking method, an environmental object image is obtained and object point features and object line features are extracted from it; the point feature quantity and line feature quantity are weighted to obtain the target point features and target line features; and the environmental object is tracked using the target point features and target line features, so that the pose of the environmental object image is determined and motion tracking is realized. By tracking line features in addition to point features, the motion tracking effect under illumination changes and in environments that lack texture or contain few point features is improved. Meanwhile, when point features are abundant, the point feature quantity is reduced through weight adjustment, lowering the computational burden of the system and improving tracking accuracy. The low tracking accuracy of existing environmental object tracking methods is thereby addressed.
In another embodiment, determining the point feature weight for the point feature quantity and the line feature weight for the line feature quantity according to the point feature quantity comprises:
obtaining a reference feature quantity for the object point features; calculating the point feature ratio of the point feature quantity to the reference feature quantity; determining the point feature weight according to the point feature ratio; and calculating the line feature weight according to the point feature weight.
In a specific implementation, the reference feature quantity is set according to the scene; for example, in an indoor environment the reference feature quantity is set to 300. The point feature ratio of the point feature quantity to the reference feature quantity is then calculated, and the point feature weight is determined from the numerical range in which the ratio falls. More specifically, the point feature weight s1 takes different values depending on the range of the point feature ratio Ti^p / T, where s1 is the point feature weight, Ti^p is the point feature quantity, T is the reference feature quantity, and Ti^p / T is the point feature ratio. For example, when the calculated point feature ratio is 0.9, the ratio is greater than 0.8, so the point feature weight is 0.6. After the point feature weight is calculated, the line feature weight is calculated from it as:
s2 = 1 - s1;
where s2 is the line feature weight and s1 is the point feature weight.
In addition, after the line feature weight is calculated, the point feature quantity and line feature quantity are weighted by the point feature weight and line feature weight to obtain the weighted point feature quantity and weighted line feature quantity, which are added to give the feature total quantity. The feature total quantity Ti is:
Ti = s1 Ti^p + s2 Ti^l;
where Ti^l is the line feature quantity, Ti is the feature total quantity, s1 Ti^p is the weighted point feature quantity, and s2 Ti^l is the weighted line feature quantity.
In the technical solution of this embodiment, a reference feature quantity for the object point features is obtained; the point feature ratio of the point feature quantity to the reference feature quantity is calculated; the point feature weight is determined from the point feature ratio; and the line feature weight is calculated from the point feature weight. By measuring the current point feature quantity against the reference through the point feature ratio, and adjusting the values of the point feature quantity and line feature quantity accordingly, the point feature quantity and line feature quantity are balanced within a limited feature total quantity, which reduces the computational burden of the system and avoids degrading the accuracy of motion tracking.
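The ratio-to-weight mapping and the weighted totals above can be sketched as follows. Only the bracket "ratio > 0.8 gives s1 = 0.6" is spelled out in the text; the lower brackets in this sketch are illustrative assumptions.

```python
def point_feature_weight(num_points, reference=300):
    """Map the point feature ratio Ti^p / T to a point feature weight s1.
    Only the ratio > 0.8 -> 0.6 bracket comes from the text; the lower
    brackets are illustrative placeholders."""
    ratio = num_points / reference
    if ratio > 0.8:
        return 0.6
    elif ratio > 0.4:      # assumed bracket
        return 0.5
    else:                  # assumed bracket: few points, lean on line features
        return 0.4

def weighted_totals(num_points, num_lines, reference=300):
    """Compute s1, s2 = 1 - s1, and Ti = s1*Ti^p + s2*Ti^l."""
    s1 = point_feature_weight(num_points, reference)
    s2 = 1.0 - s1                      # weights sum to 1
    wp = s1 * num_points               # weighted point feature quantity
    wl = s2 * num_lines                # weighted line feature quantity
    return wp, wl, wp + wl             # feature total quantity Ti

wp, wl, total = weighted_totals(270, 40)   # ratio 270/300 = 0.9 -> s1 = 0.6
print(wp, wl, total)                       # 162.0 16.0 178.0
```

The example reproduces the worked case in the text: a ratio of 0.9 yields s1 = 0.6 and s2 = 0.4.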
In another embodiment, the method further comprises: judging whether the environmental object image is a key frame; when the environmental object image is a key frame, determining that it is a key frame image; judging whether a loop closure exists for the key frame image; and, when a loop closure exists for the key frame image, generating an optimized pose for the previous key frame image.
In a specific implementation, the first frame of environmental object image captured by the camera is taken as the initial key frame. Whether an environmental object image is a key frame is judged from the feature total quantity extracted from the image and the frame-number difference between the current environmental object image and the preceding key frame image. Specifically, when the feature total quantity extracted from the environmental object image is greater than a preset feature threshold, and the frame-number difference between the environmental object image and the preceding key frame image falls within a preset difference range, the current environmental object image is determined to be a key frame image. Loop closure detection is then performed on the key frame image with the DBoW2 (bags of binary words for fast place recognition in image sequences, a bag-of-words algorithm) algorithm: the features in the current key frame image are matched against the features in preceding key frame images to judge whether the position of the current key frame image is the same as that of a preceding key frame image, i.e. whether a loop closure exists. When a loop closure exists for the key frame image, the sliding window is repositioned to the previous key frame image, and the previously predicted pose of that key frame image is optimized and updated to generate its optimized pose. When no loop closure exists for the key frame image, the feature descriptors of the key frame image are obtained, the three-dimensional spatial position of the moving object is obtained from the data measured by the inertial measurement unit, and a key frame image database is built from the mapping between the key frame image, the above feature descriptors and the above three-dimensional spatial position.
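The key-frame test just described combines a feature-count threshold with a frame-gap range; a minimal sketch follows. The concrete threshold and gap values are assumptions, not values from this filing.

```python
def is_key_frame(feature_total, frame_gap, feature_threshold=100, gap_range=(5, 30)):
    """A frame is a key frame when enough weighted features were extracted
    AND its distance (in frames) from the previous key frame lies in a
    preset range. Threshold and range values here are illustrative."""
    enough_features = feature_total > feature_threshold
    gap_ok = gap_range[0] <= frame_gap <= gap_range[1]
    return enough_features and gap_ok

print(is_key_frame(178, 10))   # True: rich features, reasonable gap
print(is_key_frame(178, 2))    # False: too close to the last key frame
print(is_key_frame(40, 10))    # False: too few features
```

Frames passing this test would then go to loop closure detection (e.g. via DBoW2) as the text describes.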
In the technical solution of this embodiment, whether the environmental object image is a key frame is judged; when it is, it is determined to be a key frame image; whether a loop closure exists for the key frame image is judged; and when a loop closure exists, the pose of the key frame image at the loop closure is optimized. This reduces the error produced when the system performs motion tracking from images, and improves the accuracy of motion tracking of the moving object.
In another embodiment, generating the optimized pose for the previous key frame image when a loop closure exists comprises:
obtaining the historical pose of the previous key frame image; obtaining the inertial information of the previous key frame image; updating the historical pose according to the inertial information; and generating the optimized pose.
Here, the historical pose refers to the pose of the moving object estimated using only the visual odometry (VO) algorithm.
The inertial information refers to information measured by the inertial measurement unit.
In a specific implementation, when a loop closure exists for the key frame image, the sliding window is repositioned to the previous key frame image, and the inertial information of the previous key frame image is obtained from the above key frame database. An EKF (Extended Kalman Filter) update is performed on the historical pose of the previous key frame image according to this inertial information. Specifically, the historical pose and the inertial information are loosely coupled with the EKF algorithm to compute the pose of the moving object, yielding the optimized pose. The optimized pose is more accurate than the historical pose.
In the technical solution of this embodiment, the historical pose of the previous key frame image is obtained, the inertial information of the previous key frame image is obtained, and the historical pose is updated according to the inertial information to generate the optimized pose. Inertial information and visual information are thereby fused, which improves the accuracy of the optimized pose, reduces the error produced when the system performs motion tracking from images, and in turn improves the accuracy of motion tracking of the moving object.
In another embodiment, the method further comprises: adding the weighted point feature quantity and the weighted line feature quantity to obtain a feature total quantity; and, when the feature total quantity is less than a preset quantity threshold, generating an optimized pose for the environmental object image.
In a specific implementation, the weighted point feature quantity and the weighted line feature quantity are added to obtain the feature total quantity. When the feature total quantity is less than the preset quantity threshold, an optimized pose is generated for the environmental object image. Specifically, the sliding window is repositioned to the environmental object image, and the inertial information of the environmental object image is obtained from the above key frame database. An EKF update is performed on the historical pose of the environmental object image according to this inertial information; more specifically, the historical pose and the inertial information are loosely coupled with the EKF algorithm to compute the pose of the moving object, yielding the optimized pose. In addition, the environmental object image is determined to be a key frame image.
In the technical solution of this embodiment, when the total quantity of weighted point features and weighted line features detected in the environmental object image is too low, the current inertial information is fused in and the pose of the environmental object image is updated to generate the optimized pose. This prevents motion tracking from failing because the moving object is affected by illumination or moves too fast, and improves the accuracy of motion tracking of the moving object.
In another embodiment, extracting the object point features comprises:
obtaining a preset segmentation scale; dividing the environmental object image according to the segmentation scale to obtain sub-image areas; extracting the point features of the object in each sub-image area to obtain sub-image point features; and generating the object point features from the sub-image point features.
Here, the segmentation scale refers to the scale at which a frame of environmental object image is divided.
In a specific implementation, a preset segmentation scale is obtained; for example, the segmentation scale may be 64 × 48. The environmental object image is divided according to this segmentation scale into several sub-image areas with a resolution of 64 × 48. Then, for each sub-image area, the point features in the sub-image area are extracted with the FAST detector to obtain the sub-image point features. Finally, the sub-image point features are aggregated to generate the object point features.
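The grid segmentation step can be sketched with NumPy: tile the frame at the 64 × 48 segmentation scale, after which a per-tile extractor would keep the best corners of each tile so that features spread evenly across the image. Only the tile iteration is shown; the extractor itself and the handling of edge remainders are simplifications.

```python
import numpy as np

def split_into_tiles(img, tile_w=64, tile_h=48):
    """Divide an image into non-overlapping tile_w x tile_h sub-image areas.
    Returns (top, left, tile) triples; edge remainders are ignored here."""
    h, w = img.shape[:2]
    tiles = []
    for top in range(0, h - tile_h + 1, tile_h):
        for left in range(0, w - tile_w + 1, tile_w):
            tiles.append((top, left, img[top:top + tile_h, left:left + tile_w]))
    return tiles

img = np.zeros((480, 640), dtype=np.uint8)   # one VGA frame
tiles = split_into_tiles(img)
print(len(tiles), tiles[0][2].shape)          # 100 (48, 64)
```

A 640 × 480 frame yields a 10 × 10 grid of sub-image areas; running the point detector per tile is what gives the uniform feature coverage the embodiment aims at.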
In the technical solution of this embodiment, the environmental object image is divided according to the preset segmentation scale into several sub-image areas; point features are then recognized in each sub-image area to obtain sub-image point features; and finally the object point features are generated from the sub-image point features. This improves the uniformity of the point features recognized by the system for tracking, and thereby improves the accuracy of the system's motion tracking of the moving object.
In another embodiment, the inertial information includes at least one of acceleration, angular velocity, bias and a noise term.
In a specific implementation, the detection data of the inertial measurement unit is obtained, and at least one of acceleration, angular velocity, bias and a noise term is calculated from the detection data with a pre-integration model.
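The pre-integration model named above can be sketched in one dimension: accumulate velocity, position and heading increments from bias-corrected accelerometer and gyroscope samples between two camera frames. This is a Euler-integration toy with no noise term or rotation handling, illustrative only.

```python
def preintegrate(accels, gyros, dt, acc_bias=0.0, gyro_bias=0.0):
    """1-D IMU pre-integration between two camera frames: returns the
    velocity, position and angle increments (Euler integration sketch)."""
    dv = dp = dtheta = 0.0
    for a, w in zip(accels, gyros):
        dp += dv * dt                      # position uses velocity before update
        dv += (a - acc_bias) * dt
        dtheta += (w - gyro_bias) * dt
    return dv, dp, dtheta

# Constant 1 m/s^2 acceleration and 0.2 rad/s turn rate over 10 samples at 10 Hz.
dv, dp, dtheta = preintegrate([1.0] * 10, [0.2] * 10, 0.1)
print(dv, dp, dtheta)   # approximately 1.0, 0.45, 0.2
```

In a real VIO pipeline these increments (together with their covariance) are what the EKF consumes when fusing inertial information with the visual pose, as the surrounding embodiments describe.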
In the technical solution of this embodiment, the inertial information includes at least one of acceleration, angular velocity, bias and a noise term. Data such as acceleration, angular velocity, bias and the noise term are used to perform EKF optimization on the previously calculated pose, which improves the accuracy of the optimized pose computed by the system and in turn improves the system's motion tracking performance for the moving object.
It should be understood that, although the steps in the flow chart of Fig. 1 are shown in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be executed in other orders. Moreover, at least some of the steps in Fig. 1 may include several sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential, and they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 2, an environmental object tracking apparatus is provided, comprising:
an obtaining module 310, configured to obtain an environmental object image and extract object point features and object line features from the environmental object image;
a statistics module 320, configured to count the point feature quantity of the object point features and the line feature quantity of the object line features;
a weight determining module 330, configured to determine, according to the point feature quantity, a point feature weight for the point feature quantity and a line feature weight for the line feature quantity;
a weighting adjustment module 340, configured to weight the point feature quantity and the line feature quantity according to the point feature weight and the line feature weight, to obtain a weighted point feature quantity and a weighted line feature quantity;
an extraction module 350, configured to extract, from the object point features, target point features that satisfy the weighted point feature quantity, and to extract, from the object line features, target line features that satisfy the weighted line feature quantity;
a tracking module 360, configured to track the environmental object according to the target point features and the target line features, the environmental object being used by the system to perform motion tracking on a moving object and to determine the pose of the moving object.
In one embodiment, the above weight determining module 330 comprises:
a reference feature acquisition submodule, configured to obtain a reference feature quantity of the object point feature;
a ratio calculation submodule, configured to calculate the point feature ratio of the point feature quantity to the reference feature quantity;
a point feature weight submodule, configured to determine the point feature weight according to the point feature ratio;
a line feature weight submodule, configured to calculate the line feature weight according to the point feature weight.
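The description leaves the exact weight functions open: it states only that the point feature weight follows from the point feature ratio, and the line feature weight from the point feature weight. The sketch below assumes one plausible choice (clamping the ratio to [0, 1] and making the two weights complementary); both of those choices are assumptions that go beyond the text.

```python
def determine_weights(point_count, reference_count):
    """Sketch of the weight determining module (330).

    The mapping from ratio to weight is an assumption: the patent only
    states that the point feature weight is determined from the point
    feature ratio, and the line feature weight from the point feature
    weight.
    """
    ratio = point_count / reference_count     # point feature ratio
    point_weight = min(1.0, max(0.0, ratio))  # assumed: clamp to [0, 1]
    line_weight = 1.0 - point_weight          # assumed: complementary weights
    return point_weight, line_weight
```

Under these assumptions, a scene where 150 points are found against a reference of 200 yields a point feature weight of 0.75 and a line feature weight of 0.25, i.e. line features are weighted more heavily as point features become scarcer.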
In one embodiment, the above environmental object tracking device further comprises:
a key frame judgment module, configured to judge whether the environmental object image is a key frame;
a key frame determining module, configured to determine, when the environmental object image is a key frame, that the environmental object image is a key frame image;
a loop closure judgment module, configured to judge whether a loop closure exists for the key frame image;
a first optimization module, configured to generate, when a loop closure exists for the key frame image, the optimized pose corresponding to the previous key frame image.
In one embodiment, the above first optimization module comprises:
a first acquisition submodule, configured to obtain the history pose corresponding to the previous key frame image;
a generation submodule, configured to obtain the inertia information corresponding to the previous key frame image, to update the history pose according to the inertia information, and to generate the optimized pose.
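As a rough illustration of updating a history pose with inertia information, the sketch below applies pre-integrated inertial deltas as a plain prediction step. The patent's EKF-based optimization is not detailed at this point, so the pose representation (rotation matrix, position, velocity) and the gravity handling here are assumptions, not the claimed update rule.

```python
import numpy as np

def update_history_pose(R_hist, p_hist, v_hist, dR, dv, dp, dt,
                        g=np.array([0.0, 0.0, -9.81])):
    """Apply pre-integrated inertia deltas (dR, dv, dp) to a history pose.

    A plain propagation sketch: the history rotation carries the body-frame
    deltas into the world frame, and gravity acts over the interval dt.
    The EKF correction step of the patent is not modeled here.
    """
    p_new = p_hist + v_hist * dt + 0.5 * g * dt**2 + R_hist @ dp
    v_new = v_hist + g * dt + R_hist @ dv
    R_new = R_hist @ dR
    return R_new, p_new, v_new
```

With an identity history pose, zero velocity, gravity disabled, and a pure position delta, the updated pose simply moves by that delta.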
In one embodiment, the above environmental object tracking device further comprises:
a feature total computing module, configured to add the weighted point feature quantity and the weighted line feature quantity to obtain a feature total;
a second optimization module, configured to generate, when the feature total is less than a preset quantity threshold, the optimized pose corresponding to the environmental object image.
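The weighted adjustment and the feature-total check can be sketched as follows; the rounding of the weighted counts and the threshold value of 100 are assumptions that the text does not fix.

```python
def weighted_adjust(point_qty, line_qty, point_weight, line_weight):
    """Weighted adjustment (module 340): scale the raw feature counts by
    their weights to obtain the weighted quantities. Rounding to integers
    is an assumption; the patent does not say how fractional counts are
    handled."""
    return round(point_qty * point_weight), round(line_qty * line_weight)

def needs_pose_optimization(weighted_point_qty, weighted_line_qty, threshold=100):
    """Second optimization trigger: add the weighted point feature quantity
    and the weighted line feature quantity; when the feature total is below
    the preset quantity threshold (value assumed here), the optimized pose
    for the current environmental object image is generated."""
    feature_total = weighted_point_qty + weighted_line_qty
    return feature_total < threshold
```

For instance, 200 points and 80 lines with weights 0.75 and 0.25 give weighted quantities of 150 and 20; their total of 170 is above the assumed threshold, so no extra pose optimization is triggered.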
In one embodiment, the above obtaining module 310 comprises:
a second acquisition submodule, configured to obtain a preset segmentation scale;
an image division submodule, configured to divide the environmental object image according to the segmentation scale to obtain sub-image areas;
an extraction submodule, configured to extract the point features of objects in the sub-image areas to obtain sub-image point features;
a generation submodule, configured to generate the object point feature according to the sub-image point features.
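A sketch of the grid-based extraction the obtaining module describes: the image is divided by a preset segmentation scale into sub-image areas, and the strongest point responses are kept per area, which is what spreads the extracted point features uniformly over the image. The use of a precomputed response map is an assumption; a real system would run a corner detector (e.g. FAST or Harris) inside each cell.

```python
import numpy as np

def extract_uniform_points(response, scale, per_cell=1):
    """Divide a corner-response map into scale x scale sub-image areas and
    keep the strongest `per_cell` responses in each area, so the selected
    point features are distributed uniformly over the image (a sketch)."""
    h, w = response.shape
    ch, cw = h // scale, w // scale
    points = []
    for i in range(scale):
        for j in range(scale):
            cell = response[i*ch:(i+1)*ch, j*cw:(j+1)*cw]
            # flattened indices of the strongest responses in this cell
            strongest = np.argsort(cell, axis=None)[-per_cell:]
            for idx in strongest:
                r, c = divmod(int(idx), cw)
                points.append((i*ch + r, j*cw + c))
    return points
```

With scale = 2 every quadrant contributes a point, so a strong corner in one quadrant cannot crowd out the others, unlike picking the globally strongest responses.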
For specific limitations on the environmental object tracking device, reference may be made to the limitations on the environmental object tracking method above, and details are not repeated here. Each module in the above environmental object tracking device may be implemented wholly or partly by software, hardware, or a combination thereof. The above modules may be embedded, in hardware form, in or independent of a processor in a computer device, or stored, in software form, in a memory in the computer device, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Fig. 3. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store environmental object tracking data. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements an environmental object tracking method.
Those skilled in the art will understand that the structure shown in Fig. 3 is only a block diagram of the partial structure relevant to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program. The processor, when executing the computer program, implements the following steps:
Step 210, obtaining an environmental object image, and extracting an object point feature and an object line feature from the environmental object image;
Step 220, counting the point feature quantity of the object point feature, and counting the line feature quantity of the object line feature;
Step 230, determining, according to the point feature quantity, the point feature weight of the point feature quantity and the line feature weight of the line feature quantity;
Step 240, performing weighted adjustment on the point feature quantity and the line feature quantity according to the point feature weight and the line feature weight, to obtain a weighted point feature quantity and a weighted line feature quantity;
Step 250, extracting, from the object point feature, a target point feature satisfying the weighted point feature quantity, and extracting, from the object line feature, a target line feature satisfying the weighted line feature quantity;
Step 260, tracking an environmental object according to the target point feature and the target line feature, the environmental object being used by a system to perform motion tracking on a moving object and to determine the pose corresponding to the moving object.
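Step 250 can be illustrated as a simple top-N selection, assuming each extracted feature carries a response score; the actual selection criterion is not specified in the text.

```python
def select_target_features(features, weighted_qty):
    """Step 250 sketch: keep the `weighted_qty` strongest features.

    Assumes each feature is a (score, data) pair; picking by descending
    score is an assumption, since the patent does not state the criterion
    used to satisfy the weighted quantity."""
    ranked = sorted(features, key=lambda f: f[0], reverse=True)
    return ranked[:weighted_qty]
```

The same helper applies to both point and line features: the weighted point feature quantity caps the target point features, and the weighted line feature quantity caps the target line features.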
In one embodiment, the processor, when executing the computer program, further implements the following steps:
obtaining a reference feature quantity of the object point feature;
calculating the point feature ratio of the point feature quantity to the reference feature quantity;
determining the point feature weight according to the point feature ratio;
calculating the line feature weight according to the point feature weight.
In one embodiment, the processor, when executing the computer program, further implements the following steps:
judging whether the environmental object image is a key frame;
when the environmental object image is a key frame, determining that the environmental object image is a key frame image;
judging whether a loop closure exists for the key frame image;
when a loop closure exists for the key frame image, generating the optimized pose corresponding to the previous key frame image.
In one embodiment, the processor, when executing the computer program, further implements the following steps:
obtaining the history pose corresponding to the previous key frame image;
obtaining the inertia information corresponding to the previous key frame image, updating the history pose according to the inertia information, and generating the optimized pose.
In one embodiment, the processor, when executing the computer program, further implements the following steps:
adding the weighted point feature quantity and the weighted line feature quantity to obtain a feature total;
when the feature total is less than a preset quantity threshold, generating the optimized pose corresponding to the environmental object image.
In one embodiment, the processor, when executing the computer program, further implements the following steps:
obtaining a preset segmentation scale;
dividing the environmental object image according to the segmentation scale to obtain sub-image areas;
extracting the point features of objects in the sub-image areas to obtain sub-image point features;
generating the object point feature according to the sub-image point features.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. The computer program, when executed by a processor, implements the following steps:
Step 210, obtaining an environmental object image, and extracting an object point feature and an object line feature from the environmental object image;
Step 220, counting the point feature quantity of the object point feature, and counting the line feature quantity of the object line feature;
Step 230, determining, according to the point feature quantity, the point feature weight of the point feature quantity and the line feature weight of the line feature quantity;
Step 240, performing weighted adjustment on the point feature quantity and the line feature quantity according to the point feature weight and the line feature weight, to obtain a weighted point feature quantity and a weighted line feature quantity;
Step 250, extracting, from the object point feature, a target point feature satisfying the weighted point feature quantity, and extracting, from the object line feature, a target line feature satisfying the weighted line feature quantity;
Step 260, tracking an environmental object according to the target point feature and the target line feature, the environmental object being used by a system to perform motion tracking on a moving object and to determine the pose corresponding to the moving object.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
obtaining a reference feature quantity of the object point feature;
calculating the point feature ratio of the point feature quantity to the reference feature quantity;
determining the point feature weight according to the point feature ratio;
calculating the line feature weight according to the point feature weight.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
judging whether the environmental object image is a key frame;
when the environmental object image is a key frame, determining that the environmental object image is a key frame image;
judging whether a loop closure exists for the key frame image;
when a loop closure exists for the key frame image, generating the optimized pose corresponding to the previous key frame image.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
obtaining the history pose corresponding to the previous key frame image;
obtaining the inertia information corresponding to the previous key frame image, updating the history pose according to the inertia information, and generating the optimized pose.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
adding the weighted point feature quantity and the weighted line feature quantity to obtain a feature total;
when the feature total is less than a preset quantity threshold, generating the optimized pose corresponding to the environmental object image.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
obtaining a preset segmentation scale;
dividing the environmental object image according to the segmentation scale to obtain sub-image areas;
extracting the point features of objects in the sub-image areas to obtain sub-image point features;
generating the object point feature according to the sub-image point features.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium, and the computer program, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application patent shall be subject to the appended claims.
Claims (10)
1. An environmental object tracking method, the method comprising:
obtaining an environmental object image, and extracting an object point feature and an object line feature from the environmental object image;
counting the point feature quantity of the object point feature, and counting the line feature quantity of the object line feature;
determining, according to the point feature quantity, the point feature weight of the point feature quantity and the line feature weight of the line feature quantity;
performing weighted adjustment on the point feature quantity and the line feature quantity according to the point feature weight and the line feature weight, to obtain a weighted point feature quantity and a weighted line feature quantity;
extracting, from the object point feature, a target point feature satisfying the weighted point feature quantity, and extracting, from the object line feature, a target line feature satisfying the weighted line feature quantity;
tracking an environmental object according to the target point feature and the target line feature, the environmental object being used by a system to perform motion tracking on a moving object and to determine the pose corresponding to the moving object.
2. The method according to claim 1, wherein the determining, according to the point feature quantity, the point feature weight of the point feature quantity and the line feature weight of the line feature quantity comprises:
obtaining a reference feature quantity of the object point feature;
calculating the point feature ratio of the point feature quantity to the reference feature quantity;
determining the point feature weight according to the point feature ratio;
calculating the line feature weight according to the point feature weight.
3. The method according to claim 1, further comprising:
judging whether the environmental object image is a key frame;
when the environmental object image is a key frame, determining that the environmental object image is a key frame image;
judging whether a loop closure exists for the key frame image;
when a loop closure exists for the key frame image, generating the optimized pose corresponding to the previous key frame image.
4. The method according to claim 3, wherein the generating, when a loop closure exists for the key frame image, the optimized pose corresponding to the previous key frame image comprises:
obtaining the history pose corresponding to the previous key frame image;
obtaining the inertia information corresponding to the previous key frame image, updating the history pose according to the inertia information, and generating the optimized pose.
5. The method according to claim 1, further comprising:
adding the weighted point feature quantity and the weighted line feature quantity to obtain a feature total;
when the feature total is less than a preset quantity threshold, generating the optimized pose corresponding to the environmental object image.
6. The method according to claim 1, wherein the extracting an object point feature comprises:
obtaining a preset segmentation scale;
dividing the environmental object image according to the segmentation scale to obtain sub-image areas;
extracting the point features of objects in the sub-image areas to obtain sub-image point features;
generating the object point feature according to the sub-image point features.
7. The method according to claim 1, wherein the inertia information includes at least one of acceleration, angular velocity, offset, and a noise term.
8. An environmental object tracking device, wherein the device comprises:
an obtaining module, configured to obtain an environmental object image and to extract an object point feature and an object line feature from the environmental object image;
a statistical module, configured to count the point feature quantity of the object point feature and to count the line feature quantity of the object line feature;
a weight determining module, configured to determine, according to the point feature quantity, the point feature weight of the point feature quantity and the line feature weight of the line feature quantity;
a weighting adjustment module, configured to perform weighted adjustment on the point feature quantity and the line feature quantity according to the point feature weight and the line feature weight, to obtain a weighted point feature quantity and a weighted line feature quantity;
an extraction module, configured to extract, from the object point feature, a target point feature satisfying the weighted point feature quantity, and to extract, from the object line feature, a target line feature satisfying the weighted line feature quantity;
a tracing module, configured to track an environmental object according to the target point feature and the target line feature, the environmental object being used by a system to perform motion tracking on a moving object and to determine the pose corresponding to the moving object.
9. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811608710.7A CN109712170B (en) | 2018-12-27 | 2018-12-27 | Environmental object tracking method and device based on visual inertial odometer |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109712170A true CN109712170A (en) | 2019-05-03 |
CN109712170B CN109712170B (en) | 2021-09-07 |
Family
ID=66258518
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811608710.7A Active CN109712170B (en) | 2018-12-27 | 2018-12-27 | Environmental object tracking method and device based on visual inertial odometer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109712170B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103149939A (en) * | 2013-02-26 | 2013-06-12 | 北京航空航天大学 | Dynamic target tracking and positioning method of unmanned plane based on vision |
US20140241576A1 (en) * | 2013-02-28 | 2014-08-28 | Electronics And Telecommunications Research Institute | Apparatus and method for camera tracking |
US20150347840A1 (en) * | 2014-05-27 | 2015-12-03 | Murata Machinery, Ltd. | Autonomous vehicle, and object recognizing method in autonomous vehicle |
CN105283905A (en) * | 2013-06-14 | 2016-01-27 | 高通股份有限公司 | Robust tracking using point and line features |
CN105953796A (en) * | 2016-05-23 | 2016-09-21 | 北京暴风魔镜科技有限公司 | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone |
CN106127810A (en) * | 2016-06-24 | 2016-11-16 | 惠州紫旭科技有限公司 | The recording and broadcasting system image tracking method of a kind of video macro block angle point light stream and device |
CN107784671A (en) * | 2017-12-01 | 2018-03-09 | 驭势科技(北京)有限公司 | A kind of method and system positioned immediately for vision with building figure |
CN107869989A (en) * | 2017-11-06 | 2018-04-03 | 东北大学 | A kind of localization method and system of the fusion of view-based access control model inertial navigation information |
CN108665540A (en) * | 2018-03-16 | 2018-10-16 | 浙江工业大学 | Robot localization based on binocular vision feature and IMU information and map structuring system |
CN108682027A (en) * | 2018-05-11 | 2018-10-19 | 北京华捷艾米科技有限公司 | VSLAM realization method and systems based on point, line Fusion Features |
Non-Patent Citations (2)
Title |
---|
JEFFERY R. LAYNE ET AL: "Multiple-model estimator for closely coupled high-range-resolution automatic target recognition and moving target indication tracking systems", 《Airborne Radar》 *
JUN LI ET AL: "Feature-point tracking approach to time-to-contact estimation for moving objects", 《CONTROL THEORY AND APPLICATIONS》 * |
Also Published As
Publication number | Publication date |
---|---|
CN109712170B (en) | 2021-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Engel et al. | Direct sparse odometry | |
Dai et al. | Rgb-d slam in dynamic environments using point correlations | |
Felsberg et al. | The thermal infrared visual object tracking VOT-TIR2015 challenge results | |
Wuest et al. | Adaptive line tracking with multiple hypotheses for augmented reality | |
US8532367B2 (en) | System and method for 3D wireframe reconstruction from video | |
US11037325B2 (en) | Information processing apparatus and method of controlling the same | |
Schubert et al. | Direct sparse odometry with rolling shutter | |
US10825197B2 (en) | Three dimensional position estimation mechanism | |
CN106489170A (en) | The inertial navigation of view-based access control model | |
US20230206565A1 (en) | Providing augmented reality in a web browser | |
JP2013020616A (en) | Object tracking method and object tracking device | |
CN109974721A (en) | A kind of vision winding detection method and device based on high-precision map | |
Elhayek et al. | Fully automatic multi-person human motion capture for vr applications | |
Oka et al. | Head Pose Estimation System Based on Particle Filtering with Adaptive Diffusion Control. | |
Yao et al. | Robust RGB-D visual odometry based on edges and points | |
CN108876806A (en) | Method for tracking target and system, storage medium and equipment based on big data analysis | |
CN112861808A (en) | Dynamic gesture recognition method and device, computer equipment and readable storage medium | |
CN106846367A (en) | A kind of Mobile object detection method of the complicated dynamic scene based on kinematic constraint optical flow method | |
Rozumnyi et al. | Non-causal tracking by deblatting | |
Kottath et al. | Mutual information based feature selection for stereo visual odometry | |
Li et al. | Feature tracking based on line segments with the dynamic and active-pixel vision sensor (DAVIS) | |
Bang et al. | Camera pose estimation using optical flow and ORB descriptor in SLAM-based mobile AR game | |
Li et al. | RD-VIO: Robust visual-inertial odometry for mobile augmented reality in dynamic environments | |
CN107665495B (en) | Object tracking method and object tracking device | |
Dong et al. | Standard and event cameras fusion for feature tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder | ||
Address after: 510070 15 building, 100 martyrs Road, Yuexiu District, Guangzhou, Guangdong. Patentee after: Institute of intelligent manufacturing, Guangdong Academy of Sciences Address before: 510070 15 building, 100 martyrs Road, Yuexiu District, Guangzhou, Guangdong. Patentee before: GUANGDONG INSTITUTE OF INTELLIGENT MANUFACTURING |