CN106127776A - Robot target recognition and motion decision method based on multi-feature spatio-temporal context - Google Patents


Info

Publication number
CN106127776A
CN106127776A
Authority
CN
China
Prior art keywords: target, frame, context, image block, robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610491136.6A
Other languages
Chinese (zh)
Other versions
CN106127776B (en)
Inventor
贾松敏
徐涛
曾迪诗
宣璇
李秀智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN201610491136.6A
Publication of CN106127776A
Application granted
Publication of CN106127776B
Expired - Fee Related
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques


Abstract

A robot target recognition and motion decision method based on a multi-feature spatio-temporal context, belonging to the field of robotics. The method first performs block clustering on the image using its color and texture features, completes the initialization of the spatio-temporal context model for the target in the first frame, and constructs the sparse-representation equation of the target blocks. Then, with the clustered blocks as the basic unit, the clustering confidence values are combined with contextual features to build a confidence map whose basic unit is the image block. Finally, the maximum of the confidence-map likelihood probability gives the predicted target location in the next frame. Compared with previous methods, the invention enriches the feature description of the target and improves robustness under complex backgrounds; in addition, the introduction of block clustering preserves the real-time performance of the algorithm. After the algorithm completes recognition and tracking of the target, the tracking result serves as the basis for the robot's motion decision, completing the robot's recognition and tracking of the target.

Description

Robot target recognition and motion decision method based on multi-feature spatio-temporal context
Technical field
The invention belongs to the field of robotics, and relates to a robot target recognition and motion decision method based on a multi-feature spatio-temporal context.
Background art
As the range of robot applications grows ever wider, technologies related to intelligent robots have attracted great attention. Intelligent service robots in particular have been a popular direction of robot development in recent years: to provide services to a target person, a service robot must first recognize the target and then make its motion decisions accordingly. Target recognition is a classical topic in machine vision research, and researchers at home and abroad have proposed many tracking methods. Mean Shift is a classic recognition and tracking algorithm; to address the inadequacy of its color histogram for representing the target's color distribution, Li proposed an adaptive color histogram based on a clustering algorithm and achieved good recognition and tracking results. To reduce the impact of illumination changes on tracking, Lu et al. used local binary texture features to recognize and track the target. However, a single feature does not describe the target comprehensively, and tracking robustness suffers. For this problem, Nigam proposed fusing gradient and color features within a particle filter framework for target recognition and tracking, and Gu et al. studied a multi-feature-fusion tracking algorithm with adaptive additive and multiplicative variable weights. Many other target recognition and tracking algorithms exist, but none of them copes well when the target is occluded or the environment changes drastically. On this problem, Yang et al. proposed using auxiliary objects or the target's local context information to assist tracking, offering a new line of thought, and Zhang et al. realized target tracking by updating spatio-temporal context information within a Bayesian framework, performing well under occlusion and illumination change; however, that algorithm uses only a single image feature, and its robustness is poor when the target moves quickly or the background changes violently.
Summary of the invention
To address the problems above, the present invention proposes a target tracking method based on a spatio-temporal context that fuses multiple features with a block-wise sparse-structure representation of the context. First, the image is block-clustered using its color and texture features, the spatio-temporal context model of the first-frame target is initialized, and the sparse-representation equation of the target blocks is constructed. Then, with the clustered blocks as the basic unit, the clustering confidence values are combined with contextual features to build a confidence map whose basic unit is the image block. Finally, the maximum of the confidence-map likelihood probability gives the predicted target location in the next frame. Compared with previous methods, the invention enriches the feature description of the target and improves robustness under complex backgrounds; in addition, the introduction of block clustering preserves the real-time performance of the algorithm. After the algorithm completes recognition and tracking of the target, the tracking result serves as the basis for the robot's motion decision, completing the robot's recognition and tracking of the target.
The technical solution adopted by the present invention is as follows:
First, the scene image is segmented: the simple linear iterative clustering (SLIC) algorithm clusters pixels into image blocks using the image's color, texture, and distance information; each image block is then assigned a weight and screened, yielding the sparse-representation equation of the image blocks and completing segmentation and screening of the image. Next, a spatio-temporal context model is built with the resulting image blocks as the basic unit. With the target location in the current frame known (in the first frame it is specified manually), the context prior model of the image blocks is first obtained; then, under a Bayesian framework, the spatial context model of the current frame is derived from the target-location confidence map and the context prior model of the image blocks; the spatial context is then updated to obtain the spatio-temporal context model for the next frame, and the scale parameter is updated. With the spatio-temporal context model of the next frame available, when that frame is processed the target-location confidence map is obtained from the context prior model of its image blocks and the spatio-temporal context model; the location of maximum confidence-map probability is taken as the target center, completing the tracking of the target for that frame. After the target location is obtained, the spatio-temporal context model is updated again, and the cycle repeats to achieve real-time target tracking. Finally, once the target position has been obtained by tracking, the robot's motion decision is made according to that position, realizing robot recognition and tracking of the target. The method specifically comprises the following steps:
Step 1, scene image segmentation
Pixels are clustered using the image's color, texture, and distance similarity features. Using the three-dimensional color information of the Lab color space together with position information, and introducing local entropy to represent the texture of a pixel, image blocks with good compactness, tight boundary adherence, and regular shape are formed, realizing the clustering of pixels.
Step 1.1, obtaining the local entropy
The local entropy $h_i$ is approximated by formula (1):

$$h_i = -\sum_{i=1}^{256} p_i \log p_i \qquad (1)$$

where $p_i$ denotes the probability of the current pixel value $i$ among the total number of local pixels.
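As an illustration, the sketch below evaluates formula (1) on an 8-bit gray patch with numpy; the 9x9 window size is an assumption for the example, since the window size is not fixed at this point in the text.

```python
import numpy as np

def local_entropy(patch: np.ndarray) -> float:
    """Local entropy of an 8-bit gray patch, per formula (1):
    h_i = -sum_i p_i * log(p_i), with p_i the fraction of local
    pixels at gray level i (empty bins contribute nothing)."""
    hist = np.bincount(patch.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()              # p_i: share of each gray level
    p = p[p > 0]                       # avoid log(0)
    return float(-(p * np.log(p)).sum())

# A flat patch has zero entropy; a noisy one does not.
flat = np.full((9, 9), 128, dtype=np.uint8)
noisy = np.random.default_rng(0).integers(0, 256, (9, 9), dtype=np.uint8)
print(local_entropy(flat), local_entropy(noisy))
```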
Step 1.2, clustering the pixels
Sampling starts from the initial cluster centers $C_k = [a_k, b_k, h_k, x_k, y_k]$. To reduce the probability of seeding on a noisy pixel and interference from edge noise, each cluster center is shifted to the position of minimum gradient within its 3x3 neighborhood, and each pixel is clustered to the nearest cluster center according to the distance $D_i$ of formula (2):

$$D_i = \mu\sqrt{(a_k-a_i)^2+(b_k-b_i)^2} + \mu\,|h_k-h_i| + (1-2\mu)\sqrt{(x_k-x_i)^2+(y_k-y_i)^2} \qquad (2)$$

where $a_k, b_k$ are the color components of the Lab color space $[L_k, a_k, b_k]$ of pixel $k$, $h_i$ is the local entropy obtained from formula (1), $[x_i, y_i]$ are the coordinates of pixel $i$, and $\mu$ is an empirical weight parameter, taken here as $\mu = 0.4$.
Step 1.3, updating the cluster centers
Once a pixel has been clustered to its nearest center, the cluster center $\Phi_k$ is updated to the mean vector of all pixels in the cluster region:

$$\Phi_k = \frac{1}{N}\sum_{i \in Z_k} [C_i, S_i] \qquad (3)$$

where $Z_k$ denotes the cluster region centered on $\Phi_k$, $N$ is the number of pixels contained in the region, $C_i$ denotes the feature component of pixel $i$, and $S_i$ its two-dimensional spatial location.
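A minimal numpy sketch of one assignment/update pass of Steps 1.2-1.3 follows, assuming each pixel is packed as the 5-vector [a, b, h, x, y] used above; the gradient-based seeding of the centers is omitted for brevity, and the data are random toy values.

```python
import numpy as np

MU = 0.4  # empirical weight from Step 1.2

def cluster_distance(center: np.ndarray, pixel: np.ndarray) -> float:
    """Distance D_i of formula (2) between a cluster center and a pixel.
    Both are 5-vectors [a, b, h, x, y]: Lab chroma, local entropy, position."""
    a_k, b_k, h_k, x_k, y_k = center
    a_i, b_i, h_i, x_i, y_i = pixel
    color = np.hypot(a_k - a_i, b_k - b_i)
    space = np.hypot(x_k - x_i, y_k - y_i)
    return MU * color + MU * abs(h_k - h_i) + (1 - 2 * MU) * space

def update_center(members: np.ndarray) -> np.ndarray:
    """Formula (3): new center = mean [C_i, S_i] vector of the cluster's pixels."""
    return members.mean(axis=0)

# Assign each pixel to its nearest center, then re-estimate the centers.
rng = np.random.default_rng(1)
pixels = rng.random((100, 5))
centers = pixels[rng.choice(100, 4, replace=False)]
labels = np.array([min(range(4), key=lambda k: cluster_distance(centers[k], p))
                   for p in pixels])
centers = np.stack([update_center(pixels[labels == k]) for k in range(4)])
```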
Step 2, image-block screening based on sparse representation
Sparse representation is introduced to assign each block a different weight; after nearest-neighbor region screening, the retained blocks serve as the target's context-region blocks, which better handles target occlusion and improves real-time performance.
Step 2.1, establishing the sparse equation of the image blocks and assigning weights
For the $n$-th block $y_n$, the weighted sparse-representation equation is:

$$\arg\min_{A_n, W_n}\left(\|y_n - D_n A_n\|_2^2 + \eta\|A_n\|_2^2 + \mu W_n\|A_n - \bar{A}\|_2^2\right) \qquad (4)$$

where the cluster-center distance $D_n$ is computed by formula (2) and $S_n$ is the two-dimensional spatial location; $A_n$ is the sparse coefficient vector of the $n$-th block, $W_n$ is its sparse weight coefficient, $\bar{A}$ is the mean of the sparse coefficients of the blocks in the context region, and $\eta, \mu$ are positive constants that regulate the influence of the similarity constraint on the sparse representation.
Step 2.2, solving the sparse equation
Solving the sparse equation for $A_n$ and $W_n$ is an optimization problem, solved by iteration. The main idea is to hold $W_n$ fixed and update $A_n$, then hold the computed $A_n$ fixed and solve for $W_n$, repeating until $A_n$ and $W_n$ converge to a local minimum or the iteration limit is reached. Assuming the weights $W_1, W_2, \dots, W_m$ of all blocks are known, the sparse coefficients are obtained from formula (4):

$$A_n = M_n + \mu W_n P_n \sum_{l=1}^{m}(W_l A_l)\Big/\sum_{l=1}^{m} W_l \qquad (5)$$

where $P_n = (D^T D + \Lambda(\eta W_n))^{-1}$, $M_n = P_n D_n^T y_n$, $D = \Phi_k$, and $\Lambda$ is the identity diagonal matrix. Having obtained the sparse coefficient vector $A_n$ of block $y_n$ from the formula above, the weights are obtained from:

$$\arg\min_{W_n}\sum_{n=1}^{m}\left(\mu W_n\|A_n - \bar{A}\|^2 + L\,W_n \ln W_n\right) \qquad (6)$$

which yields:

$$W_n = e^{-1-\mu\|A_n-\bar{A}\|/L} \qquad (7)$$

where $L$ is a Lagrange multiplier.
With the iteration limit set to $t_{max}$, the steps for solving the sparse coefficient $A_n$ and weight $W_n$ of the $n$-th block by iteration can be summarized as follows:
Input: (1) $D_n$, (2) $y_n$
1) initialize: $W_n = 1$, $n = 1, 2, \dots, m$
2) while $t < t_{max}$ and $A_n$, $W_n$ have not converged:
3) compute $A_n$ by formula (5)
4) compute $W_n$ by formula (7)
5) end while
Output: (1) $A_n$, (2) $W_n$
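To make the alternation concrete, here is a runnable numpy sketch of the solver loop. It follows the spirit of formulas (5)-(7), a ridge-style closed form for each $A_n$ with $W_n$ held fixed and then the exponential update for $W_n$, but the exact bookkeeping of $\Lambda$ and the constant $L$ is an assumption, and the values of eta, mu, L, and t_max are illustrative.

```python
import numpy as np

def solve_sparse_weights(D, Y, eta=0.1, mu=0.4, L=1.0, t_max=20):
    """Alternating sketch of formulas (5)-(7): hold W fixed and solve the
    regularized least squares for each A_n, then refresh each W_n from the
    distance of A_n to the weighted mean A_bar. Parameter values are
    examples, not taken from the patent."""
    m = Y.shape[1]                      # number of blocks y_n (columns of Y)
    k = D.shape[1]                      # dictionary size
    A = np.zeros((k, m))
    W = np.ones(m)                      # 1) initialize W_n = 1
    for _ in range(t_max):              # 2) iterate up to t_max
        A_bar = (A * W).sum(axis=1) / W.sum()
        for n in range(m):              # 3) A_n: ridge-style closed form
            P = np.linalg.inv(D.T @ D + (eta * W[n] + mu * W[n]) * np.eye(k))
            A[:, n] = P @ (D.T @ Y[:, n] + mu * W[n] * A_bar)
        # 4) W_n from formula (7)
        W = np.exp(-1.0 - mu * np.linalg.norm(A - A_bar[:, None], axis=0) / L)
    return A, W

rng = np.random.default_rng(2)
D, Y = rng.random((16, 8)), rng.random((16, 5))
A, W = solve_sparse_weights(D, Y)
```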
Step 3, establishing the spatio-temporal context model
With segmentation and sparse screening of the image completed in the steps above, a sparse image with image blocks as the basic unit has been obtained. Below, with the target location in the current frame (frame t) known (for the first frame it is specified directly), the context model based on image blocks is established, preparing for target tracking in the next frame (frame t+1).
Step 3.1, establishing the contextual features of the target-region cluster centers
Assume the target's context region is divided into $M_c$ blocks. Let $R_n(d)$ denote the $d$-th image block of the target context region in frame $n$, with cluster-center position $CR(d)$ and visual feature $f_n^d(x_n)$. The contextual feature of the cluster centers is defined as:

$$X^{ctx} = \left\{ ctx(CR(d)) = \left(f_n^d(x_n),\, CR(d)\right),\ d = 1, \dots, M_c \right\} \qquad (8)$$
Step 3.2, establishing the context prior model of the image blocks
By weighting the visual features of the image blocks, the context prior model based on image blocks is established:

$$P(ctx(CR(d)) \mid O) = f_n^d(x_n)\, w_\sigma(CR(d) - x_n)\, W_n \qquad (9)$$

where $w_\sigma$ is the weighting function of the visual attention mechanism, determined by the distance from an image block's cluster center to the target center $x_n$: the closer an image block is to the target's current location, the greater its weight, as its context information matters more to predicting the target location in the next frame.
Step 3.3, obtaining the spatial context model based on image blocks
With the target-location confidence map and the context prior model of the current frame (frame t) now available, the spatial context model of the current frame can be obtained under the Bayesian framework. To reduce computation time, the calculation is accelerated with the FFT; the image-block-based spatial context model of frame t is then:

$$H_n^{ctx}(R_n(d)) = F^{-1}\!\left(\frac{F\big(CR(x_n)\big)}{F\!\left(f_n^d(x_n)\, w_\sigma(CR(d)-x_n)\right)}\right) \qquad (10)$$

where $F^{-1}(\cdot)$ denotes the inverse Fourier transform and $F(\cdot)$ the Fourier transform.
Step 3.4, spatio-temporal context update
The spatial contexts are accumulated by weighting to obtain the spatio-temporal context model used for target tracking in the next frame (frame t+1):

$$H_{n+1}^{stc}(R_{n+1}(d)) = (1-\rho)\,H_n^{stc}(R_n(d)) + \rho\,H_n^{ctx}(R_n(d)) \qquad (11)$$

where $\rho$ is the update parameter. For the first frame, the spatial context model serves as the spatio-temporal context model.
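A compact numpy sketch of formulas (10) and (11) follows. Reading formula (10) as a Fourier-domain deconvolution of the confidence map by the prior is an interpretation of the garbled source; the eps guard and the value of rho are assumptions for the example.

```python
import numpy as np

def spatial_context(conf_map, prior, eps=1e-6):
    """Formula (10) as a sketch: deconvolve the confidence map by the
    context prior in the Fourier domain (eps guards the division)."""
    return np.real(np.fft.ifft2(np.fft.fft2(conf_map) /
                                (np.fft.fft2(prior) + eps)))

def update_stc(h_stc, h_sc, rho=0.075):
    """Formula (11): exponential blend of the running spatio-temporal
    model with the new spatial context; rho is an example value."""
    return (1.0 - rho) * h_stc + rho * h_sc

rng = np.random.default_rng(3)
conf, prior = rng.random((64, 64)), rng.random((64, 64))
h_stc = spatial_context(conf, prior)   # first frame: spatial model doubles as STC model
h_stc = update_stc(h_stc, spatial_context(conf, prior))
```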
Step 4, target tracking
Step 4.1, building the confidence map
With the spatio-temporal context model obtained from the previous frame (frame t), target detection and tracking is performed on the next frame (frame t+1). The context prior model of frame t+1 is first obtained as in formula (9); the image-block-based confidence map of frame t+1 is then built from the spatio-temporal context model and the context prior model of frame t+1:

$$CR(x_{n+1}) = F^{-1}\!\left(F\!\big(H_{n+1}^{stc}(R_{n+1}(d))\big)\, F\!\big(P(ctx(CR(d)) \mid O)\big)\right) \qquad (12)$$
Step 4.2, locating the target
The confidence map gives the probability of the target appearing at each cluster center, so the position of maximum confidence-map probability is taken as the target location:

$$x_{n+1}^{*} = \arg\max_{x \in \Omega_{ctx}(x_n^{*})} CR(x_{n+1}) \qquad (13)$$
Step 4.3, scale update
The target's contour and size change continually, so to maintain robustness the scale parameter $\sigma$ must be updated as well; the update is driven by the tracked target location $x_{t+1}^{*}$ and yields the updated scale parameter $\sigma_{t+1}$.
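The tracking step reduces to a convolution, an argmax, and a scale adjustment; a numpy sketch follows. The scale update shown is an assumption: the patent's scale formulas are not reproduced in this text, so the code borrows the usual spatio-temporal-context heuristic of drifting the scale with the ratio of successive confidence peaks.

```python
import numpy as np

def confidence_map(h_stc, prior):
    """Formula (12) as a sketch: convolve the STC model with the prior
    in the Fourier domain to score each location."""
    return np.real(np.fft.ifft2(np.fft.fft2(h_stc) * np.fft.fft2(prior)))

def locate_target(conf):
    """Formula (13): the confidence-map maximum is the new target center."""
    return np.unravel_index(np.argmax(conf), conf.shape)

def update_sigma(sigma, conf_now, conf_prev, lam=0.25):
    """Scale-update sketch (assumed, not from the patent text): the scale
    drifts with the ratio of successive peak confidences, smoothed by lam."""
    s = np.sqrt(conf_now.max() / max(conf_prev.max(), 1e-12))
    return (1.0 - lam) * sigma + lam * sigma * s

rng = np.random.default_rng(4)
c0, c1 = rng.random((64, 64)), rng.random((64, 64))
print(locate_target(c1), update_sigma(2.0, c1, c0))
```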
Step 5, robot motion decision
The hardware platform of this method is a mobile robot carrying a Kinect; the camera on the Kinect collects the scene images used for target detection. To let the robot follow the target continuously and stably, an intelligent speed-regulation algorithm based on fuzzy control rules governs the robot's left and right wheel speeds. According to the robot motion model (see Fig. 2), when the robot travels at linear velocity $v$, its left and right wheel speeds are computed respectively as:

$$v_l = v\left(1 - 2dK\,Y_r/(X_r^2 + Y_r^2)\right), \qquad v_r = v\left(1 + 2dK\,Y_r/(X_r^2 + Y_r^2)\right) \qquad (16)$$

where $K$ is the steering gain and $2d$ is the robot's wheel spacing.
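Formula (16) translates directly into a few lines of plain Python; the parameter values in the example calls are illustrative only.

```python
def wheel_speeds(v, K, d, x_r, y_r):
    """Formula (16): differential-drive wheel speeds from the baseline
    linear velocity v, steering gain K, half wheel spacing d (the patent's
    2d is the full spacing), and the target's relative position (x_r, y_r)."""
    turn = 2.0 * d * K * y_r / (x_r**2 + y_r**2)
    return v * (1.0 - turn), v * (1.0 + turn)

# Target dead ahead (y_r = 0): both wheels run at v, the robot goes straight.
print(wheel_speeds(100.0, 1.5, 0.2, 2.0, 0.0))
# Target offset to one side: the wheel speeds split to steer toward it.
print(wheel_speeds(100.0, 1.5, 0.2, 2.0, 0.5))
```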
Step 5.1, determining the membership functions and fuzzy sets for fuzzification
A linear membership function adjusts quickly when the human-robot distance and its rate of change are large, while a curved function changes smoothly, which benefits control stability. Triangular membership functions are therefore used when the human-robot distance and its rate of change are large, and Gaussian membership functions are used when the distance is within the safe range.
Fuzzification converts the precise input quantities into fuzzy quantities. The fuzzy subsets over each universe of discourse are: for $X_r$, five sets "very near (VN)", "near (N)", "normal (ZE)", "far (F)", "very far (VF)"; for $v_{px}$, five sets "negative big (NB)", "negative small (NS)", "normal (ZE)", "positive small (PS)", "positive big (PB)"; for $v$, five sets "very low (VL)", "low (L)", "medium (M)", "high (H)", "very high (VH)"; for $Y_r$ and $v_{py}$, five sets "negative big (NB)", "negative small (NS)", "normal (ZE)", "positive small (PS)", "positive big (PB)"; for $K$, five sets "very low (VL)", "low (L)", "medium (M)", "high (H)", "very high (VH)". The effective universes of the parameters were obtained by experiment: $X_r \in [0,3]$, $v_{px} \in [-1,1]$, $v \in [0,200]$, $Y_r \in [-1,1]$, $v_{py} \in [-1,1]$, $K \in [0,3]$.
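To show what fuzzification of one input looks like, the sketch below mixes triangular and Gaussian membership functions over the $X_r$ universe [0, 3], as the text prescribes; every set center and width is an assumption, since the patent gives only the set names and universes.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership: rises from a to peak b, falls to c."""
    return float(np.clip(min((x - a) / (b - a + 1e-12),
                             (c - x) / (c - b + 1e-12)), 0.0, 1.0))

def gauss(x, c, s):
    """Gaussian membership centered at c with width s."""
    return float(np.exp(-((x - c) ** 2) / (2.0 * s ** 2)))

# Fuzzify the human-robot distance X_r over [0, 3]: triangular sets at the
# extremes, a Gaussian "normal (ZE)" set inside the safe range. All centers
# and widths are illustrative.
x_r = 1.2
degrees = {
    "VN": tri(x_r, -0.1, 0.0, 0.75), "N": tri(x_r, 0.0, 0.75, 1.5),
    "ZE": gauss(x_r, 1.5, 0.35),
    "F": tri(x_r, 1.5, 2.25, 3.0), "VF": tri(x_r, 2.25, 3.0, 3.1),
}
print(degrees)
```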
Step 5.2, establishing the control rules
$R_{1i}$: if $Q_1 = A_i$ and $Q_2 = B_i$, then $v = C_i$
$R_{2i}$: if $Q_3 = D_i$ and $Q_4 = E_i$, then $K = F_i$
$R_{1i}$ is the fuzzy control rule for the baseline linear velocity. $Q_1$ denotes the human-robot vertical-distance linguistic variable and $Q_2$ its rate-of-change linguistic variable; $Q_3$ denotes the human-robot horizontal-distance linguistic variable and $Q_4$ its rate-of-change linguistic variable. $v$ and $K$ denote the baseline linear velocity and steering-gain linguistic variables, respectively. Their linguistic values in the corresponding universes are the fuzzy subsets $A_i$, $B_i$, $C_i$, $D_i$, $E_i$, $F_i$.
Table 1: Fuzzy control rules for the baseline linear velocity
Table 2: Fuzzy control rule table for the steering gain
The robot's linear velocity is adjusted by the baseline-linear-velocity fuzzy controller: when the human-robot vertical distance exceeds the safe distance, the system increases the robot's linear velocity to follow the target quickly; when the vertical distance is below the safe distance, the linear velocity is reduced to keep a safe human-robot distance; when the vertical distance is too small, the robot stops to prevent a collision. The steering gain is adjusted by the steering-gain fuzzy controller: when the human-robot horizontal distance is too large, the steering gain increases and the turning radius decreases, so the robot adjusts its heading quickly while traveling to keep the target centered in the field of view. The rules are shown in Table 1 and Table 2.
Step 5.3, defuzzification
After the logical inference, the fuzzy output is converted back to a crisp value by the centroid method. For a fuzzy control system described by rules, stability can be analyzed through the relational matrix according to fuzzy set theory.
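The inference-plus-centroid step can be seen end to end on a toy pair of rules. The sketch below clips two assumed output sets for $v$ by their rule firing strengths, aggregates them by maximum, and takes the centroid; all membership shapes, rule pairings, and firing degrees are illustrative, since Tables 1 and 2 are not reproduced in this text.

```python
import numpy as np

# Mamdani-style evaluation of two illustrative rules for the baseline
# velocity v over its universe [0, 200], followed by centroid defuzzification.
v_axis = np.linspace(0.0, 200.0, 201)

def tri(x, a, b, c):
    """Vectorized triangular membership over an axis."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12),
                              (c - x) / (c - b + 1e-12)), 0.0, 1.0)

# Output sets for v: "low (L)" and "high (H)" (example shapes).
v_low, v_high = tri(v_axis, 0, 50, 100), tri(v_axis, 100, 150, 200)

# Rule firing strengths from already-fuzzified inputs (example degrees):
# R1: distance is N and closing-rate is NS -> v = L
# R2: distance is F and closing-rate is PS -> v = H
w1, w2 = min(0.7, 0.4), min(0.3, 0.8)

# Clip each output set by its rule strength and aggregate by max.
agg = np.maximum(np.minimum(v_low, w1), np.minimum(v_high, w2))

# Centroid method: the crisp output is the area-weighted center of the aggregate.
v_crisp = float((v_axis * agg).sum() / agg.sum())
print(v_crisp)
```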
This completes one algorithm cycle. After the target location of frame t+1 is obtained, the steps above are repeated: the spatial context model and spatio-temporal context model of frame t+1 are updated in preparation for the target update of the next frame (frame t+2), robot control is performed according to the tracking result, and the robot collects the scene image of frame t+2 and continues the operations above. This continuous loop realizes real-time target tracking.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 is the robot motion model.
Detailed description of the invention
The present invention is further elaborated below with reference to the accompanying drawings.
Step 1, scene image segmentation
The image is segmented with the simple linear iterative clustering (SLIC) algorithm: pixels are clustered using the image's color, texture, and distance similarity features, forming compact image blocks with clear boundaries.
Step 1.1, obtaining the local entropy
Step 1.2, clustering the pixels
Step 1.3, updating the cluster centers
Step 2, image-block screening based on sparse representation
To better handle occlusion and improve real-time performance, the image blocks are given a sparse representation, different blocks are assigned different weights, and the neighbor blocks of the target block are screened out as the context region.
Step 2.1, establishing the sparse equation of the image blocks and assigning weights
Step 2.2, solving the sparse equation
After the sparse representation of the image blocks is obtained, the sparse equation is solved to yield the sparse coefficients and weights.
Step 3, establishing the spatio-temporal context model
The spatio-temporal context model based on the image blocks obtained above is established in preparation for the target tracking that follows.
Step 3.1, establishing the contextual features of the target-region cluster centers
The contextual features of the target region are obtained with the target region known.
Step 3.2, establishing the context prior model of the image blocks
The current context prior model is then obtained.
Step 3.3, obtaining the spatial context model based on image blocks
From the target location and the context prior, the spatial context model is obtained under the Bayesian framework.
Step 3.4, spatio-temporal context update
The spatio-temporal context model is obtained by the weighted update of the spatial context, preparing for target tracking in the next frame.
Step 4, target tracking
The target-location confidence map is obtained from the spatio-temporal context model of the previous frame and the context prior model of the current frame; the position of maximum confidence-map probability is taken as the target location. After the target location is obtained, the spatial context model of the current frame is updated, yielding the new spatio-temporal context model in preparation for target tracking in the next frame.
Step 4.1, building the confidence map
Step 4.2, locating the target
Step 4.3, scale update
To adapt to changes in the target's shape and size during motion, a scale update is applied to the target region.
Step 5, robot motion decision
The robot's motion decision is made from the target location obtained above, using a fuzzy control method.
Step 5.1, determining the membership functions and fuzzy sets for fuzzification
Step 5.2, establishing the control rules
Step 5.3, defuzzification.

Claims (2)

1. A robot target recognition and motion decision method based on a multi-feature spatio-temporal context, characterized in that:
the technical scheme adopted by this method is as follows:
first, the scene image is segmented: the simple linear iterative clustering algorithm clusters pixels into image blocks using the color, texture, and distance information of the image; each image block is then assigned a weight and screened, yielding the sparse-representation equation of the image blocks and completing segmentation and screening of the image; next, a spatio-temporal context model is built with the image blocks obtained by segmentation as the basic unit; with the target location in the current frame known (in the first frame it is specified manually), the context prior model of the image blocks is first obtained; then, under a Bayesian framework, the spatial context model of the current frame is derived from the target-location confidence map and the context prior model of the image blocks; the spatial context is then updated to obtain the spatio-temporal context model of the next frame, and the scale parameter is updated; after the spatio-temporal context model of the next frame is obtained, when that frame is processed the target-location confidence map is obtained from the context prior model of its image blocks and the spatio-temporal context model, and the location of maximum confidence-map probability is taken as the target center, completing the tracking of the target; after the target location is obtained, the spatio-temporal context model is updated again, and the cycle repeats to achieve real-time target tracking; finally, after the target position has been obtained by tracking, the robot's motion decision is made according to the target position, realizing robot recognition and tracking of the target; the method specifically comprises the following steps:
Step 1, scene image segmentation:
pixels are clustered using the image's color, texture, and distance similarity features; using the three-dimensional color information of the Lab color space together with position information, and introducing local entropy to represent the texture of a pixel, image blocks with good compactness, tight boundary adherence, and regular shape are formed, realizing the clustering of pixels;
Step 1.1, obtaining the local entropy:
the local entropy $h_i$ is approximated by formula (1):
$$h_i = -\sum_{i=1}^{256} p_i \log p_i \qquad (1)$$
where $p_i$ denotes the probability of the current pixel value $i$ among the total number of local pixels;
Step 1.2, clustering the pixels:
sampling starts from the initial cluster centers $C_k = [a_k, b_k, h_k, x_k, y_k]$; to reduce the possibility of seeding on a noisy pixel and interference from edge noise, each cluster center is shifted to the position of minimum gradient within its 3x3 neighborhood, and each pixel is clustered to the nearest cluster center according to the distance $D_i$ of formula (2):
$$D_i = \mu\sqrt{(a_k-a_i)^2+(b_k-b_i)^2} + \mu\,|h_k-h_i| + (1-2\mu)\sqrt{(x_k-x_i)^2+(y_k-y_i)^2} \qquad (2)$$
where $a_k, b_k$ are the color components of the Lab color space $[L_k, a_k, b_k]$ of pixel $k$, $h_i$ is the local entropy from formula (1), $[x_i, y_i]$ are the coordinates of pixel $i$, and $\mu$ is an empirical weight parameter, here $\mu = 0.4$;
Step 1.3, updating the cluster centers:
once a pixel has been clustered to its nearest center, the cluster center $\Phi_k$ is updated to the mean vector of all pixels in the cluster region:
$$\Phi_k = \frac{1}{N}\sum_{i \in Z_k} [C_i, S_i] \qquad (3)$$
where $Z_k$ denotes the cluster region centered on $\Phi_k$, $N$ is the number of pixels contained in the region, $C_i$ denotes the feature component of pixel $i$, and $S_i$ its two-dimensional spatial location;
Step 2, image-block screening based on sparse representation:
sparse representation is introduced to assign each block a different weight, and after nearest-neighbor region screening the retained blocks serve as the target's context-region blocks, which better handles target occlusion and improves real-time performance;
Step 2.1, establishing the sparse equation of the image blocks and assigning weights:
for the $n$-th block $y_n$, the weighted sparse-representation equation is:
$$\arg\min_{A_n, W_n}\left(\|y_n - D_n A_n\|_2^2 + \eta\|A_n\|_2^2 + \mu W_n\|A_n - \bar{A}\|_2^2\right) \qquad (4)$$
where the cluster-center distance $D_n$ is computed by formula (2) and $S_n$ is the two-dimensional spatial location; $A_n$ is the sparse coefficient vector of the $n$-th block, $W_n$ is its sparse weight coefficient, $\bar{A}$ is the mean of the sparse coefficients of the blocks in the context region, and $\eta, \mu$ are positive constants regulating the influence of the similarity constraint on the sparse representation;
Step 2.2, solving the sparse equation:
solving the sparse equation for $A_n$ and $W_n$ is an optimization problem, solved by iteration; the main idea is to hold $W_n$ fixed and update $A_n$, then hold the computed $A_n$ fixed and solve for $W_n$, repeating until $A_n$ and $W_n$ converge to a local minimum or the iteration limit is reached; assuming the weights $W_1, W_2, \dots, W_m$ of all blocks are known, the sparse coefficients are obtained from formula (4):
$$A_n = M_n + \mu W_n P_n \sum_{l=1}^{m}(W_l A_l)\Big/\sum_{l=1}^{m} W_l \qquad (5)$$
where $P_n = (D^T D + \Lambda(\eta W_n))^{-1}$, $M_n = P_n D_n^T y_n$, $D = \Phi_k$, and $\Lambda$ is the identity diagonal matrix; having obtained the sparse coefficient vector $A_n$ of block $y_n$ from the formula above, the weights are obtained from:
$$\arg\min_{W_n}\sum_{n=1}^{m}\left(\mu W_n\|A_n - \bar{A}\|^2 + L\,W_n \ln W_n\right) \qquad (6)$$
which yields:
$$W_n = e^{-1-\mu\|A_n-\bar{A}\|/L} \qquad (7)$$
where $L$ is a Lagrange multiplier;
with the iteration limit set to $t_{max}$, the steps for solving the sparse coefficient $A_n$ and weight $W_n$ of the $n$-th block by iteration can be summarized as follows:
Input: (1) $D_n$, (2) $y_n$
1) initialize: $W_n = 1$, $n = 1, 2, \dots, m$
2) while $t < t_{max}$ and $A_n$, $W_n$ have not converged:
3) compute $A_n$ by formula (5)
4) compute $W_n$ by formula (7)
5) end while
Output: (1) $A_n$, (2) $W_n$;
Step 3, establishing the spatio-temporal context model:
with segmentation and sparse screening of the image completed in the steps above, a sparse image with image blocks as the basic unit has been obtained; below, with the target location in the current frame (frame t) known (for the first frame it is specified directly), the context model based on image blocks is established, preparing for target tracking in the next frame (frame t+1);
Step 3.1, establishing the contextual features of the target-region cluster centers:
assume the target's context region is divided into $M_c$ blocks; let $R_n(d)$ denote the $d$-th image block of the target context region in frame $n$, with cluster-center position $CR(d)$ and visual feature $f_n^d(x_n)$; the contextual feature of the cluster centers is defined as:
$$X^{ctx} = \left\{ ctx(CR(d)) = \left(f_n^d(x_n),\, CR(d)\right),\ d = 1, \dots, M_c \right\} \qquad (8)$$
Step 3.2, establishing the context prior model of the image blocks:
by weighting the visual features of the image blocks, the context prior model based on image blocks is established:
$$P(ctx(CR(d)) \mid O) = f_n^d(x_n)\, w_\sigma(CR(d) - x_n)\, W_n \qquad (9)$$
where $w_\sigma$ is the weighting function of the visual attention mechanism, determined by the distance from an image block's cluster center to the target center: the closer an image block is to the target's current location, the greater its weight, as its context information matters more to predicting the target location in the next frame;
Step 3.3, obtaining the spatial context model based on image blocks:
with the target-location confidence map and the context prior model of the current frame (frame t) available, the spatial context model of the current frame can be obtained under the Bayesian framework; to reduce computation time the calculation is accelerated with the FFT; the image-block-based spatial context model of frame t is then:
$$H_n^{ctx}(R_n(d)) = F^{-1}\!\left(\frac{F\big(CR(x_n)\big)}{F\!\left(f_n^d(x_n)\, w_\sigma(CR(d)-x_n)\right)}\right) \qquad (10)$$
where $F^{-1}(\cdot)$ denotes the inverse Fourier transform and $F(\cdot)$ the Fourier transform;
Step 3.4, spatio-temporal context update:
the spatial contexts are accumulated by weighting to obtain the spatio-temporal context model used for target tracking in the next frame (frame t+1):
$$H_{n+1}^{stc}(R_{n+1}(d)) = (1-\rho)\,H_n^{stc}(R_n(d)) + \rho\,H_n^{ctx}(R_n(d)) \qquad (11)$$
where $\rho$ is the update parameter; for the first frame, the spatial context model serves as the spatio-temporal context model;
Step 4, target tracking:
Step 4.1, building the confidence map:
with the spatio-temporal context model obtained from the previous frame (frame t), target detection and tracking is performed on the next frame (frame t+1); the context prior model of frame t+1 is first obtained as in formula (9); the image-block-based confidence map of frame t+1 is then built from the spatio-temporal context model and the context prior model of frame t+1:
$$CR(x_{n+1}) = F^{-1}\!\left(F\!\big(H_{n+1}^{stc}(R_{n+1}(d))\big)\, F\!\big(P(ctx(CR(d)) \mid O)\big)\right) \qquad (12)$$
Step 4.2, locating the target:
the confidence map gives the probability of the target appearing at each cluster center, so the position of maximum confidence-map probability is taken as the target location:
$$x_{n+1}^{*} = \arg\max_{x \in \Omega_{ctx}(x_n^{*})} CR(x_{n+1}) \qquad (13)$$
Step 4.3, scale update:
the target's contour and size change continually, so to maintain robustness the scale parameter $\sigma$ is updated as well, driven by the tracked target location $x_{t+1}^{*}$ and yielding the updated scale parameter $\sigma_{t+1}$;
Step 5, robot motion decision:
the hardware platform of this method is a mobile robot carrying a Kinect, whose camera collects the scene images used for target detection; to let the robot follow the target continuously and stably, an intelligent speed-regulation algorithm based on fuzzy control rules governs the robot's left and right wheel speeds; according to the robot motion model, when the robot travels at linear velocity $v$, its left and right wheel speeds are computed respectively as:
$$v_l = v\left(1 - 2dK\,Y_r/(X_r^2 + Y_r^2)\right), \qquad v_r = v\left(1 + 2dK\,Y_r/(X_r^2 + Y_r^2)\right) \qquad (16)$$
where $K$ is the steering gain and $2d$ is the robot's wheel spacing;
Step 5.1, determining the membership functions and fuzzy sets for fuzzification:
a linear membership function adjusts quickly when the human-robot distance and its rate of change are large, while a curved function changes smoothly, which benefits control stability; triangular membership functions are therefore used when the human-robot distance and its rate of change are large, and Gaussian membership functions when the distance is within the safe range;
fuzzification converts the precise input quantities into fuzzy quantities; the fuzzy subsets over each universe of discourse are: for $X_r$, five sets "very near (VN)", "near (N)", "normal (ZE)", "far (F)", "very far (VF)"; for $v_{px}$, five sets "negative big (NB)", "negative small (NS)", "normal (ZE)", "positive small (PS)", "positive big (PB)"; for $v$, five sets "very low (VL)", "low (L)", "medium (M)", "high (H)", "very high (VH)"; for $Y_r$ and $v_{py}$, five sets "negative big (NB)", "negative small (NS)", "normal (ZE)", "positive small (PS)", "positive big (PB)"; for $K$, five sets "very low (VL)", "low (L)", "medium (M)", "high (H)", "very high (VH)"; the effective universes of the parameters were obtained by experiment: $X_r \in [0,3]$, $v_{px} \in [-1,1]$, $v \in [0,200]$, $Y_r \in [-1,1]$, $v_{py} \in [-1,1]$, $K \in [0,3]$;
Step 5.2, establishing the control rules:
$R_{1i}$: if $Q_1 = A_i$ and $Q_2 = B_i$, then $v = C_i$
$R_{2i}$: if $Q_3 = D_i$ and $Q_4 = E_i$, then $K = F_i$
$R_{1i}$ is the fuzzy control rule for the baseline linear velocity; $Q_1$ denotes the human-robot vertical-distance linguistic variable and $Q_2$ its rate-of-change linguistic variable; $Q_3$ denotes the human-robot horizontal-distance linguistic variable and $Q_4$ its rate-of-change linguistic variable; $v$ and $K$ denote the baseline linear velocity and steering-gain linguistic variables, respectively; their linguistic values in the corresponding universes are the fuzzy subsets $A_i$, $B_i$, $C_i$, $D_i$, $E_i$, $F_i$;
Table 1: fuzzy control rules for the baseline linear velocity
Table 2: fuzzy control rule table for the steering gain
the robot's linear velocity is adjusted by the baseline-linear-velocity fuzzy controller: when the human-robot vertical distance exceeds the safe distance, the system increases the robot's linear velocity to follow the target quickly; when the vertical distance is below the safe distance, the linear velocity is reduced to keep a safe human-robot distance; when the vertical distance is too small, the robot stops to prevent a collision; the steering gain is adjusted by the steering-gain fuzzy controller: when the human-robot horizontal distance is too large, the steering gain increases and the turning radius decreases, so the robot adjusts its heading quickly while traveling to keep the target centered in the field of view; the rules are shown in Table 1 and Table 2;
Step 5.3, defuzzification:
after the logical inference, the fuzzy output is converted back to a crisp value by the centroid method; for a fuzzy control system described by rules, stability can be analyzed through the relational matrix according to fuzzy set theory;
this completes one algorithm cycle; after the target location of frame t+1 is obtained, the steps above are repeated: the spatial context model and spatio-temporal context model of frame t+1 are updated in preparation for the target update of the next frame (frame t+2), robot control is performed according to the tracking result, and the robot collects the scene image of frame t+2 and continues the operations above; this continuous loop realizes real-time target tracking.
2. The robot target recognition and motion decision method based on a multi-feature spatio-temporal context according to claim 1, characterized in that:
Step 1, scene image segmentation:
the image is segmented with the simple linear iterative clustering algorithm; pixels are clustered using the image's color, texture, and distance similarity features, forming compact image blocks with clear boundaries;
Step 1.1, obtaining the local entropy;
Step 1.2, clustering the pixels;
Step 1.3, updating the cluster centers;
Step 2, image-block screening based on sparse representation:
to better handle occlusion and improve real-time performance, the image blocks are given a sparse representation, different blocks are assigned different weights, and the neighbor blocks of the target block are screened out as the context region;
Step 2.1, establishing the sparse equation of the image blocks and assigning weights;
Step 2.2, solving the sparse equation:
after the sparse representation of the image blocks is obtained, the sparse equation is solved to yield the sparse coefficients and weights;
Step 3, establishing the spatio-temporal context model:
the spatio-temporal context model based on the image blocks obtained above is established in preparation for the target tracking that follows;
Step 3.1, establishing the contextual features of the target-region cluster centers:
the contextual features of the target region are obtained with the target region known;
Step 3.2, establishing the context prior model of the image blocks:
the current context prior model is then obtained;
Step 3.3, obtaining the spatial context model based on image blocks:
from the target location and the context prior, the spatial context model is obtained under the Bayesian framework;
Step 3.4, spatio-temporal context update:
the spatio-temporal context model is obtained by the weighted update of the spatial context, preparing for target tracking in the next frame;
Step 4, target tracking:
the target-location confidence map is obtained from the spatio-temporal context model of the previous frame and the context prior model of the current frame, and the position of maximum confidence-map probability is taken as the target location; after the target location is obtained, the spatial context model of the current frame is updated, yielding the new spatio-temporal context model in preparation for target tracking in the next frame;
Step 4.1, building the confidence map;
Step 4.2, locating the target;
Step 4.3, scale update:
to adapt to changes in the target's shape and size during motion, a scale update is applied to the target region;
Step 5, robot motion decision:
the robot's motion decision is made from the target location obtained above, using a fuzzy control method;
Step 5.1, determining the membership functions and fuzzy sets for fuzzification;
Step 5.2, establishing the control rules;
Step 5.3, defuzzification.
CN201610491136.6A 2016-06-28 2016-06-28 Robot target recognition and motion decision method based on multi-feature spatio-temporal context Expired - Fee Related CN106127776B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610491136.6A CN106127776B (en) 2016-06-28 2016-06-28 Robot target recognition and motion decision method based on multi-feature spatio-temporal context

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610491136.6A CN106127776B (en) 2016-06-28 2016-06-28 Robot target recognition and motion decision method based on multi-feature spatio-temporal context

Publications (2)

Publication Number Publication Date
CN106127776A 2016-11-16
CN106127776B (en) 2019-05-03

Family

ID=57286062

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610491136.6A Expired - Fee Related CN106127776B (en) 2016-06-28 2016-06-28 Robot target recognition and motion decision method based on multi-feature spatio-temporal context

Country Status (1)

Country Link
CN (1) CN106127776B (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011090789A1 (en) * 2010-01-22 2011-07-28 Thomson Licensing Method and apparatus for video object segmentation
CN105631895A (en) * 2015-12-18 2016-06-01 重庆大学 Temporal-spatial context video target tracking method combining particle filtering

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhenhai Wang et al., "An effective object tracking based on spatio-temporal context learning and Hog," 2015 11th International Conference on Natural Computation (ICNC). *
吕枘蓬 et al., "Context target tracking algorithm based on the TLD framework," 《电视技术》 (Video Engineering). *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106952294A (en) * 2017-02-15 2017-07-14 北京工业大学 A kind of video tracing method based on RGB D data
CN106952294B (en) * 2017-02-15 2019-10-08 北京工业大学 A kind of video tracing method based on RGB-D data
CN106952290A (en) * 2017-04-07 2017-07-14 深圳大学 A kind of method and system that turning maneuvering target is tracked for three dimensions
CN106952290B (en) * 2017-04-07 2019-05-10 深圳大学 A kind of method and system tracking turning maneuvering target for three-dimensional space
CN107038431A (en) * 2017-05-09 2017-08-11 西北工业大学 Video target tracking method of taking photo by plane based on local sparse and spatio-temporal context information
CN107680119A (en) * 2017-09-05 2018-02-09 燕山大学 A kind of track algorithm based on space-time context fusion multiple features and scale filter
CN108229442B (en) * 2018-02-07 2022-03-11 西南科技大学 Method for rapidly and stably detecting human face in image sequence based on MS-KCF
CN108229442A (en) * 2018-02-07 2018-06-29 西南科技大学 Face fast and stable detection method in image sequence based on MS-KCF
CN108388879A (en) * 2018-03-15 2018-08-10 斑马网络技术有限公司 Mesh object detection method, device and storage medium
CN108388879B (en) * 2018-03-15 2022-04-15 斑马网络技术有限公司 Target detection method, device and storage medium
CN109325426A (en) * 2018-09-03 2019-02-12 东南大学 A kind of black smoke vehicle detection method based on three orthogonal plane space-time characteristics
CN109325426B (en) * 2018-09-03 2021-11-02 东南大学 Black smoke vehicle detection method based on three orthogonal planes time-space characteristics
CN110348492A (en) * 2019-06-24 2019-10-18 昆明理工大学 A kind of correlation filtering method for tracking target based on contextual information and multiple features fusion
CN110264577B (en) * 2019-06-26 2020-04-17 中国人民解放***箭军工程大学 Real-time collision detection method based on space-time correlation tracking strategy
CN110264577A (en) * 2019-06-26 2019-09-20 中国人民解放***箭军工程大学 A kind of collision real-time detection method based on temporal and spatial correlations tracking strategy
CN110580479A (en) * 2019-08-27 2019-12-17 天津大学 Electronic speckle interference fringe pattern binarization method based on entropy and clustering algorithm
CN110570418A (en) * 2019-09-12 2019-12-13 广东工业大学 Woven label defect detection method and device
CN110570418B (en) * 2019-09-12 2022-01-11 广东工业大学 Woven label defect detection method and device
CN112613565A (en) * 2020-12-25 2021-04-06 电子科技大学 Anti-occlusion tracking method based on multi-feature fusion and adaptive learning rate updating
CN112613565B (en) * 2020-12-25 2022-04-19 电子科技大学 Anti-occlusion tracking method based on multi-feature fusion and adaptive learning rate updating
CN112907630A (en) * 2021-02-06 2021-06-04 洛阳热感科技有限公司 Real-time tracking method based on mean shift prediction and space-time context information
CN114387272A (en) * 2022-03-23 2022-04-22 武汉富隆电气有限公司 Cable bridge defective product detection method based on image processing
CN114387272B (en) * 2022-03-23 2022-05-24 武汉富隆电气有限公司 Cable bridge defective product detection method based on image processing
CN117152213A (en) * 2023-09-14 2023-12-01 西南科技大学 Fuzzy target detection and tracking method and system
CN117152213B (en) * 2023-09-14 2024-07-16 西南科技大学 Fuzzy target detection and tracking method and system

Also Published As

Publication number Publication date
CN106127776B (en) 2019-05-03

Similar Documents

Publication Publication Date Title
CN106127776B (en) Robot target recognition and motion decision method based on multi-feature spatio-temporal context
CN111428765B Target detection method based on fusing global convolution and local depthwise convolution
CN109800689B Target tracking method based on spatio-temporal feature fusion learning
CN105184309B Polarimetric SAR image classification based on CNN and SVM
CN113486764B Pothole detection method based on improved YOLOv3
CN103927531B Face recognition method based on local binary patterns and a BP neural network optimized by particle swarm optimization
CN104182772A Gesture recognition method based on deep learning
CN110929578A Anti-occlusion pedestrian detection method based on an attention mechanism
CN108985269A Fused-network driving-environment perception model based on convolutional and dilated-convolution structures
CN104217214A Configurable convolutional-neural-network-based RGB-D image behavior recognition method
CN103942557B Image preprocessing method for underground coal mines
CN105513093B Target tracking method based on low-rank matrix representation
CN109934158A Video emotion recognition method based on locally enhanced motion history maps and recurrent convolutional neural networks
CN108182447A Adaptive particle-filter target tracking method based on deep learning
CN105005789B Remote-sensing image terrain classification method based on visual vocabulary
CN103473542A Multi-cue fusion target tracking method
CN105405136A Adaptive spine CT image segmentation method based on particle swarm optimization
CN109325502A Shared-bicycle parking detection method and system based on progressive region extraction from video
CN106338733A Forward-looking sonar target tracking method based on frog-eye visual characteristics
CN109799829B Robot-group cooperative active sensing method based on self-organizing maps
CN110322075A Scenic-spot passenger flow forecasting method and system based on a hybrid-optimized RBF neural network
CN107610159A Infrared small-target tracking based on curvature filtering and spatio-temporal context
Majeed et al. Uncertain fuzzy self-organization based clustering: interval type-2 fuzzy approach to adaptive resonance theory
CN104036238B Human-eye localization method based on active light
CN103985139B Particle-filter target tracking method based on fusing color-model and prediction-vector cluster-model information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190503