CN109196556A - Obstacle avoidance method, device and movable platform - Google Patents

Obstacle avoidance method, device and movable platform

Info

Publication number
CN109196556A
CN109196556A (application CN201780029125.9A)
Authority
CN
China
Prior art keywords
feature point
optical flow
moving object
movable platform
flow vectors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201780029125.9A
Other languages
Chinese (zh)
Inventor
周游
杜劼熹
刘洁
Current Assignee
SZ DJI Technology Co Ltd
Shenzhen Dajiang Innovations Technology Co Ltd
Original Assignee
Shenzhen Dajiang Innovations Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Dajiang Innovations Technology Co Ltd
Publication of CN109196556A

Classifications

    • G05D1/0253: Control of position or course in two dimensions, specially adapted to land vehicles, using optical position detecting means (video camera with image processing means, extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow)
    • G05D1/101: Simultaneous control of position or course in three dimensions, specially adapted for aircraft
    • G05D1/106: Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones
    • G06F18/23213: Non-hierarchical clustering techniques using statistics or function optimisation, with fixed number of clusters, e.g. K-means clustering
    • G06T7/20: Analysis of motion
    • G06T7/248: Analysis of motion using feature-based methods, involving reference images or patches
    • G06T7/269: Analysis of motion using gradient-based methods
    • G06T7/50: Depth or shape recovery
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G08G1/166: Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G08G5/0013: Transmission of traffic-related information to or from an aircraft with a ground station
    • G08G5/0021: Arrangements for implementing traffic-related aircraft activities, located in the aircraft
    • G08G5/0026: Arrangements for implementing traffic-related aircraft activities, located on the ground
    • G08G5/0069: Navigation or guidance aids specially adapted for an unmanned aircraft
    • G08G5/0078: Surveillance aids for monitoring traffic from the aircraft
    • G08G5/04: Anti-collision systems for aircraft
    • G08G5/045: Navigation or guidance aids, e.g. determination of anti-collision manoeuvres
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G06T2207/30244: Camera pose
    • G06T2207/30261: Obstacle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Electromagnetism (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention provide an obstacle avoidance method, a device and a movable platform. The method comprises: obtaining a depth map captured by a shooting device carried by the movable platform; identifying a moving object based on the depth map; determining the motion velocity vector of the moving object; and, based on the motion velocity vector of the moving object, determining the region where the moving object and the movable platform may collide, so as to control the movable platform to execute an obstacle avoidance strategy in the region where a collision may occur. Embodiments of the present invention can improve the safety of movable-platform motion and the user experience.

Description

Obstacle avoidance method, device and movable platform
Technical field
This application relates to the technical field of aerial vehicles, and in particular to an obstacle avoidance method, a device and a movable platform.
Background technique
As UAVs become more and more popular, more people are joining the ranks of UAV users. However, for users who have never operated a UAV before, piloting is a challenge, and even slight carelessness can easily cause a crash or a collision. Such users therefore need assisted-driving means to help them avoid obstacles.
Summary of the invention
Embodiments of the present invention provide an obstacle avoidance method, a device and a movable platform, so as to realize obstacle avoidance based on moving-object detection and to improve the safety of movable-platform motion and the user experience.
The first aspect of the embodiments of the present invention provides an obstacle avoidance method, comprising:
obtaining a depth map captured by a shooting device carried by a movable platform;
identifying a moving object based on the depth map;
determining the motion velocity vector of the moving object;
and, based on the motion velocity vector of the moving object, determining the region where the moving object and the movable platform may collide, so as to control the movable platform to execute an obstacle avoidance strategy in the region where a collision may occur.
The second aspect of the embodiments of the present invention provides an obstacle avoidance device. The obstacle avoidance device is arranged on a movable platform and comprises a processor and a shooting device, the processor and the shooting device being communicatively connected;
the shooting device is configured to capture a depth map;
the processor is configured to: obtain the depth map captured by the shooting device, identify a moving object based on the depth map, determine the motion velocity vector of the moving object, and, based on the motion velocity vector of the moving object, determine the region where the moving object and the movable platform may collide, so as to control the movable platform to execute an obstacle avoidance strategy in the region where a collision may occur.
The third aspect of the embodiments of the present invention provides a movable platform, comprising:
a fuselage;
a power system, mounted on the fuselage, for providing power for the movable platform;
and the obstacle avoidance device described in the second aspect above.
The fourth aspect of the embodiments of the present invention provides an obstacle avoidance device. The obstacle avoidance device is arranged in a ground station and comprises a processor and a communication interface, the processor and the communication interface being communicatively connected;
the communication interface is configured to obtain a depth map captured by a shooting device carried by a movable platform;
the processor is configured to: identify a moving object based on the depth map, determine the motion velocity vector of the moving object, and, based on the motion velocity vector of the moving object, determine the region where the moving object and the movable platform may collide, so as to control the movable platform to execute an obstacle avoidance strategy in the region where a collision may occur.
In the obstacle avoidance method, device and movable platform provided by the embodiments of the present invention, a depth map captured by a shooting device carried by the movable platform is obtained, a moving object is identified based on the depth map, the motion velocity vector of the moving object is determined, and, based on that vector, the region where the moving object and the movable platform may collide is determined, so that the movable platform is controlled to execute an obstacle avoidance strategy in that region. Since the embodiments of the present invention can identify a moving object and determine, from its motion velocity vector, the region where the moving object and the movable platform may collide, the movable platform can evade the moving object. In particular, when the movable platform is a car or another movable object travelling on the ground, or a UAV flying near the ground, the influence of moving objects on the movable platform can be effectively excluded, improving the safety of movable-platform motion and the user experience.
Detailed description of the invention
Fig. 1 is a flowchart of an obstacle avoidance method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a method for obtaining the optical flow vectors of target feature points provided by an embodiment of the present invention;
Fig. 3 is a flowchart of a method for identifying a moving object provided by an embodiment of the present invention;
Fig. 4a is a schematic diagram of the optical flow vectors of target feature points provided by an embodiment of the present invention;
Fig. 4b is a schematic diagram of the result of clustering the optical flow vectors of Fig. 4a;
Fig. 5 is a schematic structural diagram of an obstacle avoidance device 10 provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an obstacle avoidance device 30 provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
It should be noted that when a component is referred to as being "fixed to" another component, it can be directly on the other component, or an intermediate component may also be present. When a component is considered to be "connected to" another component, it can be directly connected to the other component, or an intermediate component may be present at the same time.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present invention. The terms used in the specification of the present invention are intended merely to describe specific embodiments, not to limit the present invention. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. In the absence of conflict, the following embodiments and the features in them can be combined with each other.
In the prior art, movable platforms such as cars and UAVs generally perform obstacle avoidance in one of the following ways:
One way is to use ultrasonic or infrared ranging modules. This approach is fairly simple: the platform brakes as soon as an object is detected. It therefore suffers from a large amount of measurement noise, easily causes false braking, and its measurement range is short, so it cannot be used at all for a fast-moving platform.
Another way is to compute a depth map from images captured by a vision camera using machine-vision algorithms, or to obtain a depth map using modules such as RGB-D or TOF sensors, and then implement the obstacle avoidance strategy from the depth map. However, current depth-map-based computation makes a static assumption: every object appearing in the image is assumed to be stationary with respect to the earth. For a movable platform flying at high altitude (such as, but not limited to, a UAV), this assumption generally holds; for a movable platform moving near the ground, however, it is invalid, and vehicles or pedestrians cannot be effectively avoided.
Movable platforms with automatic driving (such as, but not limited to, self-driving cars) generally execute the obstacle avoidance strategy based on lidar, which is expensive and heavy.
In view of the above technical problems in the prior art, the embodiments of the present invention provide an obstacle avoidance method: a depth map captured by a shooting device carried by a movable platform is obtained, a moving object is identified based on the depth map, the motion velocity vector of the moving object is determined, and, based on that vector, the region where the moving object and the movable platform may collide is determined, so as to control the movable platform to execute an obstacle avoidance strategy in the region where a collision may occur. This embodiment can identify moving objects based on a depth map and control the movable platform to avoid them. In particular, when the movable platform is a car or another movable object travelling on the ground, or a UAV flying near the ground, moving objects can be effectively avoided, improving the safety of movable-platform motion and the user experience.
Since the embodiments of the present invention involve the processing of depth maps, to facilitate understanding of the technical solution, some technical parameters of the shooting device are first explained below, taking a camera as an example:
Camera model:
s·[u v 1]^T = K·[R | T]·[x_w y_w z_w 1]^T
wherein R is the rotation matrix of the camera and T its translation vector;
[u v 1]^T denotes a two-dimensional (2D) point in pixel coordinates;
[x_w y_w z_w 1]^T denotes a three-dimensional (3D) point in world coordinates;
the matrix K is called the camera calibration matrix, i.e. the intrinsic parameters of the camera.
For a finite projective camera, K contains 5 intrinsic parameters:
K = [α_x γ u_0; 0 α_y v_0; 0 0 1]
wherein α_x = f·m_x and α_y = f·m_y, f is the focal length, and m_x and m_y are the numbers of pixels per unit distance (scale factors) in the x and y directions. γ is the skew parameter between the x-axis and the y-axis, which is non-zero when the CCD pixels are not square. (u_0, v_0) is the position of the principal point (optical center).
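A minimal numeric sketch of the pinhole model above; all values are invented for illustration, and `project` is a hypothetical helper, not part of the patent:

```python
import numpy as np

# Sketch of the pinhole projection s*[u, v, 1]^T = K [R | T] p_w.
def project(K, R, T, p_w):
    """Project a 3D world point p_w to pixel coordinates (u, v)."""
    p_c = R @ p_w + T        # world frame -> camera frame
    uvs = K @ p_c            # camera frame -> homogeneous pixel coords
    return uvs[:2] / uvs[2]  # divide by the scale s (the depth)

K = np.array([[500.0,   0.0, 320.0],   # alpha_x, gamma, u_0
              [  0.0, 500.0, 240.0],   # alpha_y, v_0
              [  0.0,   0.0,   1.0]])
R = np.eye(3)      # camera aligned with the world axes
T = np.zeros(3)    # camera at the world origin

u, v = project(K, R, T, np.array([0.0, 0.0, 2.0]))
```

A point on the optical axis projects exactly onto the principal point (u_0, v_0) = (320, 240), regardless of its depth.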
Specifically, Fig. 1 is a flowchart of an obstacle avoidance method provided by an embodiment of the present invention. The method can be executed by an obstacle avoidance device, which may be mounted on a movable platform or installed in a ground station. As shown in Fig. 1, the method comprises:
Step 101: obtain a depth map captured by the shooting device carried by the movable platform.
Optionally, the movable platform involved in this embodiment includes, but is not limited to, any one of the following: a UAV, a car, VR glasses, AR glasses.
The shooting device involved in this embodiment includes at least one camera.
Exemplarily, this embodiment can use two cameras separated by a preset distance to capture two images of the same scene simultaneously, and process the two images with a stereo matching algorithm to obtain the depth map. Of course, this is only an illustration rather than the sole limitation of the present invention; in fact, any depth-map acquisition method in the prior art can be used in this embodiment, and this embodiment does not specifically limit it.
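For a rectified stereo pair, the depth map follows Z = f·B/d (focal length f in pixels, baseline B, disparity d). A hedged sketch with made-up numbers, omitting the stereo matcher that would produce the disparity map in practice:

```python
import numpy as np

# Convert a disparity map to a depth map via Z = f * B / d.
# The focal length, baseline and disparity values below are illustrative.
def disparity_to_depth(disparity, focal_px, baseline_m):
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)   # zero disparity -> infinitely far
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

disp = np.array([[10.0, 20.0],
                 [ 0.0, 40.0]])               # toy 2x2 disparity map (px)
depth = disparity_to_depth(disp, focal_px=500.0, baseline_m=0.1)
# top-left pixel: 500 * 0.1 / 10 = 5.0 m
```

Note the inverse relation: nearer objects have larger disparity, so depth resolution degrades quadratically with distance.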
Step 102: identify a moving object based on the depth map.
Optionally, this embodiment can obtain the optical flow vectors of target feature points on the depth map, and identify the moving object based on the optical flow vectors of the target feature points.
Specifically, in this embodiment the method for obtaining the optical flow vectors of target feature points includes at least the following:
In one possible implementation, since most objects in a typical shooting environment are static and only a few objects are moving, the feature points on static objects account for the vast majority of all feature points on the captured depth map, while the feature points on moving objects account for only a very small fraction; moreover, the direction of the optical flow vectors of the feature points on a moving object differs from that of the feature points on most static objects. Therefore, by clustering the optical flow vectors of the feature points on the depth map, the optical flow vectors of that very small fraction of feature points can be obtained and used as the optical flow vectors of the target feature points. The specific execution can refer to the prior art and is not repeated here.
In another possible implementation, a visual odometry (VO) algorithm carried on the movable platform can be used to screen out, from the depth map captured by the shooting device carried by the movable platform, the optical flow vectors of the feature points other than those on static objects (i.e. the optical flow vectors of the target feature points). Specifically, Fig. 2 is a flowchart of a method for obtaining the optical flow vectors of target feature points provided by an embodiment of the present invention. As shown in Fig. 2, the method comprises:
Step 1011: based on a preset corner detection algorithm, extract feature points from the depth map captured by the shooting device carried by the movable platform.
Here, to reduce the amount of computation, this embodiment first extracts feature points from the captured depth map using a sparse-matrix approach. Without loss of generality, this embodiment selects corners on the depth map as the feature points. Optional corner detection algorithms include, but are not limited to: FAST (features from accelerated segment test), SUSAN and the Harris corner detection algorithm. This embodiment takes the Harris corner detection algorithm as an example:
First, define the matrix A as the structure tensor of the depth map:
A = Σ_w [I_x² I_x·I_y; I_x·I_y I_y²]
wherein I_x and I_y are the gradients of a point on the depth map in the x and y directions, summed over a window w around the point. Based on the matrix A, the corner response M_c of a point on the depth map can then be obtained:
M_c = λ_1·λ_2 − k·(λ_1 + λ_2)² = det(A) − k·trace²(A)
wherein det(A) is the determinant of the matrix A, trace(A) is the trace of the matrix A, λ_1 and λ_2 are the eigenvalues of A, and k is a tunable sensitivity parameter. A threshold M_th is set; when M_c > M_th, the point is determined to be a feature point.
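The Harris response above can be sketched on a synthetic image as follows; the 3x3 summation window and k = 0.04 are conventional choices, not values specified in the patent:

```python
import numpy as np

# Harris response M_c = det(A) - k * trace(A)^2 on a tiny synthetic image
# containing one bright square, whose corners should respond strongly.
def harris_response(img, k=0.04):
    img = img.astype(float)
    Iy, Ix = np.gradient(img)               # per-pixel gradients (rows, cols)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # Sum over the 3x3 neighborhood of each pixel (structure-tensor window).
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy             # det(A)
    trace = Sxx + Syy                       # trace(A)
    return det - k * trace ** 2             # M_c

img = np.zeros((12, 12))
img[4:9, 4:9] = 1.0                         # bright square
Mc = harris_response(img)
```

The response is strongly positive near the square's corners (e.g. around pixel (4, 4)), negative along its edges, and zero in flat regions, which is exactly the behavior the M_c > M_th threshold exploits.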
Step 1012: determine the optical flow vector of each feature point by tracking its relative position across two frames of images.
Optionally, this embodiment can track the feature points obtained above across two or more frames of images, and determine the optical flow vector of each feature point from the offset of its relative position between those frames. Taking two frames as an example, after the relative positions of a feature point in the two frames have been tracked, this embodiment can obtain the offset h of the feature point between the two frames by iterating the following expression to convergence:
h ≈ Σ_x F′(x)·[G(x) − F(x)] / Σ_x F′(x)²
wherein F(x) denotes one frame and G(x) the other.
Here, for each feature point, this embodiment can perform offset detection twice. First, let the later frame be F(x) and the earlier frame be G(x), and iterate with the above formula to obtain a first offset h of the feature point's position in the later frame relative to its position in the earlier frame; then, conversely, let the earlier frame be F(x) and the later frame be G(x), and obtain a second offset h′ of the feature point's position in the earlier frame relative to its position in the later frame. If the relationship between the first offset and the second offset satisfies a preset first prior condition, the optical flow vector of the feature point is determined to be h. The first prior condition may specifically be h = −h′, but is not limited to that; in fact, the first prior condition may also be h = −h′ + a, wherein a is a constant representing a preset error.
Of course, the above example is only an illustration rather than the sole limitation of the present invention; in fact, only one offset may be determined for each feature point, and the determination method can refer to the above example and is not repeated here.
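The forward-backward check described above (keep a feature only when h ≈ −h′) can be sketched as follows; the tolerance is an assumed constant standing in for the preset error a:

```python
# A feature's forward offset h (frame 1 -> frame 2) is kept only if it is
# close to the negation of the backward offset h' (frame 2 -> frame 1).
# The tolerance of 0.5 px is an illustrative assumption.
def consistent(h_forward, h_backward, tol=0.5):
    dx = h_forward[0] + h_backward[0]   # ~0 when h = -h'
    dy = h_forward[1] + h_backward[1]
    return (dx * dx + dy * dy) ** 0.5 <= tol

flows = [((3.0, 1.0), (-3.1, -0.9)),    # consistent track -> keep
         ((3.0, 1.0), (0.2, 4.0))]      # inconsistent track -> reject
kept = [f for f, b in flows if consistent(f, b)]
```

Tracks that fail the check typically come from occlusions or mistracked points, so discarding them cleans up the flow field before the later clustering step.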
Step 1013: based on the three-dimensional coordinates of the feature points in the world coordinate system in the two frames of images, screen out the optical flow vectors of the feature points on static objects from the optical flow vectors of the feature points, obtaining the optical flow vectors of the target feature points.
Optionally, in one possible implementation, the position and attitude of the shooting device when capturing the two frames (the camera pose) can be determined based on the three-dimensional coordinates of the feature points in the world coordinate system in the two frames, and then, based on that camera pose, the random sample consensus algorithm (RANSAC) can be used to obtain the optical flow vectors of the feature points in the depth map other than those on static objects. The specific relation is as follows:
s·p_c = K·[R | T]·p_w
wherein R is the rotation matrix taking the camera pose at image capture as prior, p_c is the two-dimensional coordinate of the feature point on the depth map, and p_w is the three-dimensional coordinate of the feature point in the world coordinate system.
In another possible implementation, the essential matrix corresponding to the three-dimensional coordinates of the feature points in the world coordinate system in the two frames can be determined according to a preset second prior condition, and then, based on the computed essential matrix, the random sample consensus algorithm (RANSAC) can be used to obtain the optical flow vectors of the feature points in the depth map other than those on static objects.
As an example, assume that the coordinates of a feature point corresponding to the two frames are y and y′. Then the essential matrix E can be determined according to the following second prior condition:
(y′)^T·E·y = 0
Further, through the random sample consensus algorithm (RANSAC), the optical flow vectors of the feature points in the depth map other than those on static objects, i.e. the optical flow vectors of the target feature points, can be obtained. The principle and execution of the random sample consensus algorithm can refer to the prior art and are not repeated here.
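How RANSAC can use the constraint (y′)^T·E·y = 0 to separate static-scene matches from moving-object matches can be sketched as follows. The essential matrix here is a made-up example for a pure sideways camera translation (E = [t]_x with t = (1, 0, 0) and R = I), and the inlier-threshold step of full RANSAC is left implicit:

```python
import numpy as np

# Essential matrix for translation t = (1, 0, 0) with no rotation:
# E = [t]_x, the cross-product (skew-symmetric) matrix of t.
E = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

def epipolar_residual(y1, y2, E):
    """|y2^T E y1| for homogeneous normalized image points y1, y2."""
    return abs(y2 @ E @ y1)

# A static point only slides horizontally between frames -> residual ~0.
static_pair = (np.array([0.2, 0.1, 1.0]), np.array([0.3, 0.1, 1.0]))
# A point that also moved vertically violates the constraint.
moving_pair = (np.array([0.2, 0.1, 1.0]), np.array([0.3, 0.4, 1.0]))

r_static = epipolar_residual(*static_pair, E)
r_moving = epipolar_residual(*moving_pair, E)
```

A RANSAC loop fitting E to the majority of matches would label the first pair an inlier (static background) and the second an outlier, which is precisely how the moving-object feature points are screened out.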
Further, after the optical flow vectors of the target feature points are obtained, the moving object is identified based on the optical flow vectors of the target feature points, the depth information of the target feature points and the visual information of the target feature points, wherein the visual information includes color and/or light intensity.
Specifically, Fig. 3 is a flowchart of a method for identifying a moving object provided by an embodiment of the present invention. As shown in Fig. 3, the moving object can be identified according to the following steps:
Step 1021: perform clustering processing on the obtained optical flow vectors of the target feature points, obtaining at least one group of optical flow vectors, wherein, within a group, the direction deviation between optical flow vectors is less than a first preset threshold and the length difference between optical flow vectors is less than a second preset threshold.
For example, Fig. 4a is a schematic diagram of the optical flow vectors of the target feature points provided by an embodiment of the present invention. The optical flow vectors in Fig. 4a can be expressed with the Lucas-Kanade algorithm, under which the flow [Vx Vy]^T satisfies, for every point qi:

Ix(qi)·Vx + Iy(qi)·Vy = −It(qi)

where qi is a point in the neighborhood of point P on the depth map. The size of the neighborhood of P can be set as needed and is not limited in this embodiment; for example, when the neighborhood of P is a 5x5 window, the neighborhood of P contains 25 points. The optical flow vector corresponding to qi is [Vx Vy]^T; Ix and Iy are the gradients of qi on the depth map in the x direction and the y direction, respectively; and It is the light-intensity change of qi across the two frames of images.
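To make the Lucas-Kanade formulation concrete, the sketch below stacks one brightness-constancy equation per neighborhood point and solves the resulting 2x2 normal equations for [Vx Vy]^T. The gradient values are synthetic and purely illustrative; they stand in for the Ix, Iy, It measured over the 5x5 neighborhood of P.

```python
# Sketch of the Lucas-Kanade solve over a neighborhood of point P:
# each neighbor q_i contributes one equation Ix*Vx + Iy*Vy = -It.
# Stacking them gives A v = b, solved via the 2x2 normal equations
# (A^T A) v = A^T b. All gradient values here are synthetic.

def lucas_kanade(Ix, Iy, It):
    sxx = sum(ix * ix for ix in Ix)
    sxy = sum(ix * iy for ix, iy in zip(Ix, Iy))
    syy = sum(iy * iy for iy in Iy)
    sxb = sum(-ix * it for ix, it in zip(Ix, It))
    syb = sum(-iy * it for iy, it in zip(Iy, It))
    det = sxx * syy - sxy * sxy  # nonzero for a textured neighborhood
    vx = (syy * sxb - sxy * syb) / det
    vy = (sxx * syb - sxy * sxb) / det
    return vx, vy

# Synthetic 5x5 neighborhood (25 points) with a known true flow (1.5, -0.5):
# choose gradients, then set It = -(Ix*vx + Iy*vy) so the model holds exactly.
true_vx, true_vy = 1.5, -0.5
Ix = [0.2 * i - 2.0 for i in range(25)]
Iy = [((-1) ** i) * (0.1 * i + 0.5) for i in range(25)]
It = [-(ix * true_vx + iy * true_vy) for ix, iy in zip(Ix, Iy)]

vx, vy = lucas_kanade(Ix, Iy, It)
print(round(vx, 6), round(vy, 6))  # -> 1.5 -0.5
```

Since the synthetic data satisfies the model exactly and the normal-equation matrix is invertible, the least-squares solve recovers the true flow; with real image gradients the same solve returns the best-fit flow for the window.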
Further, on the basis of the above optical flow vector representation, this example uses a clustering algorithm from unsupervised machine learning (Unsupervised ML), such as, but not limited to, the K-Means++ algorithm, to cluster the optical flow vectors of the target feature points. Specifically, the position [u v]^T of each target feature point on the depth map, the color and/or light intensity of the target feature point, and the optical flow vector [Vx Vy]^T are used as the clustering basis, yielding at least one optical flow vector group as shown in Fig. 4b. A clustered optical flow vector can be expressed as follows:

[u v Vx Vy I(u,v,t)]^T
Of course, the above is merely an illustration and not the sole limitation of the present invention.
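As an illustrative alternative to K-Means++, the grouping criterion of step 1021 can also be applied directly: the sketch below (with invented thresholds and sample vectors) greedily assigns each optical flow vector to the first group whose representative differs by less than a direction threshold and a length threshold, which is exactly the within-group condition stated above.

```python
# Sketch of step 1021's grouping rule applied greedily: two optical flow
# vectors belong to the same group when their direction deviation is below
# a first threshold and their length difference is below a second threshold.
# Thresholds and the sample vectors are illustrative assumptions.
import math

def group_flows(vectors, max_angle=0.2, max_len_diff=0.5):
    groups = []  # each group is a list of (vx, vy) vectors
    for vx, vy in vectors:
        ang, length = math.atan2(vy, vx), math.hypot(vx, vy)
        for g in groups:
            gx, gy = g[0]  # compare against the group's first member
            da = abs(math.atan2(gy, gx) - ang)
            da = min(da, 2 * math.pi - da)  # angle wrap-around
            if da < max_angle and abs(math.hypot(gx, gy) - length) < max_len_diff:
                g.append((vx, vy))
                break
        else:
            groups.append([(vx, vy)])
    return groups

# Flow vectors from two coherently moving regions.
flows = [(2.0, 0.1), (2.1, 0.0), (1.9, 0.2),    # rightward motion
         (-0.1, 3.0), (0.0, 2.9), (0.1, 3.1)]   # upward motion
groups = group_flows(flows)
print([len(g) for g in groups])  # -> [3, 3]
```

A real implementation would more likely cluster the full [u v Vx Vy I]^T vectors with K-Means++ as the text suggests; the greedy version simply makes the two threshold conditions explicit.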
Step 1022: identify a moving object from the at least one optical flow vector group based on the depth information and the visual information of each feature point among the target feature points.
Optionally, this embodiment uses the depth information of each of the obtained target feature points, together with the color and/or light intensity of each feature point, and applies a seed filling (flood fill) algorithm to identify a moving object from the at least one optical flow vector group obtained above. For the specific execution, reference may be made to the existing flood fill algorithm, which is not repeated here.
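The seed-filling step can be sketched as follows (the 5x5 grids and tolerances are purely illustrative): starting from a seed pixel inside a clustered flow group, the fill grows across 4-connected neighbors whose depth and intensity stay close to the seed's, so a coherent moving object is segmented as one connected region.

```python
# Sketch of seed filling (flood fill) over depth + intensity, as used to
# confirm that a clustered flow group corresponds to one connected object.
# The depth/intensity grids and tolerances are illustrative assumptions.
from collections import deque

def flood_fill(depth, intensity, seed, d_tol=0.5, i_tol=10.0):
    h, w = len(depth), len(depth[0])
    sr, sc = seed
    d0, i0 = depth[sr][sc], intensity[sr][sc]
    seen, queue = {(sr, sc)}, deque([(sr, sc)])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in seen \
               and abs(depth[nr][nc] - d0) < d_tol \
               and abs(intensity[nr][nc] - i0) < i_tol:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

# A near object (depth ~2) occupies the top-left 2x3 block of a far scene.
depth = [[2.0, 2.1, 2.0, 9.0, 9.0],
         [2.1, 2.0, 2.1, 9.0, 9.0],
         [9.0, 9.0, 9.0, 9.0, 9.0],
         [9.0, 9.0, 9.0, 9.0, 9.0],
         [9.0, 9.0, 9.0, 9.0, 9.0]]
intensity = [[100, 101, 99, 30, 30],
             [101, 100, 100, 30, 30],
             [30, 30, 30, 30, 30],
             [30, 30, 30, 30, 30],
             [30, 30, 30, 30, 30]]
region = flood_fill(depth, intensity, seed=(0, 0))
print(len(region))  # -> 6
```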
Step 103: determine the movement velocity vector of the moving object.
For example, in one possible implementation, the movement velocity vector of the moving object can be determined based on the three-dimensional coordinates of the moving object under the earth coordinate system in a preset number of frames of images. For the specific implementation, reference may be made to the prior art, which is not repeated in this implementation.
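One simple, hedged way to realize this is a finite-difference estimate: given the object's earth-frame 3-D coordinates over the preset number of frames and the frame interval, divide the net displacement by the elapsed time. The track and the frame interval below are invented for illustration.

```python
# Sketch: estimate the moving object's velocity vector from its earth-frame
# 3-D coordinates tracked over a preset number of frames. The positions and
# the frame interval dt are illustrative assumptions.

def velocity_from_track(positions, dt):
    """Net displacement over the track divided by the elapsed time."""
    steps = len(positions) - 1
    return tuple((positions[-1][k] - positions[0][k]) / (steps * dt)
                 for k in range(3))

# Object tracked over 5 frames at 20 Hz, moving at (2, -1, 0) m/s.
dt = 0.05
track = [(0.0, 0.0, 1.0),
         (0.1, -0.05, 1.0),
         (0.2, -0.1, 1.0),
         (0.3, -0.15, 1.0),
         (0.4, -0.2, 1.0)]
vx, vy, vz = velocity_from_track(track, dt)
print(vx, vy, vz)
```

With noisy tracks, a least-squares line fit over the positions would be the more robust variant of the same idea.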
Step 104: based on the movement velocity vector of the moving object, determine the region where the moving object and the movable platform may collide, so as to control the movable platform to execute an obstacle avoidance strategy in the region where a collision may occur.
Optionally, this embodiment can determine the motion track of the movable platform based on the movement velocity vector of the moving object and the movement velocity vector of the movable platform using the following expression, and determine, based on the motion track of the movable platform, the region where the moving object and the movable platform may collide:
Optionally, after the motion track of the movable platform is obtained, the embodiment of the present invention can project the motion track of the movable platform onto the depth map and, according to the motion track of the moving object, determine on the depth map the region where the moving object and the movable platform may collide; then, based on the coordinate information of each region on the depth map under the earth coordinate system, the three-dimensional coordinates, under the earth coordinate system, of the region where the moving object and the movable platform may collide are determined, so as to control the movable platform to execute a preset obstacle avoidance strategy at those three-dimensional coordinates. For example, after the region where a collision may occur is determined, the movable platform can be controlled to move in the direction opposite to the current moving direction; or the motion track of the movable platform can be adjusted so that the movable platform bypasses the region where a collision may occur; or the movable platform can be controlled to stop moving for a preset time period, thereby achieving the purpose of obstacle avoidance. Of course, this is merely an illustration and not the sole limitation on the obstacle avoidance strategy in the present invention; in practice, a corresponding obstacle avoidance strategy can be set according to specific needs.
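The collision-region step can be sketched as a constant-velocity rollout (all positions, velocities, horizon, safety radius, and time step below are illustrative assumptions): both tracks are propagated over a short horizon, and any predicted time at which the platform-object distance falls below the safety radius marks the region where a collision may occur, where the avoidance strategy would then be triggered.

```python
# Sketch: extrapolate the platform track and the moving object track under
# a constant-velocity model and report the predicted platform positions at
# which their distance falls below a safety radius. All numbers here are
# illustrative assumptions, not values from the patent.

def collision_region(p_plat, v_plat, p_obj, v_obj, radius, horizon, dt):
    region = []
    steps = int(horizon / dt)
    for i in range(steps + 1):
        t = i * dt
        plat = [p + v * t for p, v in zip(p_plat, v_plat)]
        obj = [p + v * t for p, v in zip(p_obj, v_obj)]
        dist = sum((a - b) ** 2 for a, b in zip(plat, obj)) ** 0.5
        if dist < radius:
            region.append((t, tuple(plat)))
    return region

# Head-on geometry: platform flies +x at 5 m/s, object flies -x at 5 m/s
# starting 20 m ahead, so the closest approach is near t = 2 s.
region = collision_region(p_plat=(0.0, 0.0, 10.0), v_plat=(5.0, 0.0, 0.0),
                          p_obj=(20.0, 0.0, 10.0), v_obj=(-5.0, 0.0, 0.0),
                          radius=2.5, horizon=4.0, dt=0.1)
print(len(region))  # -> 5
```

In the embodiment the risky positions would then be mapped onto the depth map and converted to earth-frame coordinates, where the chosen avoidance strategy (reverse, detour, or stop) is executed.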
Optionally, in order to increase interaction with the user and improve the user experience, this embodiment can also display the motion track of the movable platform, or display the region where a collision may occur to the user, so that the user can take avoidance measures in time.
In the obstacle avoidance method, apparatus, and movable platform provided by this embodiment, the optical flow vectors of the target feature points on the depth map shot by the shooting device carried by the movable platform, and the movement velocity vector of the movable platform, are obtained; a moving object is identified from the depth map based on the obtained optical flow vectors of the target feature points together with the depth information and visual information of the target feature points; the movement velocity vector of the moving object is determined by tracking the three-dimensional coordinates of the moving object under the earth coordinate system in a preset number of frames of images; and, based on the movement velocity vector of the moving object and the movement velocity vector of the movable platform, the region where the moving object and the movable platform may collide is determined, so that the movable platform is controlled to execute an obstacle avoidance strategy in the region where a collision may occur. Since the embodiment of the present invention can identify a dynamic object and determine, according to the movement velocity vector of the dynamic object, the region where the moving object and the movable platform may collide, the movable platform is able to avoid the moving object. Especially when the movable platform is an automobile or another movable object moving on the ground, or an unmanned aerial vehicle flying near the ground, the influence of moving objects on the movable platform can be effectively excluded, improving the safety of the movement of the movable platform and the user experience. In addition, since the intermediate results of the visual odometry (VO) algorithm in an existing movable platform can generate the optical flow vectors of the target feature points required by this embodiment, this embodiment can directly acquire the intermediate results of the existing visual odometry (VO) algorithm to perform moving object detection, which can effectively reduce the computation load of this embodiment, improve the efficiency of obstacle avoidance detection, and further improve the real-time performance of obstacle avoidance detection.
An embodiment of the present invention provides an obstacle avoidance apparatus. Fig. 5 is a structural schematic diagram of an obstacle avoidance apparatus 10 provided by an embodiment of the present invention. As shown in Fig. 5, the obstacle avoidance apparatus 10 is arranged on a movable platform 20. The obstacle avoidance apparatus 10 includes a processor 11 and a shooting device 21, the processor 11 being communicatively connected to the shooting device 21, and the shooting device being used to shoot and obtain a depth map. The processor 11 is used to: obtain the depth map shot by the shooting device 21, identify a moving object based on the depth map, determine the movement velocity vector of the moving object, and, based on the movement velocity vector of the moving object, determine the region where the moving object and the movable platform may collide, so as to control the movable platform to execute an obstacle avoidance strategy in the region where a collision may occur.
Optionally, the processor 11 is used to: obtain the optical flow vectors of the target feature points on the depth map, where the target feature points do not include feature points on stationary objects, and identify a moving object based on the optical flow vectors of the target feature points.
Optionally, the processor 11 is used to: obtain the optical flow vectors of the target feature points on the depth map based on a visual odometry (VO) algorithm.
Optionally, the processor 11 is used to: extract feature points, based on a preset corner detection algorithm, from the depth map shot by the shooting device carried by the movable platform; determine the optical flow vectors of the feature points by tracking the relative positions of the feature points in two frames of images; and, based on the three-dimensional coordinates of the feature points in the two frames of images under the earth coordinate system, screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points to obtain the optical flow vectors of the target feature points.
Optionally, the processor 11 is used to: based on the relative positions of a feature point in the two frames of images, respectively determine a first offset of the relative position of the feature point on the latter frame image with respect to the relative position of the feature point on the former frame image, and a second offset of the relative position of the feature point on the former frame image with respect to the relative position of the feature point on the latter frame image; and, if the relationship between the first offset and the second offset satisfies a preset first prior condition, determine the optical flow vector of the feature point based on the first offset or the second offset.
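A common realization of such a first prior condition is a forward-backward consistency check, sketched below with invented offsets and tolerance: the offset obtained by tracking the feature point from the former frame to the latter and the offset obtained by tracking back should roughly cancel, so the optical flow vector is accepted only when their sum is near zero. This is one plausible reading of the condition, not necessarily the patent's exact rule.

```python
# Sketch: forward-backward consistency as a possible "first prior condition".
# The forward offset (former -> latter frame) and the backward offset
# (latter -> former frame) of a reliably tracked feature point should cancel;
# the flow is accepted only if their sum is below a tolerance. The tolerance
# and the sample offsets are illustrative assumptions.

def flow_if_consistent(fwd_offset, bwd_offset, tol=0.5):
    """Return the optical flow vector, or None if the check fails."""
    err = ((fwd_offset[0] + bwd_offset[0]) ** 2 +
           (fwd_offset[1] + bwd_offset[1]) ** 2) ** 0.5
    return fwd_offset if err < tol else None

good = flow_if_consistent((3.0, -1.0), (-2.9, 1.1))  # offsets nearly cancel
bad = flow_if_consistent((3.0, -1.0), (1.0, 4.0))    # tracker drifted
print(good, bad)  # -> (3.0, -1.0) None
```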
Optionally, the processor 11 is used to: based on a preset second prior condition, determine the essential matrix corresponding to the three-dimensional coordinates of the feature points in the two frames of images under the earth coordinate system, where the second prior condition is the conditional relationship between the essential matrix and the three-dimensional coordinates of the feature points in the two frames of images under the earth coordinate system; and, based on the essential matrix and using the random sample consensus (RANSAC) algorithm, screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points to obtain the optical flow vectors of the target feature points.
Optionally, the processor 11 is used to: based on the three-dimensional coordinates of the feature points in the two frames of images under the earth coordinate system, determine the position and attitude of the shooting device when shooting the two frames of images; and, based on the position and attitude of the shooting device when shooting the two frames of images and using the random sample consensus (RANSAC) algorithm, screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points to obtain the optical flow vectors of the target feature points.
Optionally, the processor 11 is used to: identify a moving object based on the optical flow vectors of the target feature points, the depth information of the target feature points, and the visual information of the target feature points, where the visual information includes color and/or light intensity.
Optionally, the processor 11 is used to: perform clustering on the obtained optical flow vectors of the target feature points to obtain at least one optical flow vector group, where, within the same optical flow vector group, the direction deviation between optical flow vectors is less than a first preset threshold and the length difference between optical flow vectors is less than a second preset threshold; and identify a moving object from the at least one optical flow vector group based on the depth information and visual information of each feature point among the target feature points.
Optionally, the processor 11 is used to: identify a moving object from the at least one optical flow vector group using a seed filling algorithm, based on the depth information of each feature point among the target feature points and the visual information of each feature point.
Optionally, the processor 11 is used to: determine the movement velocity vector of the moving object based on the three-dimensional coordinates of the moving object under the earth coordinate system in a preset number of frames of images.
Optionally, the processor 11 is used to: determine the motion track of the movable platform based on the movement velocity vector of the moving object and the movement velocity vector of the movable platform; and determine, based on the motion track, the region where the moving object and the movable platform may collide.
Optionally, the processor 11 is used to: project the motion track onto the depth map, determine on the depth map the region where the moving object and the movable platform may collide, and determine, based on the depth map, the three-dimensional coordinates of the region where a collision may occur under the earth coordinate system.
Optionally, the processor 11 is used to: control the movable platform to move, in the region where a collision may occur, in the direction opposite to the current moving direction.
Optionally, the processor 11 is used to: adjust the motion track of the movable platform so that the movable platform bypasses the region where a collision may occur.
Optionally, the processor 11 is used to: control the movable platform to stop moving for a preset time period, so as to avoid the moving object.
Optionally, the shooting device includes at least one camera.
The obstacle avoidance apparatus provided by this embodiment is able to execute the obstacle avoidance method described in the above embodiment; its execution and beneficial effects are similar and are not repeated here.
This embodiment provides a movable platform, comprising:
a fuselage;
a power system, mounted on the fuselage, for providing power for the movable platform;
and the obstacle avoidance apparatus described in the above embodiment. The movable platform includes any one of the following: an unmanned aerial vehicle, an automobile, VR glasses, AR glasses.
An embodiment of the present invention provides an obstacle avoidance apparatus. Fig. 6 is a structural schematic diagram of an obstacle avoidance apparatus 30 provided by an embodiment of the present invention. As shown in Fig. 6, the obstacle avoidance apparatus 30 is arranged in a ground station 40. The obstacle avoidance apparatus 30 includes a processor 31 and a communication interface 32, the processor 31 being communicatively connected to the communication interface 32. The communication interface 32 is used to: obtain the depth map shot by a shooting device 51 carried by a movable platform 50. The processor 31 is used to: identify a moving object based on the depth map, determine the movement velocity vector of the moving object, and, based on the movement velocity vector of the moving object, determine the region where the moving object and the movable platform may collide, so as to control the movable platform to execute an obstacle avoidance strategy in the region where a collision may occur.
Optionally, the processor 31 is used to: obtain the optical flow vectors of the target feature points on the depth map, where the target feature points do not include feature points on stationary objects, and identify a moving object based on the optical flow vectors of the target feature points.
Optionally, the processor 31 is used to: obtain the optical flow vectors of the target feature points on the depth map based on a visual odometry (VO) algorithm.
Optionally, the processor 31 is used to: extract feature points, based on a preset corner detection algorithm, from the depth map shot by the shooting device carried by the movable platform; determine the optical flow vectors of the feature points by tracking the relative positions of the feature points in two frames of images; and, based on the three-dimensional coordinates of the feature points in the two frames of images under the earth coordinate system, screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points to obtain the optical flow vectors of the target feature points.
Optionally, the processor 31 is used to: based on the relative positions of a feature point in the two frames of images, respectively determine a first offset of the relative position of the feature point on the latter frame image with respect to the relative position of the feature point on the former frame image, and a second offset of the relative position of the feature point on the former frame image with respect to the relative position of the feature point on the latter frame image; and, if the relationship between the first offset and the second offset satisfies a preset first prior condition, determine the optical flow vector of the feature point based on the first offset or the second offset.
Optionally, the processor 31 is used to: based on a preset second prior condition, determine the essential matrix corresponding to the three-dimensional coordinates of the feature points in the two frames of images under the earth coordinate system, where the second prior condition is the conditional relationship between the essential matrix and the three-dimensional coordinates of the feature points in the two frames of images under the earth coordinate system; and, based on the essential matrix and using the random sample consensus (RANSAC) algorithm, screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points to obtain the optical flow vectors of the target feature points.
Optionally, the processor 31 is used to: based on the three-dimensional coordinates of the feature points in the two frames of images under the earth coordinate system, determine the position and attitude of the shooting device when shooting the two frames of images; and, based on the position and attitude of the shooting device when shooting the two frames of images and using the random sample consensus (RANSAC) algorithm, screen out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points to obtain the optical flow vectors of the target feature points.
Optionally, the processor 31 is used to: identify a moving object based on the optical flow vectors of the target feature points, the depth information of the target feature points, and the visual information of the target feature points, where the visual information includes color and/or light intensity.
Optionally, the processor 31 is used to: perform clustering on the obtained optical flow vectors of the target feature points to obtain at least one optical flow vector group, where, within the same optical flow vector group, the direction deviation between optical flow vectors is less than a first preset threshold and the length difference between optical flow vectors is less than a second preset threshold; and identify a moving object from the at least one optical flow vector group based on the depth information and visual information of each feature point among the target feature points.
Optionally, the processor 31 is used to: identify a moving object from the at least one optical flow vector group using a seed filling algorithm, based on the depth information of each feature point among the target feature points and the visual information of each feature point.
Optionally, the processor 31 is used to: determine the movement velocity vector of the moving object based on the three-dimensional coordinates of the moving object under the earth coordinate system in a preset number of frames of images.
Optionally, the processor 31 is used to: determine the motion track of the movable platform based on the movement velocity vector of the moving object and the movement velocity vector of the movable platform; and determine, based on the motion track, the region where the moving object and the movable platform may collide.
Optionally, the processor 31 is used to: project the motion track onto the depth map, determine on the depth map the region where the moving object and the movable platform may collide, and determine, based on the depth map, the three-dimensional coordinates of the region where a collision may occur under the earth coordinate system.
Optionally, the obstacle avoidance apparatus further includes a display component 33, the display component 33 being communicatively connected to the processor 31; the display component 33 is used to: display the motion track of the movable platform.
Optionally, the display component 33 is used to: display the depth map, and display, on the depth map, the region where a collision may occur.
Optionally, the processor 31 is used to: control the movable platform to move, in the region where a collision may occur, in the direction opposite to the current moving direction.
Optionally, the processor 31 is used to: adjust the motion track of the movable platform so that the movable platform bypasses the region where a collision may occur.
Optionally, the processor 31 is used to: control the movable platform to stop moving for a preset time period, so as to avoid the moving object.
Optionally, the shooting device includes at least one camera.
The obstacle avoidance apparatus provided by this embodiment is able to execute the obstacle avoidance method described in the above embodiment; its execution and beneficial effects are similar and are not repeated here.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method can be implemented in other ways. For example, the apparatus embodiments described above are merely exemplary; the division of the units is only a logical functional division, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be through some interfaces; the indirect coupling or communication connection between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The above software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute part of the steps of the methods of the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
Those skilled in the art can clearly understand that, for convenience and simplicity of description, only the division of the above functional modules is taken as an example; in practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the apparatus can be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the apparatus described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, rather than to limit them. Although the present invention has been described in detail with reference to the aforementioned embodiments, those skilled in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or equivalently replace some or all of the technical features therein; and these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the various embodiments of the present invention.

Claims (57)

1. An obstacle avoidance method, characterized by comprising:
obtaining a depth map shot by a shooting device carried by a movable platform;
identifying a moving object based on the depth map;
determining a movement velocity vector of the moving object;
based on the movement velocity vector of the moving object, determining a region where the moving object and the movable platform may collide, so as to control the movable platform to execute an obstacle avoidance strategy in the region where a collision may occur.
2. The method according to claim 1, characterized in that the identifying a moving object based on the depth map comprises:
obtaining optical flow vectors of target feature points on the depth map, the target feature points not including feature points on stationary objects;
identifying a moving object based on the optical flow vectors of the target feature points.
3. The method according to claim 2, characterized in that the obtaining optical flow vectors of target feature points on the depth map comprises:
obtaining the optical flow vectors of the target feature points on the depth map based on a visual odometry (VO) algorithm.
4. The method according to claim 3, characterized in that the obtaining the optical flow vectors of the target feature points on the depth map based on a visual odometry (VO) algorithm comprises:
extracting feature points, based on a preset corner detection algorithm, from the depth map shot by the shooting device carried by the movable platform;
determining the optical flow vectors of the feature points by tracking relative positions of the feature points in two frames of images;
based on three-dimensional coordinates of the feature points in the two frames of images under an earth coordinate system, screening out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points to obtain the optical flow vectors of the target feature points.
5. The method according to claim 4, characterized in that the determining the optical flow vectors of the feature points by tracking relative positions of the feature points in two frames of images comprises:
based on the relative positions of a feature point in the two frames of images, respectively determining a first offset of the relative position of the feature point on the latter frame image with respect to the relative position of the feature point on the former frame image, and a second offset of the relative position of the feature point on the former frame image with respect to the relative position of the feature point on the latter frame image;
if a relationship between the first offset and the second offset satisfies a preset first prior condition, determining the optical flow vector of the feature point based on the first offset or the second offset.
6. The method according to claim 4, characterized in that the screening out, based on the three-dimensional coordinates of the feature points in the two frames of images under the earth coordinate system, the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points to obtain the optical flow vectors of the target feature points comprises:
based on a preset second prior condition, determining an essential matrix corresponding to the three-dimensional coordinates of the feature points in the two frames of images under the earth coordinate system, wherein the second prior condition is a conditional relationship between the essential matrix and the three-dimensional coordinates of the feature points in the two frames of images under the earth coordinate system;
based on the essential matrix and using a random sample consensus (RANSAC) algorithm, screening out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points to obtain the optical flow vectors of the target feature points.
7. The method according to claim 4, characterized in that the screening out, based on the three-dimensional coordinates of the feature points in the two frames of images under the earth coordinate system, the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points to obtain the optical flow vectors of the target feature points comprises:
determining, based on the three-dimensional coordinates of the feature points in the two frames of images under the earth coordinate system, a position and an attitude of the shooting device when shooting the two frames of images;
based on the position and the attitude of the shooting device when shooting the two frames of images and using a random sample consensus (RANSAC) algorithm, screening out the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points to obtain the optical flow vectors of the target feature points.
8. The method according to claim 2, characterized in that the identifying a moving object based on the optical flow vectors of the target feature points comprises:
identifying a moving object based on the optical flow vectors of the target feature points, depth information of the target feature points, and visual information of the target feature points, wherein the visual information includes color and/or light intensity.
9. The method according to claim 8, characterized in that the identifying a moving object based on the optical flow vectors of the target feature points, the depth information of the target feature points, and the visual information of the target feature points comprises: performing clustering on the obtained optical flow vectors of the target feature points to obtain at least one optical flow vector group, wherein, within the same optical flow vector group, a direction deviation between optical flow vectors is less than a first preset threshold and a length difference between optical flow vectors is less than a second preset threshold;
identifying a moving object from the at least one optical flow vector group based on the depth information and the visual information of each feature point among the target feature points.
10. The method according to claim 9, characterized in that the identifying a moving object from the at least one optical flow vector group based on the depth information and the visual information of each feature point among the target feature points comprises:
identifying a moving object from the at least one optical flow vector group using a seed filling algorithm, based on the depth information of each feature point among the target feature points and the visual information of each feature point.
11. The method according to claim 1, wherein determining the movement velocity vector of the moving object comprises:
determining the movement velocity vector of the moving object based on the three-dimensional coordinates of the moving object in the earth coordinate system in a preset number of frames of images.
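One plausible reading of claim 11, estimating velocity from the object's 3-D coordinates over a preset number of frames, is a least-squares line fit; the even frame spacing `dt` and the fitting method are assumptions, not taken from the patent:

```python
import numpy as np

def velocity_vector(positions, dt):
    """Sketch: fit each coordinate as p(t) = p0 + v*t over evenly spaced
    frames dt apart; the slope of the fit is the velocity vector."""
    positions = np.asarray(positions, dtype=float)
    t = np.arange(len(positions)) * dt
    v = np.polyfit(t, positions, 1)[0]   # per-axis slope = velocity
    return v
```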
12. The method according to claim 1, wherein determining, based on the movement velocity vector of the moving object, the region where the moving object and the movable platform may collide comprises:
determining the motion trajectory of the movable platform based on the movement velocity vector of the moving object and the movement velocity vector of the movable platform;
determining, based on the motion trajectory, the region where the moving object and the movable platform may collide.
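A simple stand-in for claim 12's collision-region prediction, assuming (the patent does not say this) that both trajectories are extrapolated at constant velocity: compute the time and midpoint of closest approach.

```python
import numpy as np

def closest_approach(p_obj, v_obj, p_plat, v_plat):
    """Sketch: under constant-velocity extrapolation, the relative position
    dp + t*dv is minimized at t = -dp.dv / |dv|^2 (clamped to the future);
    the midpoint at that time approximates the potential collision region."""
    dp = np.asarray(p_obj, float) - np.asarray(p_plat, float)
    dv = np.asarray(v_obj, float) - np.asarray(v_plat, float)
    denom = np.dot(dv, dv)
    t = 0.0 if denom == 0 else max(0.0, -np.dot(dp, dv) / denom)
    a = np.asarray(p_obj, float) + t * np.asarray(v_obj, float)
    b = np.asarray(p_plat, float) + t * np.asarray(v_plat, float)
    return t, (a + b) / 2.0, np.linalg.norm(a - b)
```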
13. The method according to claim 12, wherein determining, based on the motion trajectory, the region where the moving object and the movable platform may collide comprises:
projecting the motion trajectory onto the depth map, and determining on the depth map the region where the moving object and the movable platform may collide;
determining, based on the depth map, the three-dimensional coordinates of the region where a collision may occur in the earth coordinate system.
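Projecting a trajectory onto the depth map, as in claim 13, amounts to projecting 3-D points onto the image plane. A pinhole-model sketch (the intrinsics `fx, fy, cx, cy` are illustrative parameters, and the points are assumed to already be in the camera frame):

```python
import numpy as np

def project_to_depth_map(points_cam, fx, fy, cx, cy):
    """Sketch: pinhole projection u = fx*x/z + cx, v = fy*y/z + cy,
    mapping 3-D trajectory points to pixel coordinates of the depth map
    so collision checks can be done per pixel."""
    pts = np.asarray(points_cam, float)
    u = fx * pts[:, 0] / pts[:, 2] + cx
    v = fy * pts[:, 1] / pts[:, 2] + cy
    return np.stack([u, v], axis=1)
```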
14. The method according to claim 12, wherein the method further comprises:
displaying the motion trajectory of the movable platform.
15. The method according to claim 13, wherein the method further comprises:
displaying the depth map, and displaying on the depth map the region where a collision may occur.
16. The method according to any one of claims 1-15, wherein controlling the movable platform to execute an obstacle avoidance strategy in the region where a collision may occur comprises:
controlling the movable platform to move, in the region where a collision may occur, in the direction opposite to its current moving direction.
17. The method according to any one of claims 1-15, wherein controlling the movable platform to execute an obstacle avoidance strategy in the region where a collision may occur comprises:
adjusting the motion trajectory of the movable platform so that the movable platform bypasses the region where a collision may occur.
18. The method according to any one of claims 1-15, wherein controlling the movable platform to execute an obstacle avoidance strategy in the region where a collision may occur comprises:
controlling the movable platform to stop moving for a preset time period to avoid the moving object.
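Claims 16-18 enumerate three avoidance strategies: reverse direction, bypass the region, or stop for a preset period. A toy dispatch sketch; the strategy names and the `state` dictionary fields are purely illustrative, not part of the claims:

```python
def choose_avoidance(strategy, state):
    """Sketch of the three strategies in claims 16-18: reverse the current
    moving direction, re-plan around the region, or hold position for a
    preset period. All field names here are hypothetical."""
    if strategy == "reverse":
        return {"direction": tuple(-d for d in state["direction"])}
    if strategy == "bypass":
        return {"waypoint": state["detour_waypoint"]}
    if strategy == "stop":
        return {"hold_seconds": state["preset_hold"]}
    raise ValueError(f"unknown strategy: {strategy}")
```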
19. The method according to any one of claims 1-18, wherein the shooting device includes at least one camera.
20. An obstacle avoidance apparatus, wherein the obstacle avoidance apparatus is arranged on a movable platform and includes a processor and a shooting device, the processor being communicatively connected to the shooting device;
the shooting device is configured to capture images to obtain a depth map;
the processor is configured to: obtain the depth map captured by the shooting device, identify a moving object based on the depth map, determine a movement velocity vector of the moving object, and determine, based on the movement velocity vector of the moving object, a region where the moving object and the movable platform may collide, so as to control the movable platform to execute an obstacle avoidance strategy in the region where a collision may occur.
21. The obstacle avoidance apparatus according to claim 20, wherein the processor is configured to:
obtain optical flow vectors of target feature points on the depth map, the target feature points excluding feature points on stationary objects, and identify the moving object based on the optical flow vectors of the target feature points.
22. The obstacle avoidance apparatus according to claim 21, wherein the processor is configured to: obtain the optical flow vectors of the target feature points on the depth map based on a visual odometry (VO) algorithm.
23. The obstacle avoidance apparatus according to claim 22, wherein the processor is configured to: extract feature points, based on a preset corner detection algorithm, from the depth map captured by the shooting device carried by the movable platform; determine the optical flow vectors of the feature points by tracking the relative positions of the feature points in two frames of images; and screen out, based on the three-dimensional coordinates of the feature points in the earth coordinate system in the two frames of images, the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
24. The obstacle avoidance apparatus according to claim 23, wherein the processor is configured to: determine, based on the relative positions of a feature point in the two frames of images, a first offset of the relative position of the feature point in the later frame with respect to its relative position in the earlier frame, and a second offset of the relative position of the feature point in the earlier frame with respect to its relative position in the later frame; and if the relationship between the first offset and the second offset satisfies a preset first prior condition, determine the optical flow vector of the feature point based on the first offset or the second offset.
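Claim 24 reads as a forward-backward tracking consistency check: a feature tracked from the earlier frame to the later one and back should return near its start. A sketch under that reading, where the error bound `max_err` stands in for the unspecified "first prior condition":

```python
import numpy as np

def flow_if_consistent(p_prev, p_next, p_back, max_err=1.0):
    """Sketch of a forward-backward check: p_next is the feature tracked
    forward from p_prev, p_back is the same feature tracked back from
    p_next; accept the flow only if the round trip closes within max_err."""
    p_prev, p_next, p_back = (np.asarray(p, float) for p in (p_prev, p_next, p_back))
    if np.linalg.norm(p_back - p_prev) < max_err:
        return p_next - p_prev   # the feature's optical flow vector
    return None                  # inconsistent track, discard
```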
25. The obstacle avoidance apparatus according to claim 23, wherein the processor is configured to: determine, based on a preset second prior condition, the essential matrix corresponding to the three-dimensional coordinates of the feature points in the earth coordinate system in the two frames of images, wherein the second prior condition is a conditional relationship between the three-dimensional coordinates of the feature points in the earth coordinate system in the two frames of images and the essential matrix; and screen out, based on the essential matrix and using a random sample consensus (RANSAC) algorithm, the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
26. The obstacle avoidance apparatus according to claim 23, wherein the processor is configured to: determine, based on the three-dimensional coordinates of the feature points in the earth coordinate system in the two frames of images, the position and attitude of the shooting device when capturing the two frames of images; and screen out, based on that position and attitude and using a random sample consensus (RANSAC) algorithm, the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
27. The obstacle avoidance apparatus according to claim 21, wherein the processor is configured to: identify the moving object based on the optical flow vectors of the target feature points, the depth information of the target feature points, and the visual information of the target feature points, wherein the visual information includes color and/or light intensity.
28. The obstacle avoidance apparatus according to claim 27, wherein the processor is configured to: cluster the obtained optical flow vectors of the target feature points to obtain at least one optical flow vector group, wherein within the same optical flow vector group the direction deviation between optical flow vectors is less than a first preset threshold and the length difference between optical flow vectors is less than a second preset threshold; and identify the moving object from the at least one optical flow vector group based on the depth information and the visual information of each feature point among the target feature points.
29. The obstacle avoidance apparatus according to claim 28, wherein the processor is configured to: identify the moving object from the at least one optical flow vector group by using a seed filling algorithm based on the depth information of each feature point and the visual information of each feature point among the target feature points.
30. The obstacle avoidance apparatus according to claim 20, wherein the processor is configured to: determine the movement velocity vector of the moving object based on the three-dimensional coordinates of the moving object in the earth coordinate system in a preset number of frames of images.
31. The obstacle avoidance apparatus according to claim 20, wherein the processor is configured to: determine the motion trajectory of the movable platform based on the movement velocity vector of the moving object and the movement velocity vector of the movable platform; and determine, based on the motion trajectory, the region where the moving object and the movable platform may collide.
32. The obstacle avoidance apparatus according to claim 31, wherein the processor is configured to: project the motion trajectory onto the depth map, determine on the depth map the region where the moving object and the movable platform may collide, and determine, based on the depth map, the three-dimensional coordinates of the region where a collision may occur in the earth coordinate system.
33. The obstacle avoidance apparatus according to any one of claims 20-32, wherein the processor is configured to: control the movable platform to move, in the region where a collision may occur, in the direction opposite to its current moving direction.
34. The obstacle avoidance apparatus according to any one of claims 20-32, wherein the processor is configured to: adjust the motion trajectory of the movable platform so that the movable platform bypasses the region where a collision may occur.
35. The obstacle avoidance apparatus according to any one of claims 20-32, wherein the processor is configured to: control the movable platform to stop moving for a preset time period to avoid the moving object.
36. The obstacle avoidance apparatus according to any one of claims 20-35, wherein the shooting device includes at least one camera.
37. A movable platform, comprising:
a fuselage;
a power system mounted on the fuselage and configured to provide power for the movable platform;
and the obstacle avoidance apparatus according to any one of claims 20-36.
38. The movable platform according to claim 37, wherein the movable platform includes any one of the following: an unmanned aerial vehicle, an automobile, VR glasses, or AR glasses.
39. An obstacle avoidance apparatus, wherein the obstacle avoidance apparatus is arranged in a ground station and includes a processor and a communication interface, the processor being communicatively connected to the communication interface;
the communication interface is configured to: obtain the depth map captured by the shooting device carried by a movable platform;
the processor is configured to: identify a moving object based on the depth map, determine a movement velocity vector of the moving object, and determine, based on the movement velocity vector of the moving object, a region where the moving object and the movable platform may collide, so as to control the movable platform to execute an obstacle avoidance strategy in the region where a collision may occur.
40. The obstacle avoidance apparatus according to claim 39, wherein the processor is configured to: obtain optical flow vectors of target feature points on the depth map, the target feature points excluding feature points on stationary objects, and identify the moving object based on the optical flow vectors of the target feature points.
41. The obstacle avoidance apparatus according to claim 40, wherein the processor is configured to: obtain the optical flow vectors of the target feature points on the depth map based on a visual odometry (VO) algorithm.
42. The obstacle avoidance apparatus according to claim 41, wherein the processor is configured to: extract feature points, based on a preset corner detection algorithm, from the depth map captured by the shooting device carried by the movable platform; determine the optical flow vectors of the feature points by tracking the relative positions of the feature points in two frames of images; and screen out, based on the three-dimensional coordinates of the feature points in the earth coordinate system in the two frames of images, the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
43. The obstacle avoidance apparatus according to claim 42, wherein the processor is configured to: determine, based on the relative positions of a feature point in the two frames of images, a first offset of the relative position of the feature point in the later frame with respect to its relative position in the earlier frame, and a second offset of the relative position of the feature point in the earlier frame with respect to its relative position in the later frame; and if the relationship between the first offset and the second offset satisfies a preset first prior condition, determine the optical flow vector of the feature point based on the first offset or the second offset.
44. The obstacle avoidance apparatus according to claim 42, wherein the processor is configured to: determine, based on a preset second prior condition, the essential matrix corresponding to the three-dimensional coordinates of the feature points in the earth coordinate system in the two frames of images, wherein the second prior condition is a conditional relationship between the three-dimensional coordinates of the feature points in the earth coordinate system in the two frames of images and the essential matrix; and screen out, based on the essential matrix and using a random sample consensus (RANSAC) algorithm, the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
45. The obstacle avoidance apparatus according to claim 42, wherein the processor is configured to: determine, based on the three-dimensional coordinates of the feature points in the earth coordinate system in the two frames of images, the position and attitude of the shooting device when capturing the two frames of images; and screen out, based on that position and attitude and using a random sample consensus (RANSAC) algorithm, the optical flow vectors of the feature points on stationary objects from the optical flow vectors of the feature points, to obtain the optical flow vectors of the target feature points.
46. The obstacle avoidance apparatus according to claim 40, wherein the processor is configured to: identify the moving object based on the optical flow vectors of the target feature points, the depth information of the target feature points, and the visual information of the target feature points, wherein the visual information includes color and/or light intensity.
47. The obstacle avoidance apparatus according to claim 46, wherein the processor is configured to: cluster the obtained optical flow vectors of the target feature points to obtain at least one optical flow vector group, wherein within the same optical flow vector group the direction deviation between optical flow vectors is less than a first preset threshold and the length difference between optical flow vectors is less than a second preset threshold; and identify the moving object from the at least one optical flow vector group based on the depth information and the visual information of each feature point among the target feature points.
48. The obstacle avoidance apparatus according to claim 47, wherein the processor is configured to: identify the moving object from the at least one optical flow vector group by using a seed filling algorithm based on the depth information of each feature point and the visual information of each feature point among the target feature points.
49. The obstacle avoidance apparatus according to claim 39, wherein the processor is configured to: determine the movement velocity vector of the moving object based on the three-dimensional coordinates of the moving object in the earth coordinate system in a preset number of frames of images.
50. The obstacle avoidance apparatus according to claim 39, wherein the processor is configured to: determine the motion trajectory of the movable platform based on the movement velocity vector of the moving object and the movement velocity vector of the movable platform; and determine, based on the motion trajectory, the region where the moving object and the movable platform may collide.
51. The obstacle avoidance apparatus according to claim 50, wherein the processor is configured to: project the motion trajectory onto the depth map, determine on the depth map the region where the moving object and the movable platform may collide, and determine, based on the depth map, the three-dimensional coordinates of the region where a collision may occur in the earth coordinate system.
52. The obstacle avoidance apparatus according to claim 50, wherein the obstacle avoidance apparatus further includes a display component communicatively connected to the processor;
the display component is configured to: display the motion trajectory of the movable platform.
53. The obstacle avoidance apparatus according to claim 52, wherein the display component is configured to: display the depth map, and display on the depth map the region where a collision may occur.
54. The obstacle avoidance apparatus according to any one of claims 39-53, wherein the processor is configured to: control the movable platform to move, in the region where a collision may occur, in the direction opposite to its current moving direction.
55. The obstacle avoidance apparatus according to any one of claims 39-53, wherein the processor is configured to: adjust the motion trajectory of the movable platform so that the movable platform bypasses the region where a collision may occur.
56. The obstacle avoidance apparatus according to any one of claims 39-53, wherein the processor is configured to: control the movable platform to stop moving for a preset time period to avoid the moving object.
57. The obstacle avoidance apparatus according to any one of claims 39-56, wherein the shooting device includes at least one camera.
CN201780029125.9A 2017-12-29 2017-12-29 Obstacle avoidance method and device, and movable platform Pending CN109196556A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/120249 WO2019127518A1 (en) 2017-12-29 2017-12-29 Obstacle avoidance method and device and movable platform

Publications (1)

Publication Number Publication Date
CN109196556A true CN109196556A (en) 2019-01-11

Family

ID=64948918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780029125.9A Pending CN109196556A (en) Obstacle avoidance method and device, and movable platform

Country Status (3)

Country Link
US (1) US20210103299A1 (en)
CN (1) CN109196556A (en)
WO (1) WO2019127518A1 (en)


Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
US10460511B2 (en) * 2016-09-23 2019-10-29 Blue Vision Labs UK Limited Method and system for creating a virtual 3D model
CN111338382B (en) * 2020-04-15 2021-04-06 北京航空航天大学 Unmanned aerial vehicle path planning method guided by safety situation
US11545039B2 (en) * 2020-07-28 2023-01-03 Ford Global Technologies, Llc Systems and methods for controlling an intersection of a route of an unmanned aerial vehicle
KR20220090597A (en) * 2020-12-22 2022-06-30 한국전자기술연구원 Location tracking device and method using feature matching
CN113408353B (en) * 2021-05-18 2023-04-07 杭州电子科技大学 Real-time obstacle avoidance system based on RGB-D
CN113657164B (en) * 2021-07-15 2024-07-02 美智纵横科技有限责任公司 Method, device, cleaning device and storage medium for calibrating target object

Citations (5)

Publication number Priority date Publication date Assignee Title
CN104881881A (en) * 2014-02-27 2015-09-02 株式会社理光 Method and apparatus for expressing motion object
CN105931275A (en) * 2016-05-23 2016-09-07 北京暴风魔镜科技有限公司 Monocular and IMU fused stable motion tracking method and device based on mobile terminal
US9558584B1 (en) * 2013-07-29 2017-01-31 Google Inc. 3D position estimation of objects from a monocular camera using a set of known 3D points on an underlying surface
CN106527468A (en) * 2016-12-26 2017-03-22 德阳科蚁科技有限责任公司 Unmanned aerial vehicle obstacle avoidance control method and system thereof, and unmanned aerial vehicle
CN106920259A (en) * 2017-02-28 2017-07-04 武汉工程大学 A kind of localization method and system

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN102707724B (en) * 2012-06-05 2015-01-14 清华大学 Visual localization and obstacle avoidance method and system for unmanned plane
CN105759829A (en) * 2016-04-12 2016-07-13 深圳市龙云创新航空科技有限公司 Laser radar-based mini-sized unmanned plane control method and system
CN105974938B (en) * 2016-06-16 2023-10-03 零度智控(北京)智能科技有限公司 Obstacle avoidance method and device, carrier and unmanned aerial vehicle
CN106096559A (en) * 2016-06-16 2016-11-09 深圳零度智能机器人科技有限公司 Obstacle detection method and system and moving object
CN106127788B (en) * 2016-07-04 2019-10-25 触景无限科技(北京)有限公司 A kind of vision barrier-avoiding method and device


Cited By (4)

Publication number Priority date Publication date Assignee Title
CN111247557A (en) * 2019-04-23 2020-06-05 深圳市大疆创新科技有限公司 Method and system for detecting moving target object and movable platform
WO2020215194A1 (en) * 2019-04-23 2020-10-29 深圳市大疆创新科技有限公司 Method and system for detecting moving target object, and movable platform
CN111656294A (en) * 2019-05-31 2020-09-11 深圳市大疆创新科技有限公司 Control method and control terminal of movable platform and movable platform
WO2020237609A1 (en) * 2019-05-31 2020-12-03 深圳市大疆创新科技有限公司 Movable platform control method, control terminal and movable platform

Also Published As

Publication number Publication date
WO2019127518A1 (en) 2019-07-04
US20210103299A1 (en) 2021-04-08

Similar Documents

Publication Publication Date Title
CN109196556A (en) Obstacle avoidance method and device, and movable platform
US10665115B2 (en) Controlling unmanned aerial vehicles to avoid obstacle collision
Banerjee et al. Online camera lidar fusion and object detection on hybrid data for autonomous driving
CN110312912B (en) Automatic vehicle parking system and method
US20180322646A1 (en) Gaussian mixture models for temporal depth fusion
Sanket et al. Gapflyt: Active vision based minimalist structure-less gap detection for quadrotor flight
Carrio et al. Onboard detection and localization of drones using depth maps
Alvarez et al. Collision avoidance for quadrotors with a monocular camera
JP2019536170A (en) Virtually extended visual simultaneous localization and mapping system and method
CN110362098A (en) Unmanned plane vision method of servo-controlling, device and unmanned plane
CN108898628A (en) Three-dimensional vehicle object's pose estimation method, system, terminal and storage medium based on monocular
Yang et al. Reactive obstacle avoidance of monocular quadrotors with online adapted depth prediction network
JP2018522348A (en) Method and system for estimating the three-dimensional posture of a sensor
Ding et al. Vehicle pose and shape estimation through multiple monocular vision
Van Pham et al. Vision‐based absolute navigation for descent and landing
Angelopoulou et al. Vision-based egomotion estimation on FPGA for unmanned aerial vehicle navigation
Shi et al. Uncooperative spacecraft pose estimation using an infrared camera during proximity operations
Liu et al. A new approach for the estimation of non-cooperative satellites based on circular feature extraction
Oreifej et al. Horizon constraint for unambiguous uav navigation in planar scenes
Zuehlke et al. Vision-based object detection and proportional navigation for UAS collision avoidance
Cigla et al. Image-based visual perception and representation for collision avoidance
Zhu et al. A hybrid relative navigation algorithm for a large–scale free tumbling non–cooperative target
US10977810B2 (en) Camera motion estimation
Dubey et al. Droan-disparity-space representation for obstacle avoidance: Enabling wire mapping & avoidance
Yan et al. Horizontal velocity estimation via downward looking descent images for lunar landing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190111