CN108364304A - System and method for monocular airborne target detection - Google Patents

System and method for monocular airborne target detection

Info

Publication number
CN108364304A
CN108364304A (Application CN201810322623.9A)
Authority
CN
China
Prior art keywords
detection
monocular
computer system
target
airborne
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810322623.9A
Other languages
Chinese (zh)
Inventor
罗瑶
郑卫民
谭献良
李志学
颜紫科
肖敏
杨楠
张曦
陈益平
周松林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan City University
Original Assignee
Hunan City University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan City University filed Critical Hunan City University
Priority to CN201810322623.9A priority Critical patent/CN108364304A/en
Publication of CN108364304A publication Critical patent/CN108364304A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention belongs to the technical field of aerial vehicles and discloses a system and method for monocular airborne target detection, comprising: a navigation unit, including at least one inertial measurement unit; an imaging camera, which captures two or more images of the scene around an unmanned aerial vehicle (UAV); the imaging camera and navigation unit being connected to a computer system; the computer system executes the aerial target detection algorithm and computation and, using the navigation information associated with each of two or more image frames, computes a transformation between the frames to generate a motion-compensated background image; moving objects are detected from the motion-compensated background image sequence; a laser range finder, connected to the computer system, for providing range measurements to one or more detected objects; and a flight control system, connected to the computer system, for realizing control of the aircraft's attitude.

Description

System and method for monocular airborne target detection
Technical field
The invention belongs to the technical field of aerial vehicles, and more particularly relates to a system and method for monocular airborne target detection.
Background technology
Unmanned aerial vehicles (UAVs) need to perceive the presence and position of obstacles and avoid them in order to navigate a path safely and complete their missions. A UAV also needs to detect other airborne objects in its surroundings. However, power and weight constraints limit the detection technologies that can be carried on a UAV. Stereo image processing requires a matched pair of image sensors and can be used to determine the range of a captured airborne object; moreover, in many applications the sensor separation required for the desired depth resolution exceeds the available dimensions of the vehicle (for example, its wingspan). Single-sensor technologies such as radar, lidar and millimetre-wave radar (together with the power supplies such equipment requires) are typically too heavy for light UAV designs. Existing patents include the following. CN201610859387.5, a scene reconstruction method and device based on a mobile-device monocular camera, simulates binocular stereo vision by photographing the same object in the same scene from different positions with the monocular camera (containing a single camera) of one smart device: the position at time t1 (first shot) is taken as camera position 1, and the position at time t2 (second shot) as camera position 2; the two images are processed to obtain the monocular camera's intrinsic and extrinsic parameters, the dense visual disparity obtained then yields the disparity at any position of the t1 image, and a three-dimensional scene reconstruction is performed. CN201610807413.X, an industrial robot grasping method based on a monocular camera and a three-dimensional force sensor, imitates the human visual and tactile perception systems to realize robotic grasping of objects: a six-degree-of-freedom articulated industrial robot serves as the execution unit, a monocular camera performs environment perception, and a three-dimensional force sensor drives the robot's posture adjustment, effectively addressing the high cost of object-recognition equipment and the strict requirements on object placement. CN201610740714.5, a silicon MEMS gyro error estimation and correction method based on a monocular vision sensor, uses a low-cost silicon MEMS gyro and a monocular vision sensor as measurement devices and applies Kalman-filter information fusion to estimate and correct the silicon MEMS gyro error in real time, improving the accuracy of inertial navigation and flight control throughout the flight; it can be used in any UAV navigation system that contains a monocular vision sensor and a small MEMS gyro.
In conclusion problem of the existing technology is:The constraint of current unmanned acc power and Weight control, limits The stereo-picture processing that sensing technology can be used for unmanned aerial vehicle onboard needs to acquire image for imaging sensor, it is not possible to for determining Aerial target within the scope of one;The stereo-picture of the separation needed for imaging sensor using to(for) required depth resolution is more than Useful size (for example, span);Single sensor technology, such as radar, laser radar and millimetre-wave radar are typically too heavy, can not For light-duty unmanned aerial vehicle design.
Invention content
In view of the problems of the prior art, the present invention provides a system and method for monocular airborne target detection.
The invention is realized as follows: a system for monocular airborne target detection, the system comprising:
a navigation unit, including at least one inertial measurement unit (IMU), a global navigation satellite system (GNSS) receiver, and other navigation systems;
an imaging camera for capturing two or more images of the scene around the UAV;
the imaging camera and navigation unit being connected to a computer system; the computer system executes the aerial target detection algorithm and computation and, using the navigation information associated with each of two or more image frames, computes a transformation between the frames to generate a motion-compensated background image; moving objects are detected from the motion-compensated background image sequence;
a laser range finder, connected to the computer system, for providing range measurements to one or more detected objects;
a flight control system, connected to the computer system, for realizing control of the aircraft's flight mode, speed, altitude, attitude and so on.
Further, the computer system detects moving objects in the motion-compensated background image sequence by detecting differences between overlapping parts of the sequence; and a guidance or flight control system is coupled to the computer system, wherein the guidance or flight control system adjusts the trajectory based on information about the moving objects detected from the motion-compensated background image sequence.
Further, the computer system outputs a state vector identifying each moving target detected in the motion-compensated background image sequence, wherein the state vector describes at least an estimated position.
Further, at least one of a particle filter, an extended Kalman filter or an unscented Kalman filter is used to estimate the state vector.
Further, the computer uses at least one of a particle filter, an extended Kalman filter or an unscented Kalman filter to track the one or more moving objects.
Another object of the present invention is to provide a method for monocular airborne target detection using the above system, the method for monocular airborne target detection comprising:
capturing two or more images of the scene around the UAV; measuring, with inertial sensors, the navigation information associated with the two or more images; computing, with the computer system and using the navigation information associated with the two or more images, a first transformation between the first image frame and the second image frame of the two or more image frames; and generating a motion-compensated image sequence by applying the first fundamental matrix to the first frame image to re-project it onto the second frame image;
detecting moving objects from the motion-compensated background image sequence; and estimating, based on the motion-compensated background image sequence and the navigation information, position information for the moving objects around the UAV.
Further, the method for monocular airborne target detection further comprises: introducing an artificial trajectory excitation during flight.
Further, the method for monocular airborne target detection further comprises: tracking the detected aerial targets; and providing estimates of one or both of the velocity vector and the time to collision of each detected aerial target.
Advantages and positive effects of the present invention: using an imaging camera and inertial sensor measurements breaks through the constraints of power and weight control and obtains position information for aerial moving objects, which can be used to determine aerial targets. From the motion-compensated background image sequence, the computer system can distinguish aerial moving objects from static background objects, so that the UAV can change or hold its flight path in time.
Description of the drawings
Fig. 1 is a schematic structural diagram of the system for monocular airborne target detection provided in an embodiment of the present invention;
In the figure: 1, navigation unit; 2, imaging camera; 3, computer system; 4, flight control system; 5, laser range finder.
Fig. 2 is a schematic diagram of the image frames and navigation information of the monocular airborne target detection system provided in an embodiment of the present invention.
Fig. 3 and Fig. 4 are schematic diagrams of monocular airborne target detection provided in an embodiment of the present invention.
Fig. 5 is a schematic diagram of realizing monocular airborne target detection in a UAV, provided in an embodiment of the present invention.
Fig. 6 is a flowchart of monocular airborne target detection provided in an embodiment of the present invention.
Detailed description of the embodiments
In order to make the purpose, technical scheme and advantages of the present invention clearer, the present invention is further elaborated below with reference to embodiments. It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The application principle of the present invention is explained in detail below in conjunction with the accompanying drawings.
As shown in Fig. 1, the system for monocular airborne target detection provided in an embodiment of the present invention includes: a navigation unit 1, an imaging camera 2, a computer system 3, a flight control system 4 and a laser range finder 5.
The navigation unit 1 includes at least one inertial measurement unit (IMU), a global navigation satellite system (GNSS) receiver, and other navigation systems.
The imaging camera 2 captures two or more images of the scene around the UAV.
The imaging camera 2 and navigation unit 1 are connected to the computer system 3; the computer system 3 executes the aerial target detection algorithm and computation and, using the navigation information associated with each of two or more image frames, computes a transformation between the frames to generate a motion-compensated background image; moving objects are detected from the motion-compensated background image sequence.
The laser range finder 5 is connected to the computer system 3 and provides range measurements to one or more detected objects.
The flight control system 4 is connected to the computer system and realizes control of the aircraft's flight mode, speed, altitude, attitude and so on.
The application principle of the present invention is further described below in conjunction with the accompanying drawings.
The embodiment of the present invention provides a monocular airborne target detection method.
In one embodiment, a system for detecting moving objects around a UAV includes: an imaging camera; a navigation unit including at least an inertial measurement unit; and a computer system coupled to the imaging camera and the navigation unit. The computer system executes an airborne object detection processing algorithm and, using the navigation information associated with each of two or more image frames captured by the imaging camera, calculates the transformation between the frames to generate a motion-compensated background image sequence. The computer system detects moving objects from the motion-compensated background image sequence.
The embodiment of the present invention addresses the need for a self-guided UAV to detect both moving and stationary bodies around it, while avoiding the weight and power requirements usually associated with previously known object detection schemes. Although the examples below mainly discuss UAVs, those of ordinary skill in the art who read this specification will understand that embodiments of the invention are not limited to UAVs: other embodiments on land and water, as well as remotely driven vehicles or ships, are recognized to fall within the scope of the invention. Embodiments of the present invention realize the detection of moving objects (such as airborne objects) by combining multiple images captured by an onboard camera with the UAV's inertial data to generate an image sequence with a motion-compensated background, from which moving objects can be detected. As used herein, the term "camera" is a generic term for any device that captures images in any part of the spectrum to which the camera is sensitive and projects the observed space onto a two-dimensional plane. For example, as the term is used herein, a camera may be sensitive to all or part of the spectrum visible to humans, or alternatively to higher or lower frequencies. The term as used herein further includes devices that project the observed space onto a two-dimensional Riemannian manifold based on forms of energy other than light.
Fig. 1 is a block diagram of a detection system 100 for airborne objects according to an embodiment of the invention. The system 100 includes an imaging camera 110, a navigation unit 112 including at least an inertial measurement unit (IMU) 115, and a computer system 120 that performs, as described below, the analysis of the information obtained by the imaging camera 110 and the navigation unit 112. In one embodiment, the system 100 is integrated into an unmanned aerial vehicle (UAV) such as the one shown at 105.
In operation, the imaging camera 110 captures multiple image frames. For each frame captured, the IMU 115 captures inertial measurement data (that is, accelerometer and gyro data) or related navigation information (that is, inertial navigation system information) of the UAV 105 at the time the frame is taken. For the purposes of this specification, both will be referred to as "navigation information", but it should be understood that any kind of motion description can be used; the term is not intended as a limitation of scope. As shown in Fig. 2, generally at 150, for multiple image frames (each of frame 1 to frame n) there is associated navigation information. The computer system 120 uses the data from the IMU 115 to compensate for the movement of the imaging camera 110, generating what is referred to herein as a motion-compensated background image sequence. From the motion-compensated background image sequence, the computer system 120 can distinguish moving objects from static background objects.
Using the navigation information from the IMU 115, the computer system 120 calculates an appropriate transformation, for example in the form of a fundamental matrix. The fundamental matrix acts as a transformation matrix between any two image frames and can be calculated from the navigation information associated with the two frames. When applied to the first image frame, the fundamental matrix generates an image projection representing how the scene captured in the first image frame would appear from the camera's vantage point at the moment the second frame was captured, assuming that, compared with the baseline between the camera positions, all objects in the scene lie at apparent infinity. The fundamental matrix therefore represents how the camera 110 rotated between the capture of the first and second image frames. Embodiments of the present invention create the motion-compensated background image sequence by re-projecting one or more of the frames onto a selected one of the multiple captured image frames. In other embodiments, the transformation is calculated using quaternion mathematics, a transformation vector field, or other representations.
Fig. 3 provides an example 200 of the frame re-projection of one embodiment of the present invention for creating the motion-compensated background image sequence. The camera 110 captures a first image (F1), shown generally at 210, and the navigation unit 112 measures the navigation information (N1) associated with the point in time at which F1 was captured. The camera 110 then captures a second image (F2), shown generally at 212, while the associated navigation information (N2) is measured by the navigation unit 112. Using any applicable known method for calculating a fundamental matrix, the fundamental matrix FM1,2 (shown generally at 214) is calculated from F1, N1, F2 and N2. For example, in one embodiment the fundamental matrix is calculated as F = K^-T [t]_x R K^-1, where [·]_x denotes the cross-product (skew-symmetric) matrix, R is the rotation matrix, t is the translation vector, and K is the camera's intrinsic calibration matrix. One available reference discussing the calculation of fundamental matrices is Hartley, R. and Zisserman, A., Multiple View Geometry in Computer Vision, Cambridge University Press, 2000, which is incorporated herein by reference. FM1,2 applied to the first image F1 yields F1' (shown generally at 220), a re-projection of the scene of F1 as it would appear from the vantage point of the camera 110 when image frame F2 was captured, with all objects assumed to lie at apparent infinity. Combining the result F1' with F2 generates the motion-compensated background image sequence 222. In the motion-compensated background image sequence, the re-projection of any stationary body in F1' should overlap itself in F2: each static object at apparent infinity at a first position in the first image frame, after the re-projection of frame 1 using the fundamental matrix, is finally projected onto itself in the second image frame, assuming all background objects lie at apparent infinity. By contrast, a moving object, or an object closer than apparent infinity (shown generally at 230), instead appears at changed positions, shown generally at 235, in the motion-compensated background image sequence 222. The re-projection of the moving object in F1' does not overlap itself in F2: the moving object 230 in F1' appears at the position seen from the vantage point of F2, but as captured at the time of F1. The result is that F1' depicts the previous position of the moving object 230, while F2 depicts the object's most recent position; when F1' and F2 are overlaid, any object in motion appears twice. In one embodiment, an object appearing more than once in the motion-compensated background image sequence is identified by calculating the difference between F1' and F2 (for example by using an XOR function), as shown at 224. This reveals features that appear in one frame or the other but not both, as indicated at 236: features from each image frame that do not transform covariantly with the image and therefore are neither static nor located at apparent infinity.
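The two-frame scheme of Fig. 3 can be sketched numerically as follows. This is a minimal illustrative sketch, not the patented implementation: it assumes a calibrated pinhole camera with intrinsic matrix K, a navigation-derived rotation R and translation t between the two exposures, and grayscale images as numpy arrays. It also makes explicit one detail the text glosses over: the fundamental matrix F = K^-T [t]_x R K^-1 maps points to epipolar lines, while the pixel-to-pixel warp that sends background at apparent infinity onto itself is the rotation-only (infinite) homography H = K R K^-1. All names are illustrative.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_matrix(K, R, t):
    """F = K^-T [t]_x R K^-1, the formula quoted in the text (epipolar constraint)."""
    Kinv = np.linalg.inv(K)
    return Kinv.T @ skew(t) @ R @ Kinv

def infinite_homography(K, R):
    """Pixel-to-pixel warp valid for scene points at apparent infinity (rotation only)."""
    return K @ R @ np.linalg.inv(K)

def warp_to_frame2(img1, H, shape):
    """Re-project F1 into F2's viewpoint: inverse-map each output pixel through
    H^-1 and sample F1 (nearest neighbour, clipped at the image borders)."""
    h, w = shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous pixels
    src = Hinv @ pts
    src /= src[2]
    sx = np.clip(np.round(src[0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, h - 1)
    return img1[sy, sx].reshape(h, w)

def moving_object_mask(img1, img2, K, R, thresh=0.1):
    """Flag pixels where the re-projected F1' disagrees with F2.

    Static background at apparent infinity lands on itself; a mover (or an
    object closer than apparent infinity) appears in two places and survives
    the difference -- an absolute-difference stand-in for the XOR at 224."""
    H = infinite_homography(K, R)  # R: camera rotation from frame 1 to frame 2
    f1_warped = warp_to_frame2(img1, H, img2.shape).astype(float)
    return np.abs(f1_warped - img2.astype(float)) > thresh
```

For binary (thresholded) images the absolute difference reduces exactly to the XOR comparison described at 224.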
Fig. 3 illustrates the use of two images to identify objects in motion; as Fig. 4 shows, any number of images can be used. For example, as shown at 255, image frames (F1 to Fn) are captured together with their associated navigation information (N1 to Nn). Using frame Fn as the target frame to project onto, the fundamental matrix FM1,n is calculated, and the re-projection F'1,n is generated by applying FM1,n to F1. In the same way, frames F2...Fn-1 are re-projected into F'2,n...F'n-1,n: the fundamental matrices FM1,n to FMn-1,n each re-project one frame onto the selected target image frame Fn. Processing (shown generally at 265) combines F'1,n to F'n-1,n (shown generally at 260) with Fn (shown at 261) to generate the motion-compensated background image sequence 270. As described above, objects located at apparent infinity (that is, background objects) are projected onto themselves, whereas moving objects appear at multiple positions. Applying a bitwise subtraction or XOR function to the set of F'1,n to F'n-1,n and Fn images allows moving objects to be distinguished from static objects.
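A sketch of the n-frame extension of Fig. 4, reusing the helpers from the previous sketch. `rotations_to_last` is an assumed interface supplying the camera rotation from each frame i to the target frame Fn, derived from the navigation unit; it is not a name from the patent.

```python
def motion_compensated_stack(frames, rotations_to_last, K, thresh=0.1):
    """Re-project F1..Fn-1 onto Fn and count, per pixel, how many frames
    disagree with Fn.  Background at apparent infinity scores 0; a mover
    accumulates hits along its track (the bitwise subtraction/XOR above)."""
    target = frames[-1].astype(float)
    votes = np.zeros_like(target)
    for frame, R in zip(frames[:-1], rotations_to_last):  # R: frame i -> frame n
        warped = warp_to_frame2(frame, infinite_homography(K, R), target.shape)
        votes += (np.abs(warped.astype(float) - target) > thresh)
    return votes
```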
Once moving objects are identified, the computer system 120 estimates the (three-dimensional) position of one or more of the moving objects from the captured images by known methods, such as but not limited to generalized 3D re-projection or structure from motion, which are familiar to those of ordinary skill in the art. That is, the computer system 120 not only identifies that an airborne moving object represented by certain pixels exists in the image, but also determines where the moving object is located in three-dimensional space. The structure-from-motion technique combines knowledge of the camera's trajectory with the sequence of the detected object in the camera images and calculates the object's position in three-dimensional space in a manner similar to stereo reconstruction, using the relative distance between camera positions to calculate the depth of the object. In one embodiment of the present invention, the computer system 120 obtains depth information from a single camera through images captured at different times and positions, using knowledge of the UAV's trajectory information. This depth information is applied to the specific airborne objects identified above to determine the relative position of the moving objects (that is, relative to the UAV's local reference frame), for collision avoidance or separation assurance.
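The depth recovery described here can be illustrated with midpoint triangulation: two bearings to the same object, taken from two points on the UAV's trajectory, are intersected, with the navigation-derived displacement between the camera centres playing the role of the stereo baseline. A sketch under the assumption that the object is approximately stationary between the two exposures; names are illustrative.

```python
def triangulate_midpoint(K, R1, c1, px1, R2, c2, px2):
    """3-D fix of one object seen at pixel px_i = (x, y) in two frames.

    R_i, c_i are the camera-to-world rotation and camera centre from the
    navigation solution; the baseline c2 - c1 comes from the UAV's own
    motion.  Degenerate when the rays are parallel (no baseline) -- the
    collision-course case that motivates the trajectory excitation below."""
    Kinv = np.linalg.inv(K)
    d1 = R1 @ (Kinv @ np.array([px1[0], px1[1], 1.0]))
    d2 = R2 @ (Kinv @ np.array([px2[0], px2[1], 1.0]))
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Least-squares ray parameters s, u minimising |(c1 + s d1) - (c2 + u d2)|
    A = np.stack([d1, -d2], axis=1)
    s, u = np.linalg.lstsq(A, np.asarray(c2) - np.asarray(c1), rcond=None)[0]
    return 0.5 * ((c1 + s * d1) + (c2 + u * d2))  # midpoint of closest approach
```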
In one embodiment, the navigation unit 112 optionally includes a global navigation satellite system (GNSS) receiver 117 coupled to the computer system 120 to further augment the trajectory information available for determining the positions of detected objects. By including GNSS-enhanced trajectory information identifying the UAV's track with respect to a selected reference frame (for example, world coordinates), the computer system 120 can reference the motion of identified airborne objects either to the UAV's local reference frame or to the navigation frame. The GNSS receiver 117 augments the navigation information available to the computer system 120 by providing the UAV's absolute position relative to the navigation frame. As those of ordinary skill in the art will understand after reading this description, navigation systems other than GNSS can be used to provide this information. Accordingly, in one embodiment, the navigation unit 112 includes one or more other navigation sensors 119 to augment the trajectory information available for determining the positions of detected moving objects. In addition, other motion estimation techniques can be used to supplement the results of the navigation unit 112 and improve the accuracy of the solution provided by the computer system 120.
In one embodiment, the computer system 120 provides its solution in the form of a state vector for each detected moving object, describing at least its estimated position. When the navigation unit 112 provides satellite-based navigation information, the state vector calculated by the computer system 120 can be referenced to the global navigation frame. In one embodiment, one or more state vectors are supplied to a guidance/flight-control computer 125 coupled to the computer system 120. Based on the information about the detected moving objects provided by the state vectors, the guidance/flight-control computer 125 can initiate evasive or mitigating adjustments to the UAV's flight. Optionally, the flight controller 125 can also send an alert message to a ground station or to other UAVs based on the detected moving objects. In one embodiment, the computer system 120 is further programmed to track the detected airborne objects and provide additional state estimates, such as the velocity vector, time to collision, or other states. In another embodiment, the track of a detected airborne object is further extrapolated to estimate the probability of collision.
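As one concrete form the state vector and its estimator might take, the sketch below uses a plain linear Kalman filter with a constant-velocity model over [position, velocity] -- a simplified stand-in for the particle, extended or unscented Kalman filters named in the text, with placeholder noise parameters.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal linear Kalman filter over the state [x y z vx vy vz]^T.

    Measurements z are the triangulated 3-D positions of one detected object;
    p0, q, r are assumed tuning values, not figures from the patent."""
    def __init__(self, x0, p0=100.0, q=1.0, r=4.0):
        self.x = np.asarray(x0, dtype=float)        # state estimate
        self.P = np.eye(6) * p0                     # state covariance
        self.q, self.r = q, r

    def step(self, z, dt):
        F = np.eye(6); F[:3, 3:] = np.eye(3) * dt   # constant-velocity model
        Q = np.eye(6) * self.q * dt
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        R = np.eye(3) * self.r
        # Predict
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q
        # Update
        S = H @ self.P @ H.T + R
        Kg = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + Kg @ (np.asarray(z) - H @ self.x)
        self.P = (np.eye(6) - Kg @ H) @ self.P
        # Position and velocity together support derived states such as
        # time-to-collision ~ range / closing speed.
        return self.x
```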
As those of ordinary skill in the art will understand after reading this description, one of the greatest challenges for airborne object detection schemes that rely on motion detection is detecting an object on a direct collision course with the observer. When an airborne object flies at constant speed on a collision course, its viewing angle as seen from the flying vehicle (and likewise from the airborne object's perspective) does not change. The object's position in the motion-compensated background image sequence therefore remains the same, and it appears to be a fixed background object rather than a moving aerial object. The clue that the motion-compensated background image sequence does provide is that the size of the object in the image grows with each successive image; however, this change in size may not be detectable early enough for action to be taken.
To address the detection of objects on a direct collision course, one embodiment of the present invention introduces an artificial trajectory excitation into the UAV flight path, realizing slight differences in the observation perspective of consecutive images taken by the camera. This excitation can include, for example, using the natural modes of the aircraft to move, changing the speed of the UAV, or perturbing the linearity of the UAV's flight path. The differing observation viewpoints make it possible to establish a baseline (a distance perpendicular to the UAV's axis of motion), which is required for estimating the distance to the object. In one embodiment, the flight computer 125 periodically introduces such deviations into the UAV flight path so that the computer system 120 can discover objects potentially on a collision course with the UAV. In one embodiment, once a potential collision-course object has been identified, the frequency, amplitude and direction of the deviations are increased so as to establish a better baseline in the corresponding direction.
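A minimal sketch of such an excitation: a periodic lateral offset command perpendicular to the velocity axis. Period and amplitude are placeholder values which, per the text, an implementation would increase once a potential collision-course object is suspected.

```python
import math

def excitation_offset(t, period_s=4.0, amplitude_m=2.0):
    """Lateral flight-path excitation (metres) at time t (seconds).

    The weave gives consecutive camera exposures slightly different vantage
    points, creating the baseline needed to range an object on a constant-
    bearing collision course."""
    return amplitude_m * math.sin(2.0 * math.pi * t / period_s)
```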
In some embodiments, the UAV further includes a steerable laser range finder 111. In such an embodiment, once a potential collision threat is recognized, the UAV can use the steerable laser range finder to verify the presence of the detected threat and measure the distance to the detected object. In other embodiments, objects detected using the described method are fused with detections from other families of sensors (radar, transponders, and so on) to increase the integrity of the solution by exploiting complementary characteristics. In embodiments, the sensor fusion is based respectively on an extended Kalman filter, an unscented Kalman filter or a particle filter; the same filter is then used as the estimator, providing an extended motion model for the detected objects.
The foregoing description provides an idealized example in which background objects are assumed to lie at apparent infinity. This is not always a valid assumption: for example, when a UAV flies close to the ground, the image of the ground captured by the camera cannot be regarded as lying at apparent infinity, and this deviation from the ideal situation is introduced into the motion-compensated background image sequence. Compared with the motion of the detected aerial objects of interest, this dispersion is of negligible degree and, as long as it does not exceed a predetermined threshold based on the UAV's mission, it can simply be ignored. Alternatively, a background closer than apparent infinity can be handled in one of several ways.
In one embodiment, the computer system 120 divides each captured image frame into smaller segments and uses the acquired visual content, aided by the navigation information associated with the images, to refine the motion of each segment. When the UAV is close to the ground and the camera moves between successive frames, the background can be treated as a close moving object. Processing local patches of adjacent image segments, the computer system 120 uses the trajectory information (direction of translation and rotation) to determine the direction in which the background (such as the ground) moves within each segment. Features captured in the ground section of the image frame that appear to move at the same speed and in the same direction as the ground in that patch are considered part of the background rather than airborne objects.
Alternatively, in one embodiment, the computer system 120 dynamically changes the frame rate of the images captured by the camera 110 and/or discards some frames. For example, in one embodiment, the computer system 120 adjusts the image frame rate of the camera 110 based on the speed of the UAV (for example, as known from the information provided by the INS 115) so that the background appears at apparent infinity. Any object closer than the background is then not at infinity and can therefore be visible when re-projected into the motion-compensated background image sequence. In one embodiment, the amount of clutter in the motion-compensated background image sequence relative to the primary image frame serves as the basis for increasing or decreasing the image frame rate. For example, given two image frames and their associated navigation information, the fundamental matrix is calculated; from the fundamental matrix, the re-projection of the first image is generated; the difference between the re-projection of the first image and the second image is then found (for example by subtraction or XOR). From the amount of identifiable clutter in this difference it is determined whether the sampling period is short enough, or whether the background cannot be regarded as lying at apparent infinity. In another alternative embodiment, other schemes known to those of ordinary skill in the art, such as optical flow compensation, can be used to compensate for static background objects in the captured image frames that are closer than apparent infinity.
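One plausible rule for this frame-rate adjustment, sketched under the assumption that "background at apparent infinity" means the camera baseline between frames stays below a small fraction of the nearest background distance; the ratio is an assumed tuning constant, not a value from the patent.

```python
def frame_interval_s(speed_mps, min_background_range_m, baseline_ratio=0.005):
    """Inter-frame time such that baseline = speed * dt <= ratio * background
    range, keeping static background effectively at apparent infinity for the
    re-projection step (anything nearer then survives the difference image)."""
    return (baseline_ratio * min_background_range_m) / max(speed_mps, 1e-6)
```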
In one embodiment, when multiple UAVs fly in a coordinated fashion (for example, in formation or in loose coordination), two or more UAVs can share information via a wireless data link and use the shared information to compute targets. Fig. 5 shows such an embodiment of the present invention, comprising multiple UAVs (shown generally at 305). In one embodiment, each of the UAVs 305 is equipped with a system for detecting aerial objects, such as the system 100 of Fig. 1. For example, in one embodiment a first UAV 310 identifies an airborne object 320 and assigns a unique identifier to that object 320. UAV 310 then calculates the object's state vector with reference to the global navigation frame. A second UAV 315 on a collision course with the airborne object 320 can wirelessly import (shown generally via data link 325) the state vector available from the first UAV and use it to calculate its own distance to the object 320. Alternatively, in another embodiment, the second UAV 315 can import the raw image data captured by the first UAV 310 together with the navigation information associated with that image data, calculate its own state vector for the object 320, and thereby determine its own distance to the object 320.
Several means discussed in this specification can be used to realize the systems and methods of the present invention. These include, but are not limited to, digital computing systems, microprocessors, general-purpose computers, programmable controllers and field-programmable gate arrays (FPGAs). For example, in one embodiment the computer system 120 is realized by an FPGA, an ASIC or an embedded processor. Other embodiments of the present invention therefore reside as program instructions on computer-readable media which, when implemented in this way, realize embodiments of the present invention. Computer-readable media include any form of physical computer storage device. Examples of such physical computer storage devices include, but are not limited to, punch cards, magnetic disks or tapes, optical data storage systems, flash read-only memory (ROM), non-volatile ROM, programmable ROM (PROM), erasable programmable ROM (EPROM), random access memory (RAM), or any other form of permanent, semi-permanent or temporary memory storage system or device. Program instructions include, but are not limited to, computer-executable instructions executed by a computer system processor and hardware description languages such as the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL).
Fig. 6 is a flowchart of the image-based moving object detection method stored on a computer-readable medium device. The method begins at 410 by capturing two or more images of the scene around the self-guided UAV. The method proceeds to 420, measuring, with inertial sensors, the navigation information associated with the two or more images. The method uses the navigation information associated with the two or more images to calculate a first transformation (for example, a fundamental matrix) between the first image frame and the second image frame of the two or more image frames, proceeding to 430. The fundamental matrix acts as the transformation between any two image frames and is calculated from the navigation information associated with the two frames. When applied to the first image frame, the fundamental matrix generates an image projection representing how the scene captured in the first image frame, with its objects assumed to lie at apparent infinity, would appear from the camera's angle at the point in time of the second image frame. The fundamental matrix therefore represents how the camera 110 rotated between the capture of the first and second image frames. The method proceeds to 440, generating the motion-compensated background image sequence by applying the first transformation to re-project the first image frame onto the second image frame. The method proceeds to 450 by detecting moving objects from the motion-compensated background image sequence. In the motion-compensated background image sequence, any static object located at apparent infinity in the first image frame will, when re-projected onto the second image frame, overlap itself: each static object at apparent infinity at a first position in the first image frame is, after the re-projection of the first image frame using the fundamental matrix, finally projected onto itself in the second image frame of the motion-compensated background image sequence. In contrast, moving objects, or objects closer than apparent infinity, appear at multiple positions. Based on the motion-compensated background image sequence and the navigation information, the method proceeds to 460, estimating position information for the moving objects around the UAV. In one embodiment, generalized 3D re-projection or structure from motion combines knowledge of the camera's trajectory with the sequence of the detected object in the camera images, calculating the object's position in three-dimensional space in a manner similar to stereo vision and using the relative distance between camera positions to calculate the object's depth. In one embodiment of the invention, estimating the position includes obtaining depth information from a single camera through images captured at different times and positions, using knowledge of the self-guided UAV's trajectory information. To prevent a conflict, this depth information is applied to the specific moving objects identified above to determine their positions (that is, relative to the local reference frame or the navigation frame). Accordingly, in one embodiment, the method proceeds to 470, changing the flight route based on the position information.
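The Fig. 6 loop can be wired together from the earlier sketches roughly as follows. This is illustrative glue, not the patented implementation: `nav` is a hypothetical interface supplying per-frame camera poses (R, c), the frame-to-target rotations and the frame spacing dt, and the detection step is reduced to a crude pick of flagged pixels.

```python
def detect_and_avoid_step(frames, nav, K, tracker):
    """One pass over steps 410-470: frames and navigation are assumed already
    captured (410-420); returns the tracked object's state vector, or None."""
    votes = motion_compensated_stack(frames, nav.rotations_to_last(), K)  # 430-440
    if votes.max() == 0:
        return None                                       # 450: nothing moving
    ys, xs = np.nonzero(votes)                            # crude: all flagged pixels
    px_first, px_last = (xs[0], ys[0]), (xs[-1], ys[-1])  # ends of the mover's smear
    R1, c1 = nav.pose(0)
    Rn, cn = nav.pose(-1)
    pos = triangulate_midpoint(K, R1, c1, px_first, Rn, cn, px_last)  # 460: 3-D fix
    state = tracker.step(pos, dt=nav.dt * (len(frames) - 1))          # track + velocity
    return state  # 470: hand the state vector to guidance/flight control
```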
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention shall all be included in the protection scope of the present invention.

Claims (8)

1. A system for monocular airborne target detection, characterized in that the system for monocular airborne target detection comprises:
a navigation unit, including at least one inertial measurement unit, a global navigation satellite system receiver, and other navigation systems;
an imaging camera for capturing two or more images of the scene around the UAV;
the imaging camera and navigation unit being connected to a computer system; the computer system executes the aerial target detection algorithm and computation and, using the navigation information associated with each of two or more image frames, computes a transformation between the frames to generate a motion-compensated background image; moving objects are detected from the motion-compensated background image sequence;
a laser range finder, connected to the computer system, for providing range measurements to one or more detected objects;
a flight control system, connected to the computer system, for realizing control of the aircraft's flight mode, speed, altitude and attitude.
2. The system for monocular airborne target detection according to claim 1, characterized in that the computer system detects moving objects from the motion-compensated background image sequence by detecting differences between overlapping parts of the sequence; and a guidance or flight control system is coupled to the computer system, wherein the guidance or flight control system adjusts the trajectory based on information about the moving objects detected from the motion-compensated background image sequence.
3. The system for monocular airborne target detection according to claim 1, characterized in that the computer system outputs a state vector identifying each moving target detected in the motion-compensated background image sequence, wherein the state vector describes at least an estimated position.
4. The system for monocular airborne target detection according to claim 2, characterized in that at least one of a particle filter, an extended Kalman filter or an unscented Kalman filter is used when estimating the state vector.
5. The system for monocular airborne target detection according to claim 2, characterized in that the computer uses at least one of a particle filter, an extended Kalman filter or an unscented Kalman filter to track the one or more moving objects.
6. A method for monocular airborne target detection using the system for monocular airborne target detection according to claim 1, characterized in that the method for monocular airborne target detection comprises:
capturing two or more images of the surrounding scene; measuring, with inertial sensors, the navigation information associated with the two or more images; computing, with the computer system and using the navigation information associated with the two or more images, a first transformation between the first frame and the second image frame of the two or more image frames; and generating a motion-compensated image sequence by applying the first fundamental matrix to the first frame image to re-project it onto the second frame image;
the computer system executing an airborne object detection processing algorithm and, using the navigation information associated with each of the two or more image frames, calculating the transformation between the two or more image frames captured by the imaging camera to generate the motion-compensated background image sequence; and the computer system detecting moving objects from the motion-compensated background image sequence.
7. The method for monocular airborne target detection according to claim 6, characterized in that the method for monocular airborne target detection further comprises: introducing an artificial trajectory excitation during flight.
8. The method for monocular airborne target detection according to claim 6, characterized in that the method for monocular airborne target detection further comprises: tracking the detected aerial targets; and providing estimates of the velocity vector and time to collision of the detected aerial targets.
CN201810322623.9A 2018-04-11 2018-04-11 System and method for monocular airborne target detection Pending CN108364304A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810322623.9A CN108364304A (en) 2018-04-11 2018-04-11 System and method for monocular airborne target detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810322623.9A CN108364304A (en) 2018-04-11 2018-04-11 System and method for monocular airborne target detection

Publications (1)

Publication Number Publication Date
CN108364304A true CN108364304A (en) 2018-08-03

Family

ID=63008093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810322623.9A Pending CN108364304A (en) 2018-04-11 2018-04-11 System and method for monocular airborne target detection

Country Status (1)

Country Link
CN (1) CN108364304A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110178658A1 (en) * 2010-01-20 2011-07-21 Honeywell International Inc. Systems and methods for monocular airborne object detection
CN102147462A (en) * 2010-02-09 2011-08-10 中国科学院电子学研究所 System and method for realizing motion compensation of UAV (unmanned aerial vehicle)-borne synthetic aperture radar
CN103149939A (en) * 2013-02-26 2013-06-12 北京航空航天大学 Dynamic target tracking and positioning method of unmanned plane based on vision
CN106052584A (en) * 2016-05-24 2016-10-26 上海工程技术大学 Track space linear shape measurement method based on visual and inertia information fusion

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112567201B (en) * 2018-08-21 2024-04-16 深圳市大疆创新科技有限公司 Distance measuring method and device
CN112639803B (en) * 2018-08-21 2024-06-11 西门子能源环球有限责任两合公司 Method and assembly for identifying objects at a facility
US11989870B2 (en) 2018-08-21 2024-05-21 Siemens Energy Global GmbH & Co. KG Method and assembly for detecting objects on systems
CN112567201A (en) * 2018-08-21 2021-03-26 深圳市大疆创新科技有限公司 Distance measuring method and apparatus
CN112639803A (en) * 2018-08-21 2021-04-09 西门子能源环球有限责任两合公司 Method and assembly for identifying objects at a facility
WO2020221307A1 (en) * 2019-04-29 2020-11-05 华为技术有限公司 Method and device for tracking moving object
CN110174093B (en) * 2019-05-05 2022-10-28 腾讯科技(深圳)有限公司 Positioning method, device, equipment and computer readable storage medium
CN110174093A (en) * 2019-05-05 2019-08-27 腾讯科技(深圳)有限公司 Localization method, device, equipment and computer readable storage medium
CN110320508B (en) * 2019-07-04 2022-11-29 上海航天测控通信研究所 Method for simulating high dynamic target characteristics by airborne responder
CN110320508A (en) * 2019-07-04 2019-10-11 上海航天测控通信研究所 A kind of analogy method of airborne answering machine to high dynamic target property
TWI757700B (en) * 2020-03-05 2022-03-11 國立政治大學 Aircraft route following method
CN111610160A (en) * 2020-06-02 2020-09-01 新疆天链遥感科技有限公司 System for detecting concentration of volatile organic compound through remote sensing satellite
CN112906777A (en) * 2021-02-05 2021-06-04 北京邮电大学 Target detection method and device, electronic equipment and storage medium
CN114463504A (en) * 2022-01-25 2022-05-10 清华大学 Monocular camera-based roadside linear element reconstruction method, system and storage medium

Similar Documents

Publication Publication Date Title
CN108364304A (en) System and method for monocular airborne target detection
EP3321888B1 (en) Projected image generation method and device, and method for mapping image pixels and depth values
CN111156998B (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
Matthies et al. Stereo vision-based obstacle avoidance for micro air vehicles using disparity space
US9097532B2 (en) Systems and methods for monocular airborne object detection
US11906983B2 (en) System and method for tracking targets
CN108406731A (en) A kind of positioning device, method and robot based on deep vision
WO2018163898A1 (en) Free viewpoint movement display device
CN109540126A (en) A kind of inertia visual combination air navigation aid based on optical flow method
CN112567201A (en) Distance measuring method and apparatus
Strydom et al. Visual odometry: autonomous uav navigation using optic flow and stereo
US10527423B1 (en) Fusion of vision and depth sensors for navigation in complex environments
JP2012118666A (en) Three-dimensional map automatic generation device
CN208323361U (en) A kind of positioning device and robot based on deep vision
JP6880822B2 (en) Equipment, mobile equipment and methods
JP2020126612A (en) Method and apparatus for providing advanced pedestrian assistance system for protecting pedestrian using smartphone
US20210311195A1 (en) Vision-cued random-access lidar system and method for localization and navigation
Khan et al. Ego-motion estimation concepts, algorithms and challenges: an overview
El-Hakim et al. A mobile system for indoors 3-D mapping and positioning
Veth et al. Two-dimensional stochastic projections for tight integration of optical and inertial sensors for navigation
CN208314856U (en) System for monocular airborne target detection
Bhanu et al. Inertial navigation sensor integrated motion analysis for obstacle detection
CN100582653C (en) System and method for determining position posture adopting multi- bundle light
Klavins et al. Unmanned aerial vehicle movement trajectory detection in open environment
Aminzadeh et al. Implementation and performance evaluation of optical flow navigation system under specific conditions for a flying robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination