WO2021195886A1 - Distance determination method, movable platform, and computer-readable storage medium - Google Patents

Distance determination method, movable platform, and computer-readable storage medium

Info

Publication number
WO2021195886A1
WO2021195886A1 · PCT/CN2020/082199 · CN2020082199W
Authority
WO
WIPO (PCT)
Prior art keywords
distance
image
target
target object
movable platform
Prior art date
Application number
PCT/CN2020/082199
Other languages
English (en)
French (fr)
Inventor
刘洁
周游
覃政科
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to PCT/CN2020/082199 priority Critical patent/WO2021195886A1/zh
Priority to CN202080005139.9A priority patent/CN112771575A/zh
Publication of WO2021195886A1 publication Critical patent/WO2021195886A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/10Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/269Analysis of motion using gradient-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Definitions

  • This application relates to the technical field of distance measurement, in particular to a distance determination method, a movable platform, and a computer-readable storage medium.
  • the mobile platform is equipped with a Time Of Flight (TOF) ranging device.
  • The TOF ranging device emits light pulses from its transmitter toward the object; by measuring the time taken for a light pulse to travel from the transmitter to the object and return to the receiver, the distance between the movable platform and the target object can be determined.
  • The distance measured by the TOF ranging device is generally accurate, but in scenes with highly reflective objects, such as road signs, the TOF ranging device experiences periodic aliasing, so the measured distance becomes inaccurate. This can affect the movable platform, which may, for example, rely on the TOF-measured distance to avoid obstacles.
  • With an inaccurate distance, the movable platform cannot avoid obstacles reliably and its safety cannot be guaranteed. Therefore, how to accurately measure the distance between the movable platform and an object is a problem that urgently needs to be solved.
  • the present application provides a method for determining a distance, a movable platform, and a computer-readable storage medium, aiming to accurately measure the distance between the movable platform and an object.
  • The present application provides a method for determining distance, which is applied to a movable platform, wherein the movable platform includes a vision sensor and a TOF ranging device, and the TOF ranging device includes a transmitting device for emitting light signals and a receiving device for receiving light signals reflected by a target object.
  • the method includes:
  • the target distance between the movable platform and the target object is determined according to the plurality of second distances and the first distance.
  • the present application also provides a movable platform including a vision sensor, a TOF distance measuring device, a memory, and a processor, and the processor is connected to the vision sensor and the TOF distance measuring device;
  • the TOF distance measuring device includes a transmitting device for transmitting light signals and a receiving device for receiving light signals reflected by a target object;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
  • the target distance between the movable platform and the target object is determined according to the plurality of second distances and the first distance.
  • The present application also provides a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, the processor implements the steps of any of the distance determination methods provided in the specification of this application.
  • the embodiment of the present application provides a method for determining a distance, a movable platform, and a computer-readable storage medium.
  • The first distance between the movable platform and a target object is determined by the TOF ranging device, and the second distances between the movable platform and multiple spatial points on the target object are determined from the first image and the second image of the target object output by the vision sensor; the target distance between the movable platform and the target object is then determined according to the multiple second distances and the first distance. Since the ranging results of both the TOF ranging device and the vision sensor are comprehensively considered, the distance between the movable platform and the object can be measured accurately.
  • FIG. 1 is a schematic structural diagram of a movable platform provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of steps of a method for determining a distance provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a scene in which the TOF distance measuring device and the vision sensor in an embodiment of the present application measure the distance between the target object and the movable platform;
  • FIG. 4 is a schematic flowchart of sub-steps of the distance determination method in FIG. 1;
  • Fig. 5 is a schematic flowchart of sub-steps of the distance determination method in Fig. 1;
  • Fig. 6 is a schematic block diagram of the structure of an unmanned aerial vehicle provided by an embodiment of the present application.
  • This application provides a distance determination method, a movable platform, and a computer-readable storage medium.
  • the distance determination method is applied to a movable platform.
  • The movable platform 100 includes a vision sensor 110 and a Time of Flight (TOF) ranging device 120.
  • The TOF ranging device 120 includes a transmitting device for emitting light signals and a receiving device for receiving light signals reflected by the target object.
  • The vision sensor 110 may be a monocular vision device or a binocular vision device.
  • The target object is the spot area formed where the light signal emitted by the transmitting device of the TOF ranging device strikes the object.
  • The first distance between the movable platform and the target object (on the object) can be measured by the TOF ranging device 120.
  • From the two images of the target object output by the vision sensor 110, the second distances between the movable platform and the multiple spatial points on the target object can be determined, and from the first distance and the multiple second distances, the distance between the movable platform and the target object (on the object) can be accurately determined.
  • movable platforms include drones, mobile robots, and pan-tilt vehicles.
  • Unmanned aerial vehicles include rotary-wing UAVs, such as quad-rotor, hexa-rotor, and octo-rotor UAVs, as well as fixed-wing UAVs and combined rotary-wing/fixed-wing UAVs, which are not limited here.
  • FIG. 2 is a schematic flowchart of steps of a method for determining a distance according to an embodiment of the present application. Specifically, as shown in FIG. 2, the distance determination method includes step S101 to step S104.
  • Step S101 Obtain a first distance between the movable platform and the target object collected by the TOF distance measuring device.
  • the TOF distance measuring device includes a transmitting device for transmitting light signals and a receiving device for receiving the light signals reflected by the target object.
  • The light source of the transmitting device is an infrared light source. The transmitting device of the TOF ranging device emits a light signal and records the emission time point. When the emitted light signal meets an object, it forms a light spot on the surface of the object, thereby producing the target object on the object, and the light signal is reflected by the target object.
  • The receiving device of the TOF ranging device receives the light signal reflected by the target object and records the reception time point.
  • From the emission time point and the reception time point, the flight time of the light signal between the movable platform and the target object can be calculated, and the first distance between the movable platform and the target object can then be calculated according to the flight time and the speed of light.
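  • The round-trip calculation described above can be sketched as follows; the function name and the timestamp values are illustrative assumptions, not part of the application:

```python
# Sketch of the round-trip TOF distance calculation: the light travels to
# the target and back, so the one-way distance is half of c * flight_time.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_first_distance(emit_time_s: float, receive_time_s: float) -> float:
    """First distance from the emission and reception timestamps (seconds)."""
    flight_time = receive_time_s - emit_time_s
    if flight_time < 0:
        raise ValueError("reception must not precede emission")
    return SPEED_OF_LIGHT * flight_time / 2.0

# A round trip of about 66.7 ns corresponds to roughly 10 m.
d = tof_first_distance(0.0, 66.7e-9)
```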
  • the target object is the light spot area formed by the light signal emitted by the transmitter of the TOF distance measuring device irradiating the object.
  • Step S102 Acquire a first image and a second image of the target object output by the vision sensor.
  • The first image includes a first target image area, which is the area where the target object is projected onto the first image; the second image includes a second target image area, which is the area where the target object is projected onto the second image.
  • The position of the center pixel of the first target image area in the first image and the position of the center pixel of the second target image area in the second image are determined according to the installation position relationship between the vision sensor and the TOF ranging device, while the position of the first target image area in the first image and the position of the second target image area in the second image are determined according to that installation position relationship together with the first distance. According to the installation position relationship between the vision sensor and the TOF ranging device, the rotation matrix and the displacement matrix between the vision sensor and the TOF ranging device can be determined.
  • The vision sensor includes either a monocular vision device or a binocular vision device. If the vision sensor is a monocular vision device, the first image and the second image are captured a preset time apart; if the vision sensor is a binocular vision device, the first image is the image output by the first camera in the binocular vision device, and the second image is the image output by the second camera in the binocular vision device.
  • Obtain the internal parameter matrix of the first camera in the binocular vision device and the rotation matrix and displacement matrix between the first camera and the TOF ranging device; from these, determine the position point of the central spatial point of the target object under the angle of view of the first camera; mark the position point in the preset image of the first camera, and determine the three-dimensional position coordinates of the multiple spatial points on the target object according to the first distance; according to the three-dimensional position coordinates of the multiple spatial points, project the multiple spatial points on the target object into the preset image marked with the position point, to obtain the first image containing the first target image area.
  • Similarly, obtain the internal parameter matrix of the second camera in the binocular vision device and the rotation matrix and displacement matrix between the second camera and the TOF ranging device; according to the internal parameter matrix of the second camera and the rotation matrix and displacement matrix between the second camera and the TOF ranging device, determine the position point of the central spatial point of the target object under the angle of view of the second camera; mark the position point in the preset image of the second camera, and determine the three-dimensional position coordinates of the multiple spatial points on the target object according to the first distance; according to the three-dimensional position coordinates of the multiple spatial points, project the multiple spatial points on the target object into the preset image marked with the position point, to obtain the second image containing the second target image area.
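  • The projection of spatial points into a camera image described in the steps above can be sketched with a standard pinhole model; the intrinsic matrix K, rotation matrix R, and displacement t below are made-up example values, not parameters from the application:

```python
import numpy as np

# Hypothetical calibration: intrinsic matrix K of one camera, plus the
# rotation R and displacement t from the TOF device frame to the camera frame.
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                    # assume aligned axes for this sketch
t = np.array([0.05, 0.0, 0.0])   # 5 cm offset between TOF device and camera

def project_points(points_tof: np.ndarray) -> np.ndarray:
    """Project Nx3 points given in the TOF device frame into pixel coordinates."""
    cam = points_tof @ R.T + t    # transform into the camera frame
    uv = cam @ K.T                # apply the internal parameter matrix
    return uv[:, :2] / uv[:, 2:3] # perspective division

# A spatial point 10 m straight ahead of the TOF device:
pix = project_points(np.array([[0.0, 0.0, 10.0]]))
```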
  • The position point of the central spatial point of the target object under the angle of view of the first camera can be determined in advance, and marked in the preset image of the first camera to obtain a first marked image.
  • Similarly, according to the internal parameter matrix of the second camera and the rotation matrix and displacement matrix between the second camera and the TOF ranging device, the position point of the central spatial point of the target object under the angle of view of the second camera can be determined in advance and marked in the preset image of the second camera to obtain a second marked image; the first marked image and the second marked image are then stored in the memory of the movable platform.
  • In this way the first marked image and the second marked image can be obtained directly, with no real-time determination needed, which reduces the amount of calculation on the movable platform.
  • the method of determining the three-dimensional position coordinates of the multiple spatial points on the target object according to the first distance is specifically: obtaining the effective field of view of the light source of the TOF distance measuring device, and according to the effective field of view of the light source and The first distance is to determine the three-dimensional position coordinates of the multiple spatial points on the target object relative to the TOF distance measuring device.
  • the effective field of view of the light source of the TOF distance measuring device is determined according to the aperture of the light source emission port of the TOF distance measuring device.
  • the light signal emitted by the light source is emitted to the outside through the light source emission port.
  • The larger the aperture of the light source emission port, the larger the effective field angle of the light source; the smaller the aperture of the light source emission port, the smaller the effective field angle of the light source.
  • For example, the effective field angle of the light source is 10°.
  • The light signal emitted by the transmitting device 121 of the TOF ranging device 120 irradiates the object Q and forms a circular spot area, that is, the target object P, and the receiving device 122 of the TOF ranging device 120 receives the light signal reflected by the target object. According to the installation position relationship between the first camera 111 and the TOF ranging device, the internal parameter matrix, and the first distance, the theoretical area P1 of the target object projected in the first image 10 can be determined; according to the installation position relationship between the second camera 112 and the TOF ranging device, the internal parameter matrix, and the first distance, the theoretical area P2 of the target object projected in the second image 20 can be determined.
  • the shape of the light spot area may be a circle, an ellipse or a rectangle, which is not specifically limited in this application.
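  • The three-dimensional coordinates of points on the light spot can be recovered from the effective field angle and the first distance roughly as follows; the circular spot shape and the sampling of rim points are simplifying assumptions for this sketch:

```python
import math

def spot_points(first_distance: float, fov_deg: float = 10.0, n: int = 8):
    """Approximate 3D coordinates (in the TOF device frame) of points on the
    rim of the circular light spot at depth `first_distance`.

    The spot radius follows from the effective field angle of the light
    source: r = d * tan(fov / 2).
    """
    r = first_distance * math.tan(math.radians(fov_deg) / 2.0)
    pts = []
    for k in range(n):
        a = 2.0 * math.pi * k / n
        pts.append((r * math.cos(a), r * math.sin(a), first_distance))
    return pts

pts = spot_points(10.0)  # 10 m away, 10 degree effective field angle
```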
  • Step S103 Determine a second distance between the movable platform and the multiple spatial points on the target object according to the first image and the second image.
  • Specifically, the depth values between the movable platform and the multiple spatial points on the target object, that is, the second distances between the movable platform and the multiple spatial points on the target object, can be determined according to the first image and the second image.
  • step S103 specifically includes: sub-steps S1031 to S1032.
  • The first feature points corresponding to the multiple spatial points on the target object are extracted from the first image based on a preset feature point extraction algorithm, and the second feature points matching the first feature points are determined from the second image based on a preset feature point tracking algorithm, yielding the feature point matching pairs corresponding to the multiple spatial points on the target object; alternatively, the first feature points corresponding to the multiple spatial points on the target object can be extracted from the second image based on the preset feature point extraction algorithm, and the second feature points matching the first feature points determined from the first image based on the preset feature point tracking algorithm, again yielding the feature point matching pairs corresponding to the multiple spatial points on the target object.
  • The preset feature point extraction algorithm includes at least one of the following: the Harris corner detection algorithm, the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, and the FAST (Features from Accelerated Segment Test) feature point detection algorithm; the preset feature point tracking algorithm includes, but is not limited to, the KLT (Kanade-Lucas-Tomasi feature tracker) algorithm.
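  • As a toy illustration of the first listed extraction algorithm, a minimal Harris corner response can be computed as follows; the window size and the constant k are conventional example values, and this sketch is not the application's implementation:

```python
import numpy as np

def harris_response(img: np.ndarray, k: float = 0.05, win: int = 3) -> np.ndarray:
    """Minimal Harris corner response R = det(M) - k * trace(M)^2, where M is
    the structure tensor summed over a (2*win+1)^2 window."""
    iy, ix = np.gradient(img.astype(float))   # image gradients
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):  # sum over the square window (wraps at borders, fine for a toy)
        out = np.zeros_like(a)
        for dy in range(-win, win + 1):
            for dx in range(-win, win + 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# Toy image: a bright square. Its corners give positive responses, straight
# edges give negative responses, and flat regions give zero.
img = np.zeros((40, 40))
img[15:30, 15:30] = 1.0
R = harris_response(img)
```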
  • The first target image area of the target object in the first image and the second target image area in the second image are determined, and the feature point matching pairs corresponding to the multiple spatial points on the target object are determined from the first target image area and the second target image area.
  • The position of the first target image area in the first image and the position of the second target image area in the second image are determined according to the installation position relationship between the vision sensor and the TOF ranging device and the first distance, and according to the installation position relationship between the vision sensor and the TOF ranging device, the rotation matrix and the displacement matrix between the vision sensor and the TOF ranging device can be determined.
  • Through the installation position relationship between the vision sensor and the TOF ranging device and the first distance, the first target image area of the target object in the first image and the second target image area in the second image can be accurately determined.
  • Optionally, the first target image area in the first image and the second target image area in the second image of the target object are determined as follows: determine the three-dimensional position coordinates of the target object according to the first distance, that is, obtain the effective field angle of the light source of the TOF ranging device and, according to the effective field angle of the light source and the first distance, determine the three-dimensional position coordinates of the multiple spatial points on the target object relative to the TOF ranging device; then, according to the three-dimensional position coordinates, project the target object into the first image and the second image to determine the first target image area and the second target image area.
  • From the first distance determined by the TOF ranging device and the effective field angle of the light source, the three-dimensional position coordinates of the multiple spatial points on the target object can be determined; based on these three-dimensional position coordinates, the target object can be projected into the first image and the second image, so that the first target image area and the second target image area can be accurately identified.
  • Optionally, the target object is projected into the first image to determine the first target image area, according to the three-dimensional position coordinates of the target object, as follows: acquire the internal parameter matrix of the first camera in the binocular vision device and the rotation matrix and displacement matrix between the first camera and the TOF ranging device; according to the internal parameter matrix of the first camera, the rotation matrix and displacement matrix between the first camera and the TOF ranging device, and the three-dimensional position coordinates of the multiple spatial points on the target object, determine the two-dimensional position coordinates of the multiple spatial points in the first camera coordinate system; according to these two-dimensional position coordinates, mark the pixel points corresponding to the multiple spatial points in the first image; the area enclosed by the circumscribed circle of the marked pixel points is the first target image area.
  • Optionally, the target object is projected into the second image to determine the second target image area, according to the three-dimensional position coordinates of the target object, as follows: acquire the internal parameter matrix of the second camera in the binocular vision device and the rotation matrix and displacement matrix between the second camera and the TOF ranging device; according to the internal parameter matrix of the second camera, the rotation matrix and displacement matrix between the second camera and the TOF ranging device, and the three-dimensional position coordinates of the multiple spatial points on the target object, determine the two-dimensional position coordinates of the multiple spatial points in the second camera coordinate system; according to these two-dimensional position coordinates, mark the pixel points corresponding to the multiple spatial points in the second image; the area enclosed by the circumscribed circle of the marked pixel points is the second target image area.
  • Based on the feature point matching pairs, the second distances between the movable platform and the multiple spatial points on the target object can be determined.
  • The following takes the vision sensor being a binocular vision device as an example to explain the process of determining the second distances between the movable platform and the multiple spatial points based on the multiple feature point matching pairs.
  • For each feature point matching pair, determine the corresponding pixel difference; obtain the preset focal length and preset binocular distance of the binocular vision device; and determine the second distances between the movable platform and the multiple spatial points according to the preset focal length, the preset binocular distance, and the pixel difference corresponding to each feature point matching pair.
  • the preset focal length is determined by calibrating the focal length of the binocular vision device, and the preset binocular distance is determined according to the installation positions of the first camera and the second camera in the binocular vision device.
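  • The computation above follows the standard stereo relation Z = f · B / d, where f is the preset focal length, B the preset binocular distance (baseline), and d the pixel difference (disparity) of a matching pair; the numbers below are illustrative only:

```python
def stereo_second_distance(focal_px: float, baseline_m: float,
                           disparity_px: float) -> float:
    """Second distance (depth) of one spatial point from its feature point
    matching pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. focal length 400 px, baseline 0.1 m, disparity 4 px -> about 10 m
z = stereo_second_distance(400.0, 0.1, 4.0)
```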
  • Step S104 Determine a target distance between the movable platform and the target object according to a plurality of the second distances and the first distance.
  • Specifically, the target distance between the movable platform and the target object may be determined through the first distance and the multiple second distances. Since the ranging results of both the TOF ranging device and the vision sensor are comprehensively considered, the distance between the movable platform and the object can be measured accurately.
  • step S104 specifically includes: sub-steps S1041 to S1042.
  • The credibility index of the first distance may be determined from the first distance and the plurality of second distances.
  • The credibility index is used to characterize the accuracy of the first distance between the movable platform and the target object collected by the TOF ranging device: the greater the credibility index, the higher the accuracy of the measured first distance, and the smaller the credibility index, the lower the accuracy of the measured first distance.
  • The target spatial points are determined from the plurality of spatial points according to the plurality of second distances and the first distance, where the difference between the second distance corresponding to each target spatial point and the first distance is less than or equal to a preset threshold; the credibility index of the first distance is then determined according to the target spatial points.
  • the preset threshold may be set based on actual conditions, which is not specifically limited in this application, for example, the preset threshold is 0.5 meters.
  • Optionally, the credibility index of the first distance is determined according to the target spatial points as follows: determine the credibility index from the number of target spatial points and the number of spatial points. That is, count the spatial points whose second distance differs from the first distance by no more than the preset threshold, giving the first number (the number of target spatial points); count the total number of spatial points, giving the second number; and use the percentage of the first number relative to the second number as the credibility index of the first distance. For example, if the number of target spatial points is 75 and the number of spatial points is 100, the first number is 75% of the second number, so the credibility index of the first distance is 75%.
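  • The count-based credibility index can be sketched as follows; the example distances and the 0.5 m threshold mirror the values given in the text:

```python
def credibility_by_count(second_distances, first_distance, threshold=0.5):
    """Credibility index as the fraction of spatial points whose second
    distance is within `threshold` of the first distance."""
    targets = [d for d in second_distances
               if abs(d - first_distance) <= threshold]
    return len(targets) / len(second_distances)

# 3 of 4 second distances agree with a 10 m first distance -> 0.75
c = credibility_by_count([10.1, 9.8, 10.4, 12.0], 10.0)
```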
  • Optionally, the credibility index of the first distance is determined according to the target spatial points as follows: determine a first weight value according to the pixel coordinates and pixel values of the feature points corresponding to the target spatial points in the first image; determine a second weight value according to the pixel coordinates and pixel values of the feature points corresponding to the spatial points in the first image or the second image; and determine the credibility index of the first distance according to the first weight value and the second weight value, that is, calculate the percentage of the first weight value relative to the second weight value and use that percentage as the credibility index of the first distance. For example, if the first weight value is 0.5 and the second weight value is 0.7, the first weight value is 71.4% of the second weight value, so the credibility index of the first distance is 71.4%.
  • The first weight value is determined as follows: according to the pixel coordinates and pixel value of the feature point corresponding to each target spatial point in the first image, determine the weight value of each target spatial point, and accumulate the weight values of the target spatial points to obtain the first weight value. Similarly, the second weight value is determined as follows: according to the pixel coordinates and pixel value of the feature point corresponding to each spatial point in the first image or the second image, determine the weight value of each spatial point, and accumulate the weight values of the spatial points to obtain the second weight value.
  • Optionally, the first weight value is determined as follows: determine the first target image area of the target object in the first image or the second target image area in the second image; then determine the first weight value according to the pixel coordinates and pixel values of the feature points corresponding to the target spatial points in the first target image area or the second target image area.
  • The second weight value is determined in the same way, from the pixel coordinates and pixel values of the feature points corresponding to the spatial points in the first target image area or the second target image area.
  • Using the feature points in the first target image area or the second target image area to participate in the calculation of the first weight value or the second weight value can reduce the amount of calculation and increase the processing speed.
  • The weight value of a target spatial point or spatial point is determined by substituting the pixel coordinates and pixel value of the feature point corresponding to that point in the first image or the second image into the weight value calculation formula.
  • the weight value calculation formula is $w = I \cdot e^{-\frac{(u_x - u_0)^2 + (v_x - v_0)^2}{\sigma^2}}$, where I is the pixel value of the corresponding feature point of the target space point or space point in the first image or the second image, (u_x, v_x) are the pixel coordinates of that corresponding feature point, (u_0, v_0) are the pixel coordinates of the corresponding feature point of the center space point of the target object in the first image or the second image, e is the natural constant, and σ is a preset coefficient determined empirically.
  • denoting the set of target space points as P and the set of space points as P′, the credibility index of the first distance can be calculated according to the following formula:

$$\text{credibility} = \frac{\sum_{i=1}^{m} I_i \, e^{-\frac{(u_{x_i} - u_0)^2 + (v_{x_i} - v_0)^2}{\sigma^2}}}{\sum_{j=1}^{n} I_j \, e^{-\frac{(u_{x_j} - u_0)^2 + (v_{x_j} - v_0)^2}{\sigma^2}}}$$

  • where x_i is the i-th target space point in the set P, (u_{x_i}, v_{x_i}) and I_i are the pixel coordinates and pixel value of its corresponding feature point in the first image or the second image, and m is the number of target space points in the set P; x_j is the j-th space point in the set P′, (u_{x_j}, v_{x_j}) and I_j are the pixel coordinates and pixel value of its corresponding feature point, n is the number of space points in the set P′, and n is greater than m; (u_0, v_0) are the pixel coordinates of the corresponding feature point of the center space point of the target object, e is the natural constant, and σ is a preset coefficient determined empirically.
  • the numerator of this formula gives the first weight value, and the denominator gives the second weight value.
  • after the credibility index of the first distance is determined, it is determined whether the credibility index is greater than a preset credibility index; if so, the multiple second distances and the first distance are fused to determine the target distance between the movable platform and the target object.
  • the preset credibility index can be set according to actual conditions, which is not specifically limited in this application, for example, the preset credibility index is 75%.
  • the fusion of the multiple second distances and the first distance is performed as follows: determine a third distance from the multiple second distances, and obtain a first weight coefficient for the first distance and a second weight coefficient for the third distance; compute the product of the first distance and the first weight coefficient and the product of the third distance and the second weight coefficient, and sum the two products to obtain the target distance between the movable platform and the target object.
  • the sum of the first weight coefficient and the second weight coefficient is 1, and the first weight coefficient and the second weight coefficient can be set according to actual conditions, for example, the first weight coefficient is 0.5 and the second weight coefficient is 0.5.
  • the third distance is determined from the multiple second distances in one of the following ways: select any one of the second distances as the third distance; or compute the average of the multiple second distances and take that average as the third distance; or select the smallest and the largest second distance from the multiple second distances, compute their average, and take that average as the third distance.
  • if the credibility index is less than or equal to the preset credibility index, the target distance between the movable platform and the target object is determined from the multiple second distances alone: select any one of the second distances as the target distance; or compute the average of the multiple second distances and take that average as the target distance; or select the smallest and the largest second distance, compute their average, and take that average as the target distance between the movable platform and the target object.
  • the first distance between the movable platform and the target object is determined by the TOF distance measuring device, the second distances between the movable platform and multiple spatial points on the target object are determined from the first image and the second image of the target object output by the vision sensor, and the target distance between the movable platform and the target object is then determined from the multiple second distances and the first distance. Since the distance measurement results of both the TOF distance measuring device and the vision sensor are taken into account, the distance between the movable platform and the object can be measured accurately.
  • FIG. 6 is a schematic block diagram of a movable platform provided by an embodiment of the present application.
  • the movable platform 200 includes a processor 201, a memory 202, a vision sensor 203, and a TOF distance measuring device 204.
  • the processor 201, the memory 202, the vision sensor 203, and the TOF distance measuring device 204 are connected by a bus 205.
  • the bus 205 is, for example, an I2C (Inter-Integrated Circuit) bus.
  • the movable platform 200 includes drones, mobile robots, gimbal vehicles, and the like.
  • the drone may be a rotary-wing drone, such as a quad-rotor, hexa-rotor, or octo-rotor drone; it may also be a fixed-wing drone, or a combination of rotary-wing and fixed-wing types, which is not limited here.
  • the processor 201 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • the memory 202 may be a Flash chip, a read-only memory (ROM) disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
  • the TOF distance measuring device 204 includes a transmitting device for transmitting light signals and a receiving device for receiving light signals reflected by the target object.
  • the processor 201 is configured to run a computer program stored in the memory 202, and implement the following steps when the computer program is executed:
  • the target distance between the movable platform and the target object is determined according to the plurality of second distances and the first distance.
  • when the processor 201 determines the target distance between the movable platform and the target object according to the plurality of second distances and the first distance, it is used to implement:
  • determining the credibility index of the first distance according to the plurality of second distances and the first distance;
  • determining the target distance between the movable platform and the target object according to the credibility index, the plurality of second distances, and the first distance.
  • when the processor 201 determines the credibility index of the first distance according to the plurality of second distances and the first distance, it is used to implement:
  • determining target space points from the plurality of space points according to the plurality of second distances and the first distance, wherein the difference between the second distance corresponding to a target space point and the first distance is less than or equal to a preset threshold;
  • determining the credibility index of the first distance according to the target space points.
  • when the processor 201 determines the credibility index of the first distance according to the target space points, it is used to implement:
  • the credibility index of the first distance is determined according to the number of the target space points and the number of the space points.
  • when the processor 201 determines the credibility index of the first distance according to the target space points, it is used to implement:
  • determining a first weight value according to the pixel coordinates and pixel values of the corresponding feature points of the target space points in the first image or the second image;
  • determining a second weight value according to the pixel coordinates and pixel values of the corresponding feature points of the space points in the first image or the second image;
  • determining the credibility index of the first distance according to the first weight value and the second weight value.
  • when the processor 201 determines the first weight value according to the pixel coordinates and pixel values of the corresponding feature points of the target space points in the first image or the second image, it is used to implement:
  • determining the first target image area of the target object in the first image or the second target image area in the second image;
  • the first weight value is determined according to the pixel coordinates and pixel values of the corresponding feature points of the target space point in the first target image area or the second target image area.
  • when the processor 201 determines the target distance between the movable platform and the target object according to the credibility index, the plurality of second distances, and the first distance, it is used to implement:
  • determining whether the credibility index is greater than a preset credibility index;
  • if the credibility index is greater than the preset credibility index, performing fusion processing on the plurality of second distances and the first distance to determine the target distance between the movable platform and the target object.
  • after the processor 201 determines whether the credibility index is greater than the preset credibility index, it is further used to implement:
  • if the credibility index is less than or equal to the preset credibility index, determining the target distance between the movable platform and the target object according to the plurality of second distances.
  • when the processor 201 determines the second distances between the movable platform and the multiple spatial points on the target object according to the first image and the second image, it is used to implement:
  • determining, from the first image and the second image, the feature point matching pairs respectively corresponding to the multiple spatial points on the target object;
  • determining, according to the multiple feature point matching pairs, the second distances between the movable platform and the multiple spatial points.
  • when the processor 201 determines, from the first image and the second image, the feature point matching pairs respectively corresponding to the multiple spatial points on the target object, it is used to implement:
  • determining the first target image area of the target object in the first image and the second target image area in the second image;
  • determining, from the first target image area and the second target image area, the feature point matching pairs respectively corresponding to the multiple spatial points on the target object.
  • the position of the first target image area in the first image and the position of the second target image area in the second image are determined according to the installation position relationship between the vision sensor and the TOF distance measuring device and the first distance.
  • when the processor 201 determines the first target image area of the target object in the first image and the second target image area in the second image, it is used to implement:
  • determining the three-dimensional position coordinates of the target object according to the first distance;
  • projecting, according to the three-dimensional position coordinates, the target object into the first image and the second image to determine the first target image area and the second target image area.
  • the vision sensor 203 includes any one of a monocular vision device and a binocular vision device.
  • if the vision sensor 203 is a monocular vision device, the first image and the second image are captured a preset time apart; if the vision sensor is a binocular vision device, the first image is the image output by the first camera of the binocular vision device, and the second image is the image output by the second camera of the binocular vision device.
  • the embodiments of the present application also provide a computer-readable storage medium storing a computer program; the computer program includes program instructions, and a processor executes the program instructions to implement the steps of the distance determination method provided in the foregoing embodiments.
  • the computer-readable storage medium may be an internal storage unit of the movable platform described in any of the foregoing embodiments, such as a hard disk or memory of the movable platform.
  • the computer-readable storage medium may also be an external storage device of the movable platform, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card equipped on the movable platform.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Measurement Of Optical Distance (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A distance determination method, a control terminal, an unmanned aerial vehicle, and a storage medium. The method comprises: obtaining a first distance acquired by a TOF distance measuring device (S101); obtaining a first image and a second image output by a vision sensor (S102); determining multiple second distances according to the first image and the second image (S103); and determining a target distance according to the multiple second distances and the first distance (S104). The method can accurately measure the distance between a movable platform and an object.

Description

Distance determination method, movable platform, and computer-readable storage medium — Technical Field
This application relates to the technical field of distance measurement, and in particular to a distance determination method, a movable platform, and a computer-readable storage medium.
Background
At present, movable platforms are equipped with Time of Flight (TOF) distance measuring devices. A TOF distance measuring device emits light pulses from a transmitter toward an object, and the receiver determines the distance between the movable platform and a target object by computing the travel time of the light pulses from the transmitter to the object and back to the receiver, returned in pixel format. In most scenarios the distance measured by a TOF device is fairly accurate, but in scenes containing highly reflective objects such as road signs, the TOF device suffers period aliasing, so the distance it measures becomes inaccurate. This can adversely affect the movable platform: for example, when the platform performs obstacle avoidance based on the distance measured by the TOF device, an inaccurate measurement prevents the platform from avoiding obstacles reliably and compromises its safety. Accurately measuring the distance between a movable platform and an object is therefore an urgent problem to be solved.
Summary of the Invention
Based on this, the present application provides a distance determination method, a movable platform, and a computer-readable storage medium, aiming to accurately measure the distance between a movable platform and an object.
In a first aspect, the present application provides a distance determination method applied to a movable platform, wherein the movable platform includes a vision sensor and a TOF distance measuring device, the TOF distance measuring device including a transmitting device for emitting an optical signal and a receiving device for receiving the optical signal reflected by a target object. The method includes:
obtaining a first distance, acquired by the TOF distance measuring device, between the movable platform and the target object;
obtaining a first image and a second image of the target object output by the vision sensor;
determining, according to the first image and the second image, second distances between the movable platform and a plurality of spatial points on the target object;
determining a target distance between the movable platform and the target object according to the plurality of second distances and the first distance.
In a second aspect, the present application further provides a movable platform, which includes a vision sensor, a TOF distance measuring device, a memory, and a processor, the processor being connected to the vision sensor and the TOF distance measuring device;
the TOF distance measuring device includes a transmitting device for emitting an optical signal and a receiving device for receiving the optical signal reflected by a target object;
the memory is used to store a computer program;
the processor is used to execute the computer program and, when executing the computer program, to implement the following steps:
obtaining a first distance, acquired by the TOF distance measuring device, between the movable platform and the target object;
obtaining a first image and a second image of the target object output by the vision sensor;
determining, according to the first image and the second image, second distances between the movable platform and a plurality of spatial points on the target object;
determining a target distance between the movable platform and the target object according to the plurality of second distances and the first distance.
In a third aspect, the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of any of the distance determination methods provided in this specification.
The embodiments of the present application provide a distance determination method, a movable platform, and a computer-readable storage medium. A first distance between the movable platform and a target object is determined by a TOF distance measuring device, second distances between the movable platform and a plurality of spatial points on the target object are determined from a first image and a second image of the target object output by a vision sensor, and a target distance between the movable platform and the target object is then determined from the plurality of second distances and the first distance. Since the distance measurement results of both the TOF distance measuring device and the vision sensor are taken into account, the distance between the movable platform and the object can be measured accurately.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory and do not limit the present application.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a movable platform provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of the steps of a distance determination method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a scene in which the TOF distance measuring device and the vision sensor measure the distance between a target object and the movable platform in an embodiment of the present application;
FIG. 4 is a schematic flowchart of sub-steps of the distance determination method of FIG. 2;
FIG. 5 is a schematic flowchart of sub-steps of the distance determination method of FIG. 2;
FIG. 6 is a schematic structural block diagram of an unmanned aerial vehicle provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
The flowcharts shown in the drawings are merely illustrative; they need not include all content and operations/steps, nor be executed in the described order. For example, some operations/steps may be decomposed, combined, or partially merged, so the actual execution order may change according to the actual situation.
Some embodiments of the present application are described in detail below with reference to the drawings. In the absence of conflict, the following embodiments and the features in the embodiments may be combined with one another.
The present application provides a distance determination method, a movable platform, and a computer-readable storage medium. The distance determination method is applied to a movable platform. As shown in FIG. 1, the movable platform 100 includes a vision sensor 110 and a Time of Flight (TOF) distance measuring device 120. The TOF distance measuring device 120 includes a transmitting device for emitting an optical signal and a receiving device for receiving the optical signal reflected by a target object. The vision sensor 110 may be a monocular vision device or a binocular vision device. The target object is the light spot region formed on a physical object when the optical signal emitted by the transmitting device of the TOF distance measuring device strikes that object. The TOF distance measuring device 120 can measure a first distance between the movable platform and the target object (the physical object); from the two images of the target object output by the vision sensor 110, second distances between the movable platform and a plurality of spatial points on the target object can be determined; and from the first distance and the plurality of second distances, the distance between the movable platform and the target object (the physical object) can be determined accurately.
The movable platform includes drones, mobile robots, gimbal vehicles, and the like. The drone may be a rotary-wing drone, such as a quad-rotor, hexa-rotor, or octo-rotor drone; it may also be a fixed-wing drone, or a combination of rotary-wing and fixed-wing types, which is not limited here.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of the steps of a distance determination method provided by an embodiment of the present application. Specifically, as shown in FIG. 2, the distance determination method includes steps S101 to S104.
Step S101: obtain the first distance, acquired by the TOF distance measuring device, between the movable platform and the target object.
The TOF distance measuring device includes a transmitting device for emitting an optical signal and a receiving device for receiving the optical signal reflected by the target object; the light source of the transmitting device is an infrared light source. The transmitting device emits an optical signal and records the emission time. When the emitted optical signal strikes a physical object, it forms a light spot on the object's surface, yielding the target object on the physical object; at the same time the optical signal is reflected from the object's surface, producing the optical signal reflected by the target object. The receiving device of the TOF distance measuring device receives the optical signal reflected by the target object and records the reception time. From the emission time and the reception time, the flight time of the optical signal between the movable platform and the target object can be computed, and from this flight time and the speed of light the first distance between the movable platform and the target object can be calculated. Here, the target object is the light spot region formed on the physical object by the optical signal emitted by the transmitting device of the TOF distance measuring device.
Step S102: obtain the first image and the second image of the target object output by the vision sensor.
The first image includes a first target image area, which is the region where the target object is projected onto the first image, and the second image includes a second target image area, which is the region where the target object is projected onto the second image. The position of the center pixel of the first target image area in the first image and the position of the center pixel of the second target image area in the second image are determined from the installation position relationship between the vision sensor and the TOF distance measuring device, while the positions of the first and second target image areas within their images are determined from that installation position relationship together with the first distance. From the installation position relationship between the vision sensor and the TOF distance measuring device, the rotation matrix and translation matrix between them can be determined.
In some implementations, the vision sensor includes either a monocular vision device or a binocular vision device. If the vision sensor is a monocular vision device, the first image and the second image are captured a preset time apart; if it is a binocular vision device, the first image is the image output by the first camera of the binocular vision device and the second image is the image output by the second camera. The following explanation takes a binocular vision device as an example.
Illustratively, the intrinsic matrix of the first camera of the binocular vision device and the rotation and translation matrices between the first camera and the TOF distance measuring device are obtained; from these, the position of the center spatial point of the target object under the field of view of the first camera is determined; that position is marked in a preset image of the first camera, and the three-dimensional position coordinates of the plurality of spatial points on the target object are determined from the first distance; according to those three-dimensional coordinates, the plurality of spatial points on the target object are projected into the preset image marked with that position, yielding the first image containing the first target image area.
Similarly, the intrinsic matrix of the second camera and the rotation and translation matrices between the second camera and the TOF distance measuring device are obtained; from these, the position of the center spatial point of the target object under the field of view of the second camera is determined; that position is marked in a preset image of the second camera, the three-dimensional coordinates of the spatial points on the target object are determined from the first distance, and the spatial points are projected into the marked preset image, yielding the second image containing the second target image area.
It will be appreciated that the position of the center spatial point of the target object under the field of view of the first camera can be determined in advance from the first camera's intrinsic matrix and the rotation and translation matrices between the first camera and the TOF distance measuring device, and marked in the first camera's preset image to obtain a first marked image; similarly, a second marked image can be obtained in advance for the second camera. The first and second marked images are then stored in the memory of the movable platform, so that when the distance between the target object and the movable platform is subsequently measured, the marked images can be retrieved directly without real-time determination, reducing the computational load of the movable platform.
In some implementations, the three-dimensional position coordinates of the plurality of spatial points on the target object are determined from the first distance as follows: the effective field-of-view angle of the light source of the TOF distance measuring device is obtained, and from this angle and the first distance, the three-dimensional position coordinates of the plurality of spatial points on the target object relative to the TOF distance measuring device are determined. The effective field-of-view angle of the light source is determined by the aperture of the light source's emission port, through which the emitted optical signal exits: the larger the aperture, the larger the effective field-of-view angle, and the smaller the aperture, the smaller the angle; for example, the effective field-of-view angle of the light source is 10°.
As shown in FIGS. 1 and 3, the optical signal emitted by the transmitting device 121 of the TOF distance measuring device 120 strikes the physical object Q and forms a circular light spot region, i.e. the target object P; the receiving device 122 of the TOF distance measuring device 120 receives the optical signal reflected by the target object. From the installation position relationship between the first camera 111 and the TOF distance measuring device, the intrinsic matrix, and the first distance, the theoretical region P1 of the target object's projection in the first image 10 can be determined; from the installation position relationship between the second camera 112 and the TOF distance measuring device, the intrinsic matrix, and the first distance, the theoretical region P2 of the projection in the second image 20 can be determined. It will be appreciated that the light spot region may be circular, elliptical, or rectangular, which is not specifically limited in this application.
Step S103: determine, according to the first image and the second image, the second distances between the movable platform and the plurality of spatial points on the target object.
After the first and second images of the target object output by the vision sensor are obtained, the depth values between the movable platform and the plurality of spatial points on the target object, i.e. the second distances, can be determined from the two images.
In one embodiment, as shown in FIG. 4, step S103 specifically includes sub-steps S1031 to S1032.
S1031: determine, from the first image and the second image, the feature point matching pairs respectively corresponding to the plurality of spatial points on the target object.
Specifically, first feature points corresponding to the plurality of spatial points on the target object are extracted from the first image based on a preset feature point extraction algorithm, and second feature points matching the first feature points are determined from the second image based on a preset feature point tracking algorithm, yielding the feature point matching pairs respectively corresponding to the plurality of spatial points on the target object. Alternatively, the first feature points may be extracted from the second image and the matching second feature points determined from the first image.
The preset feature point extraction algorithm includes at least one of the following: the Harris corner detection algorithm, the scale-invariant feature transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, and the FAST (Features from Accelerated Segment Test) feature point detection algorithm. The preset feature point tracking algorithm includes, but is not limited to, the KLT (Kanade–Lucas–Tomasi feature tracker) algorithm.
In some implementations, the first target image area of the target object in the first image and the second target image area in the second image are determined, and the feature point matching pairs respectively corresponding to the plurality of spatial points on the target object are determined from the first and second target image areas. By first determining the two target image areas and then matching feature points only within them, the amount of computation can be reduced and the processing speed increased.
The position of the first target image area in the first image and the position of the second target image area in the second image are determined according to the installation position relationship between the vision sensor and the TOF distance measuring device and the first distance; from that installation position relationship, the rotation and translation matrices between the vision sensor and the TOF distance measuring device can be determined. The installation position relationship together with the first distance makes it possible to determine the two target image areas accurately.
In some implementations, the first and second target image areas are determined as follows: the three-dimensional position coordinates of the target object are determined from the first distance, i.e. the effective field-of-view angle of the light source of the TOF distance measuring device is obtained, and from this angle and the first distance the three-dimensional coordinates of the plurality of spatial points on the target object relative to the TOF distance measuring device are determined; according to the three-dimensional coordinates, the target object is projected into the first image and the second image to determine the first and second target image areas.
In some implementations, if the vision sensor is a binocular vision device, the target object is projected into the first image as follows: the intrinsic matrix of the first camera and the rotation and translation matrices between the first camera and the TOF distance measuring device are obtained; from these together with the three-dimensional coordinates of the spatial points on the target object, the two-dimensional position coordinates of those spatial points in the first camera coordinate system are determined; the pixels corresponding to the spatial points are marked in the first image according to those two-dimensional coordinates, and the region enclosed by the circumscribed circle of the marked pixels is the first target image area.
Similarly, for the second image: the intrinsic matrix of the second camera and the rotation and translation matrices between the second camera and the TOF distance measuring device are obtained; the two-dimensional coordinates of the spatial points in the second camera coordinate system are determined; the corresponding pixels are marked in the second image, and the region enclosed by the circumscribed circle of the marked pixels is the second target image area.
S1032: determine, according to the plurality of feature point matching pairs, the second distances between the movable platform and the plurality of spatial points.
From the feature point matching pairs respectively corresponding to the plurality of spatial points on the target object, the second distances between the movable platform and those spatial points can be determined. The following takes a binocular vision device as an example to explain this process.
Specifically, the pixel difference corresponding to each feature point matching pair is determined from the pixels of the two feature points in the pair; the preset focal length and preset binocular distance of the binocular vision device are obtained; and the second distances between the movable platform and the plurality of spatial points are determined from the preset focal length, the preset binocular distance, and the pixel difference of each matching pair. The preset focal length is determined by calibrating the focal length of the binocular vision device, and the preset binocular distance is determined by the installation positions of the first and second cameras of the binocular vision device.
Step S104: determine the target distance between the movable platform and the target object according to the plurality of second distances and the first distance.
After the first distance and the plurality of second distances are determined, the target distance between the movable platform and the target object can be determined from them. Since the distance measurement results of both the TOF distance measuring device and the vision sensor are taken into account, the distance between the movable platform and the object can be measured accurately.
In one implementation, as shown in FIG. 5, step S104 specifically includes sub-steps S1041 to S1042.
S1041: determine the credibility index of the first distance according to the plurality of second distances and the first distance.
After the first distance and the plurality of second distances are determined, the credibility index of the first distance can be determined from them. The credibility index characterizes the accuracy of the first distance, acquired by the TOF distance measuring device, between the movable platform and the target object: the larger the credibility index, the higher the accuracy of the measured first distance, and the smaller the index, the lower the accuracy.
In some implementations, target space points are determined from the plurality of space points according to the plurality of second distances and the first distance, wherein the difference between the second distance corresponding to a target space point and the first distance is less than or equal to a preset threshold; the credibility index of the first distance is then determined from the target space points. The preset threshold can be set according to the actual situation and is not specifically limited in this application; for example, the preset threshold is 0.5 meters. By taking as target space points those space points whose second distance differs from the first distance by no more than the preset threshold, the credibility index of the first distance can be determined accurately and quickly.
In some implementations, the credibility index is determined from the target space points as follows: the credibility index of the first distance is determined from the number of target space points and the number of space points, i.e. the first number (the number of space points whose second distance differs from the first distance by no more than the preset threshold, that is, the target space points) is counted, the second number (the number of space points) is counted, the percentage of the first number relative to the second number is computed, and that percentage is taken as the credibility index of the first distance. For example, if there are 75 target space points and 100 space points, the first number is 75% of the second number, so the credibility index of the first distance is 75%.
In some implementations, the credibility index is determined from the target space points as follows: a first weight value is determined from the pixel coordinates and pixel values of the corresponding feature points of the target space points in the first image; a second weight value is determined from the pixel coordinates and pixel values of the corresponding feature points of the space points in the first image or the second image; the credibility index of the first distance is determined from the first and second weight values, i.e. the percentage of the first weight value relative to the second weight value is computed and taken as the credibility index. For example, if the first weight value is 0.5 and the second weight value is 0.7, the first weight value is 71.4% of the second weight value, so the credibility index of the first distance is 71.4%.
It will be appreciated that if there are multiple target space points, the first weight value is determined by computing, from the pixel coordinates and pixel value of the corresponding feature point of each target space point in the first image, the weight value of each target space point, and accumulating those weight values to obtain the first weight value; similarly, the second weight value is obtained by computing and accumulating the weight value of each space point from the pixel coordinates and pixel value of its corresponding feature point in the first image or the second image.
In some implementations, the first weight value is determined from the pixel coordinates and pixel values of the corresponding feature points of the target space points in the first image or the second image as follows: the first target image area of the target object in the first image or the second target image area in the second image is determined; the first weight value is then determined from the pixel coordinates and pixel values of the corresponding feature points of the target space points in the first or second target image area. The second weight value is determined similarly from the space points in the first or second target image area. Using only the feature points in the first or second target image area in the weight computation reduces the amount of computation and increases the processing speed.
In some implementations, the weight value of a target space point or space point is determined by substituting the pixel coordinates and pixel value of its corresponding feature point in the first image or the second image into the weight value calculation formula, yielding the weight value of that point.
The weight value calculation formula is

$$w = I \cdot e^{-\frac{(u_x - u_0)^2 + (v_x - v_0)^2}{\sigma^2}}$$

where I is the pixel value of the corresponding feature point of the target space point or space point in the first image or the second image, (u_x, v_x) are the pixel coordinates of that corresponding feature point, (u_0, v_0) are the pixel coordinates of the corresponding feature point of the center space point of the target object in the first image or the second image, e is the natural constant, and σ is a preset coefficient determined empirically.
Illustratively, denoting the set of target space points as P and the set of space points as P′, the credibility index of the first distance can be calculated according to the following formula:

$$\text{credibility} = \frac{\sum_{i=1}^{m} I_i \, e^{-\frac{(u_{x_i} - u_0)^2 + (v_{x_i} - v_0)^2}{\sigma^2}}}{\sum_{j=1}^{n} I_j \, e^{-\frac{(u_{x_j} - u_0)^2 + (v_{x_j} - v_0)^2}{\sigma^2}}}$$

where x_i is the i-th target space point in the set P, (u_{x_i}, v_{x_i}) and I_i are the pixel coordinates and pixel value of its corresponding feature point in the first image or the second image, and m is the number of target space points in the set P; x_j is the j-th space point in the set P′, (u_{x_j}, v_{x_j}) and I_j are the pixel coordinates and pixel value of its corresponding feature point, n is the number of space points in the set P′, and n is greater than m; (u_0, v_0) are the pixel coordinates of the corresponding feature point of the center space point of the target object in the first image or the second image, e is the natural constant, and σ is a preset coefficient determined empirically.
It will be appreciated that the numerator of this formula gives the first weight value and the denominator gives the second weight value.
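The weighted credibility ratio above can be computed as in the sketch below. The Gaussian-of-pixel-distance form follows the formula as reconstructed here; the point sets, pixel values, and σ are illustrative, and the function names are assumptions for illustration.

```python
import math

def point_weight(u, v, intensity, u0, v0, sigma):
    """Weight of one feature point: its pixel value scaled by a Gaussian
    of its pixel distance from the spot-center feature point (u0, v0)."""
    return intensity * math.exp(-((u - u0) ** 2 + (v - v0) ** 2) / sigma ** 2)

def weighted_credibility_index(target_points, all_points, center, sigma):
    """Ratio of the summed weights of the target space points (set P) to the
    summed weights of all space points (set P'); points are (u, v, pixel_value)."""
    u0, v0 = center
    first_weight = sum(point_weight(u, v, i, u0, v0, sigma) for u, v, i in target_points)
    second_weight = sum(point_weight(u, v, i, u0, v0, sigma) for u, v, i in all_points)
    return first_weight / second_weight

# With all points at the spot center and equal pixel values, every Gaussian
# factor is 1, so the index reduces to the count ratio m / n = 3 / 4.
pts = [(10.0, 10.0, 100.0)] * 4
idx = weighted_credibility_index(pts[:3], pts, (10.0, 10.0), sigma=5.0)
```

Compared with the count-based variant, this weighting lets bright feature points near the spot center dominate, which matches the intuition that they are measured most reliably by the TOF spot.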
S1042: determine the target distance between the movable platform and the target object according to the credibility index, the plurality of second distances, and the first distance.
After the credibility index of the first distance is determined, it is determined whether the credibility index is greater than a preset credibility index; if so, the plurality of second distances and the first distance are fused to determine the target distance between the movable platform and the target object. The preset credibility index can be set according to the actual situation and is not specifically limited in this application; for example, the preset credibility index is 75%. Determining the target distance by fusing the plurality of second distances with the first distance greatly improves the accuracy of the distance between the movable platform and the object.
In some implementations, the fusion of the plurality of second distances and the first distance is performed as follows: a third distance is determined from the plurality of second distances, and a first weight coefficient for the first distance and a second weight coefficient for the third distance are obtained; the product of the first distance and the first weight coefficient and the product of the third distance and the second weight coefficient are computed and summed to obtain the target distance between the movable platform and the target object. The sum of the first and second weight coefficients is 1, and they can be set according to the actual situation; for example, the first weight coefficient is 0.5 and the second weight coefficient is 0.5.
In some implementations, the third distance is determined from the plurality of second distances as follows: any one of the second distances is selected as the third distance; or the average of the second distances is computed and taken as the third distance; or the smallest and largest second distances are selected, their average is determined, and that average is taken as the third distance.
In some implementations, if the credibility index is less than or equal to the preset credibility index, the target distance between the movable platform and the target object is determined from the plurality of second distances: any one of the second distances is selected as the target distance; or the average of the second distances is computed and taken as the target distance; or the smallest and largest second distances are selected and their average is taken as the target distance between the movable platform and the target object.
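The fusion step can be sketched as follows. The min/max-average rule for the third distance and the equal weight coefficients are two of the options named in the text; the function names are illustrative assumptions.

```python
def third_distance(second_distances):
    """One option from the text: average of the smallest and largest second distance."""
    return (min(second_distances) + max(second_distances)) / 2.0

def fused_target_distance(first_distance, second_distances, w1=0.5, w2=0.5):
    """Weighted sum of the TOF first distance and the vision-derived third distance.
    The two weight coefficients must sum to 1."""
    assert abs(w1 + w2 - 1.0) < 1e-9
    return first_distance * w1 + third_distance(second_distances) * w2

# TOF reports 10.0 m and the vision distances span 9.0-11.0 m, so the third
# distance is 10.0 m and the fused target distance is 10.0 m with equal weights.
t = fused_target_distance(10.0, [9.0, 9.5, 10.5, 11.0])
```

When the credibility index falls at or below the preset value, the same `third_distance` helper on its own would serve as the fallback target distance computed from the second distances alone.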
The distance determination method provided in this specification determines a first distance between the movable platform and the target object by means of the TOF distance measuring device, determines second distances between the movable platform and a plurality of spatial points on the target object from the first and second images of the target object output by the vision sensor, and then determines the target distance between the movable platform and the target object from the plurality of second distances and the first distance. Since the distance measurement results of both the TOF distance measuring device and the vision sensor are taken into account, the distance between the movable platform and the object can be measured accurately.
Referring to FIG. 6, FIG. 6 is a schematic block diagram of a movable platform provided by an embodiment of the present application. The movable platform 200 includes a processor 201, a memory 202, a vision sensor 203, and a TOF distance measuring device 204, which are connected by a bus 205, for example an I2C (Inter-Integrated Circuit) bus. The movable platform 200 includes drones, mobile robots, gimbal vehicles, and the like; the drone may be a rotary-wing drone, such as a quad-rotor, hexa-rotor, or octo-rotor drone, a fixed-wing drone, or a combination of rotary-wing and fixed-wing types, which is not limited here.
Specifically, the processor 201 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
Specifically, the memory 202 may be a Flash chip, a read-only memory (ROM) disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
Specifically, the TOF distance measuring device 204 includes a transmitting device for emitting an optical signal and a receiving device for receiving the optical signal reflected by the target object.
The processor 201 is used to run a computer program stored in the memory 202 and, when executing the computer program, to implement the following steps:
obtaining the first distance, acquired by the TOF distance measuring device, between the movable platform and the target object;
obtaining the first image and the second image of the target object output by the vision sensor;
determining, according to the first image and the second image, the second distances between the movable platform and the plurality of spatial points on the target object;
determining the target distance between the movable platform and the target object according to the plurality of second distances and the first distance.
Optionally, when determining the target distance between the movable platform and the target object according to the plurality of second distances and the first distance, the processor 201 is used to implement:
determining the credibility index of the first distance according to the plurality of second distances and the first distance;
determining the target distance between the movable platform and the target object according to the credibility index, the plurality of second distances, and the first distance.
Optionally, when determining the credibility index of the first distance according to the plurality of second distances and the first distance, the processor 201 is used to implement:
determining target space points from the plurality of space points according to the plurality of second distances and the first distance, wherein the difference between the second distance corresponding to a target space point and the first distance is less than or equal to a preset threshold;
determining the credibility index of the first distance according to the target space points.
Optionally, when determining the credibility index of the first distance according to the target space points, the processor 201 is used to implement:
determining the credibility index of the first distance according to the number of target space points and the number of space points.
Optionally, when determining the credibility index of the first distance according to the target space points, the processor 201 is used to implement:
determining a first weight value according to the pixel coordinates and pixel values of the corresponding feature points of the target space points in the first image or the second image;
determining a second weight value according to the pixel coordinates and pixel values of the corresponding feature points of the space points in the first image or the second image;
determining the credibility index of the first distance according to the first weight value and the second weight value.
Optionally, when determining the first weight value according to the pixel coordinates and pixel values of the corresponding feature points of the target space points in the first image or the second image, the processor 201 is used to implement:
determining the first target image area of the target object in the first image or the second target image area in the second image;
determining the first weight value according to the pixel coordinates and pixel values of the corresponding feature points of the target space points in the first target image area or the second target image area.
Optionally, when determining the target distance between the movable platform and the target object according to the credibility index, the plurality of second distances, and the first distance, the processor 201 is used to implement:
determining whether the credibility index is greater than a preset credibility index;
if the credibility index is greater than the preset credibility index, fusing the plurality of second distances and the first distance to determine the target distance between the movable platform and the target object.
Optionally, after determining whether the credibility index is greater than the preset credibility index, the processor 201 is further used to implement:
if the credibility index is less than or equal to the preset credibility index, determining the target distance between the movable platform and the target object according to the plurality of second distances.
Optionally, when determining the second distances between the movable platform and the plurality of spatial points on the target object according to the first image and the second image, the processor 201 is used to implement:
determining, from the first image and the second image, the feature point matching pairs respectively corresponding to the plurality of spatial points on the target object;
determining, according to the plurality of feature point matching pairs, the second distances between the movable platform and the plurality of spatial points.
Optionally, when determining, from the first image and the second image, the feature point matching pairs respectively corresponding to the plurality of spatial points on the target object, the processor 201 is used to implement:
determining the first target image area of the target object in the first image and the second target image area in the second image;
determining, from the first target image area and the second target image area, the feature point matching pairs respectively corresponding to the plurality of spatial points on the target object.
Optionally, the position of the first target image area in the first image and the position of the second target image area in the second image are determined according to the installation position relationship between the vision sensor and the TOF distance measuring device and the first distance.
Optionally, when determining the first target image area of the target object in the first image and the second target image area in the second image, the processor 201 is used to implement:
determining the three-dimensional position coordinates of the target object according to the first distance;
projecting, according to the three-dimensional position coordinates, the target object into the first image and the second image to determine the first target image area and the second target image area.
Optionally, the vision sensor 203 includes either a monocular vision device or a binocular vision device.
Optionally, if the vision sensor 203 is a monocular vision device, the first image and the second image are captured a preset time apart; if the vision sensor is a binocular vision device, the first image is the image output by the first camera of the binocular vision device, and the second image is the image output by the second camera of the binocular vision device.
It should be noted that, as will be clearly understood by those skilled in the art, for convenience and brevity of description, the specific working process of the movable platform described above may refer to the corresponding process in the foregoing embodiments of the distance determination method, and will not be repeated here.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program; the computer program includes program instructions, and a processor executes the program instructions to implement the steps of the distance determination method provided in the foregoing embodiments.
The computer-readable storage medium may be an internal storage unit of the movable platform described in any of the foregoing embodiments, such as a hard disk or memory of the movable platform. The computer-readable storage medium may also be an external storage device of the movable platform, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card equipped on the movable platform.
It should be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The above are only specific embodiments of the present application, but the scope of protection of the present application is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope disclosed in the present application, and such modifications or substitutions shall all fall within the scope of protection of the present application. Therefore, the scope of protection of the present application shall be subject to the scope of protection of the claims.

Claims (29)

  1. A distance determination method, applied to a movable platform, wherein the movable platform includes a vision sensor and a TOF distance measuring device, the TOF distance measuring device including a transmitting device for emitting an optical signal and a receiving device for receiving the optical signal reflected by a target object, characterized in that the method comprises:
    obtaining a first distance, acquired by the TOF distance measuring device, between the movable platform and the target object;
    obtaining a first image and a second image of the target object output by the vision sensor;
    determining, according to the first image and the second image, second distances between the movable platform and a plurality of spatial points on the target object;
    determining a target distance between the movable platform and the target object according to the plurality of second distances and the first distance.
  2. The distance determination method according to claim 1, characterized in that determining the target distance between the movable platform and the target object according to the plurality of second distances and the first distance comprises:
    determining a credibility index of the first distance according to the plurality of second distances and the first distance;
    determining the target distance between the movable platform and the target object according to the credibility index, the plurality of second distances, and the first distance.
  3. The distance determination method according to claim 2, characterized in that determining the credibility index of the first distance according to the plurality of second distances and the first distance comprises:
    determining target space points from the plurality of space points according to the plurality of second distances and the first distance, wherein a difference between the second distance corresponding to a target space point and the first distance is less than or equal to a preset threshold;
    determining the credibility index of the first distance according to the target space points.
  4. The distance determination method according to claim 3, characterized in that determining the credibility index of the first distance according to the target space points comprises:
    determining the credibility index of the first distance according to the number of the target space points and the number of the space points.
  5. The distance determination method according to claim 3, characterized in that determining the credibility index of the first distance according to the target space points comprises:
    determining a first weight value according to the pixel coordinates and pixel values of the corresponding feature points of the target space points in the first image or the second image;
    determining a second weight value according to the pixel coordinates and pixel values of the corresponding feature points of the space points in the first image or the second image;
    determining the credibility index of the first distance according to the first weight value and the second weight value.
  6. The distance determination method according to claim 5, characterized in that determining the first weight value according to the pixel coordinates and pixel values of the corresponding feature points of the target space points in the first image or the second image comprises:
    determining a first target image area of the target object in the first image or a second target image area in the second image;
    determining the first weight value according to the pixel coordinates and pixel values of the corresponding feature points of the target space points in the first target image area or the second target image area.
  7. The distance determination method according to any one of claims 2 to 6, characterized in that determining the target distance between the movable platform and the target object according to the credibility index, the plurality of second distances, and the first distance comprises:
    determining whether the credibility index is greater than a preset credibility index;
    if the credibility index is greater than the preset credibility index, fusing the plurality of second distances and the first distance to determine the target distance between the movable platform and the target object.
  8. The distance determination method according to claim 7, characterized in that after determining whether the credibility index is greater than the preset credibility index, the method further comprises:
    if the credibility index is less than or equal to the preset credibility index, determining the target distance between the movable platform and the target object according to the plurality of second distances.
  9. The distance determination method according to any one of claims 1 to 8, characterized in that determining, according to the first image and the second image, the second distances between the movable platform and the plurality of spatial points on the target object comprises:
    determining, from the first image and the second image, feature point matching pairs respectively corresponding to the plurality of spatial points on the target object;
    determining, according to the plurality of feature point matching pairs, the second distances between the movable platform and the plurality of spatial points.
  10. The distance determination method according to claim 9, characterized in that determining, from the first image and the second image, the feature point matching pairs respectively corresponding to the plurality of spatial points on the target object comprises:
    determining a first target image area of the target object in the first image and a second target image area in the second image;
    determining, from the first target image area and the second target image area, the feature point matching pairs respectively corresponding to the plurality of spatial points on the target object.
  11. The distance determination method according to claim 6 or 10, characterized in that the position of the first target image area in the first image and the position of the second target image area in the second image are determined according to an installation position relationship between the vision sensor and the TOF distance measuring device and the first distance.
  12. The distance determination method according to claim 10, characterized in that determining the first target image area of the target object in the first image and the second target image area in the second image comprises:
    determining three-dimensional position coordinates of the target object according to the first distance;
    projecting, according to the three-dimensional position coordinates, the target object into the first image and the second image to determine the first target image area and the second target image area.
  13. The distance determination method according to any one of claims 1 to 12, characterized in that the vision sensor includes either a monocular vision device or a binocular vision device.
  14. The distance determination method according to claim 13, characterized in that if the vision sensor is a monocular vision device, the first image and the second image are captured a preset time apart;
    if the vision sensor is a binocular vision device, the first image is an image output by a first camera of the binocular vision device, and the second image is an image output by a second camera of the binocular vision device.
  15. A movable platform, characterized in that the movable platform includes a vision sensor, a TOF distance measuring device, a memory, and a processor, the processor being connected to the vision sensor and the TOF distance measuring device;
    the TOF distance measuring device includes a transmitting device for emitting an optical signal and a receiving device for receiving the optical signal reflected by a target object;
    the memory is used to store a computer program;
    the processor is used to execute the computer program and, when executing the computer program, to implement the following steps:
    obtaining a first distance, acquired by the TOF distance measuring device, between the movable platform and the target object;
    obtaining a first image and a second image of the target object output by the vision sensor;
    determining, according to the first image and the second image, second distances between the movable platform and a plurality of spatial points on the target object;
    determining a target distance between the movable platform and the target object according to the plurality of second distances and the first distance.
  16. The movable platform according to claim 15, characterized in that when the processor determines the target distance between the movable platform and the target object according to the plurality of second distances and the first distance, it is used to implement:
    determining a credibility index of the first distance according to the plurality of second distances and the first distance;
    determining the target distance between the movable platform and the target object according to the credibility index, the plurality of second distances, and the first distance.
  17. The movable platform according to claim 16, characterized in that when the processor determines the credibility index of the first distance according to the plurality of second distances and the first distance, it is used to implement:
    determining target space points from the plurality of space points according to the plurality of second distances and the first distance, wherein a difference between the second distance corresponding to a target space point and the first distance is less than or equal to a preset threshold;
    determining the credibility index of the first distance according to the target space points.
  18. The movable platform according to claim 17, characterized in that when the processor determines the credibility index of the first distance according to the target space points, it is used to implement:
    determining the credibility index of the first distance according to the number of the target space points and the number of the space points.
  19. The movable platform according to claim 17, characterized in that when the processor determines the credibility index of the first distance according to the target space points, it is used to implement:
    determining a first weight value according to the pixel coordinates and pixel values of the corresponding feature points of the target space points in the first image or the second image;
    determining a second weight value according to the pixel coordinates and pixel values of the corresponding feature points of the space points in the first image or the second image;
    determining the credibility index of the first distance according to the first weight value and the second weight value.
  20. The movable platform according to claim 19, characterized in that when the processor determines the first weight value according to the pixel coordinates and pixel values of the corresponding feature points of the target space points in the first image or the second image, it is used to implement:
    determining a first target image area of the target object in the first image or a second target image area in the second image;
    determining the first weight value according to the pixel coordinates and pixel values of the corresponding feature points of the target space points in the first target image area or the second target image area.
  21. The movable platform according to any one of claims 16 to 20, characterized in that when the processor determines the target distance between the movable platform and the target object according to the credibility index, the plurality of second distances, and the first distance, it is used to implement:
    determining whether the credibility index is greater than a preset credibility index;
    if the credibility index is greater than the preset credibility index, fusing the plurality of second distances and the first distance to determine the target distance between the movable platform and the target object.
  22. The movable platform according to claim 21, characterized in that after determining whether the credibility index is greater than the preset credibility index, the processor is further used to implement:
    if the credibility index is less than or equal to the preset credibility index, determining the target distance between the movable platform and the target object according to the plurality of second distances.
  23. The movable platform according to any one of claims 15 to 22, characterized in that when the processor determines, according to the first image and the second image, the second distances between the movable platform and the plurality of spatial points on the target object, it is used to implement:
    determining, from the first image and the second image, feature point matching pairs respectively corresponding to the plurality of spatial points on the target object;
    determining, according to the plurality of feature point matching pairs, the second distances between the movable platform and the plurality of spatial points.
  24. The movable platform according to claim 23, characterized in that when the processor determines, from the first image and the second image, the feature point matching pairs respectively corresponding to the plurality of spatial points on the target object, it is used to implement:
    determining a first target image area of the target object in the first image and a second target image area in the second image;
    determining, from the first target image area and the second target image area, the feature point matching pairs respectively corresponding to the plurality of spatial points on the target object.
  25. The movable platform according to claim 20 or 24, characterized in that the position of the first target image area in the first image and the position of the second target image area in the second image are determined according to an installation position relationship between the vision sensor and the TOF distance measuring device and the first distance.
  26. The movable platform according to claim 24, characterized in that when the processor determines the first target image area of the target object in the first image and the second target image area in the second image, it is used to implement:
    determining three-dimensional position coordinates of the target object according to the first distance;
    projecting, according to the three-dimensional position coordinates, the target object into the first image and the second image to determine the first target image area and the second target image area.
  27. The movable platform according to any one of claims 15 to 26, characterized in that the vision sensor includes either a monocular vision device or a binocular vision device.
  28. The movable platform according to claim 27, characterized in that if the vision sensor is a monocular vision device, the first image and the second image are captured a preset time apart;
    if the vision sensor is a binocular vision device, the first image is an image output by a first camera of the binocular vision device, and the second image is an image output by a second camera of the binocular vision device.
  29. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the distance determination method according to any one of claims 1 to 14.
PCT/CN2020/082199 2020-03-30 2020-03-30 距离确定方法、可移动平台及计算机可读存储介质 WO2021195886A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/082199 WO2021195886A1 (zh) 2020-03-30 2020-03-30 距离确定方法、可移动平台及计算机可读存储介质
CN202080005139.9A CN112771575A (zh) 2020-03-30 2020-03-30 距离确定方法、可移动平台及计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/082199 WO2021195886A1 (zh) 2020-03-30 2020-03-30 Distance determination method, movable platform, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2021195886A1 true WO2021195886A1 (zh) 2021-10-07

Family

ID=75699498

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/082199 WO2021195886A1 (zh) 2020-03-30 2020-03-30 Distance determination method, movable platform, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112771575A (zh)
WO (1) WO2021195886A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110794861A (zh) * 2019-11-14 2020-02-14 国网山东省电力公司电力科学研究院 Autonomous string-landing method and system for a flying on/off-line insulator string inspection robot

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN113269824B (zh) * 2021-05-28 2023-07-07 陕西工业职业技术学院 Image-based distance determination method and system
CN114396911B (zh) * 2021-12-21 2023-10-31 中汽创智科技有限公司 Obstacle ranging method, apparatus, device, and storage medium
CN116990830B (zh) * 2023-09-27 2023-12-29 锐驰激光(深圳)有限公司 Binocular- and TOF-based distance positioning method, apparatus, electronic device, and medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102914262A (zh) * 2012-09-29 2013-02-06 北京控制工程研究所 Non-cooperative target close-range measurement method based on an additional line of sight
CN105572681A (zh) * 2014-10-31 2016-05-11 洛克威尔自动控制安全公司 Absolute distance measurement for a time-of-flight sensor
CN107093195A (zh) * 2017-03-10 2017-08-25 西北工业大学 Marker point positioning method combining laser ranging and a binocular camera
CN108037768A (zh) * 2017-12-13 2018-05-15 常州工学院 Unmanned aerial vehicle obstacle avoidance control system, obstacle avoidance control method, and unmanned aerial vehicle
US10346995B1 (en) * 2016-08-22 2019-07-09 AI Incorporated Remote distance estimation system and method

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN102999939B (zh) * 2012-09-21 2016-02-17 魏益群 Coordinate acquisition apparatus, real-time three-dimensional reconstruction system and method, and stereoscopic interactive device
CN106104203B (zh) * 2015-07-13 2018-02-02 深圳市大疆创新科技有限公司 Distance detection method and apparatus for a moving object, and aircraft
CN107687841A (zh) * 2017-09-27 2018-02-13 中科创达软件股份有限公司 Ranging method and apparatus
CN109902725A (zh) * 2019-01-31 2019-06-18 北京达佳互联信息技术有限公司 Moving target detection method and apparatus, electronic device, and storage medium


Also Published As

Publication number Publication date
CN112771575A (zh) 2021-05-07

Similar Documents

Publication Publication Date Title
WO2021195886A1 (zh) Distance determination method, movable platform, and computer-readable storage medium
KR102054455B1 (ko) Apparatus and method for calibration between heterogeneous sensors
CN109146947B (zh) Marine fish three-dimensional image acquisition and processing method, apparatus, device, and medium
WO2019238127A1 (zh) Ranging method, apparatus, and system
WO2021063128A1 (zh) Pose positioning method for an active rigid body in a single-camera environment, and related device
US20130322697A1 (en) Speed Calculation of a Moving Object based on Image Data
WO2018227576A1 (zh) Ground form detection method and system, unmanned aerial vehicle landing method, and unmanned aerial vehicle
CN112288825B (zh) Camera calibration method, apparatus, electronic device, storage medium, and roadside device
CN105627932A (zh) Binocular-vision-based ranging method and apparatus
CN112967344B (zh) Camera extrinsic parameter calibration method, device, storage medium, and program product
CN113111513B (zh) Sensor configuration scheme determination method, apparatus, computer device, and storage medium
CN111739099B (zh) Fall prevention method, apparatus, and electronic device
CN110738703A (zh) Positioning method and apparatus, terminal, and storage medium
WO2022222291A1 (zh) Optical axis calibration method, apparatus, terminal, system, and medium for an optical axis detection system
CN111798507A (zh) Power transmission line safety distance measurement method, computer device, and storage medium
CN112686951A (zh) Method, apparatus, terminal, and storage medium for determining the position of a robot
CN113450334B (zh) Water-surface target detection method, electronic device, and storage medium
CN117250956A (zh) Mobile robot obstacle avoidance method and apparatus based on multi-observation-source fusion
CN116929290A (zh) Binocular-disparity stereo depth measurement method, system, and storage medium
CN113959398B (zh) Vision-based ranging method, apparatus, drivable device, and storage medium
WO2023273427A1 (zh) Multi-camera speed measurement method and speed measurement apparatus
Vaida et al. Automatic extrinsic calibration of LIDAR and monocular camera images
CN113112551B (zh) Camera parameter determination method, apparatus, roadside device, and cloud control platform
CN113014899B (zh) Binocular image disparity determination method, apparatus, and system
CN112330726B (zh) Image processing method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20928232; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20928232; Country of ref document: EP; Kind code of ref document: A1)