WO2023015566A1 - Control method, control device, movable platform, and storage medium - Google Patents

Control method, control device, movable platform, and storage medium

Info

Publication number
WO2023015566A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
key frame
pose
current
matching
Prior art date
Application number
PCT/CN2021/112546
Other languages
French (fr)
Chinese (zh)
Inventor
高成强
吴博
许可
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to CN202180006263.1A priority Critical patent/CN114730471A/en
Priority to PCT/CN2021/112546 priority patent/WO2023015566A1/en
Publication of WO2023015566A1 publication Critical patent/WO2023015566A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85: Stereo camera calibration
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30244: Camera pose

Definitions

  • the present application relates to the technical field of control, and in particular to a control method, a control device, a movable platform and a non-volatile computer-readable storage medium.
  • movable platforms, such as unmanned aerial vehicles and unmanned vehicles, are taking over more production tasks from humans, for example moving along a fixed trajectory to perform monitoring or predetermined tasks such as watering.
  • as the movable platform moves, it may deviate from the fixed trajectory because of obstacles and other factors; therefore, the target pose that the movable platform should currently be in needs to be output at all times so that its pose can be adjusted.
  • however, factors such as a weak GPS signal or a failed pose calculation may lower the output frame rate of the pose, so the target pose output for the current frame may actually be the target pose the movable platform should have been in several frames earlier, and the accuracy of pose adjustment is low.
  • Embodiments of the present application provide a control method, a control device, a movable platform, and a storage medium.
  • the control method of the embodiment of the present application includes obtaining the current pose and the current image collected by the movable platform; obtaining map points matching the current image; generating compensation parameters according to the current pose and a corrected pose, where the corrected pose is determined from the map points; and correcting the pose of the movable platform within a predetermined duration according to the compensation parameters.
  • the control device of the embodiment of the present application is applied to a movable platform and includes a processor and a memory; the memory is used to store instructions, and the processor invokes the instructions stored in the memory to implement the following operations: obtain the current pose and the current image collected by the movable platform; obtain map points matching the current image; generate compensation parameters according to a corrected pose calculated from the current pose and the map points; and correct the current pose of the movable platform within a predetermined duration according to the compensation parameters.
  • the movable platform in the embodiment of the present application includes a camera, a pose detection device and a control device.
  • the camera is used to collect the current image
  • the pose detection device is used to collect the current pose of the movable platform
  • the control device includes a processor
  • the processor is used to obtain the current pose and the current image; acquire map points matching the current image; generate compensation parameters according to a corrected pose calculated from the current pose and the map points; and correct the current pose of the movable platform within a predetermined duration according to the compensation parameters.
  • the non-transitory computer-readable storage medium in the embodiment of the present application includes a computer program.
  • the processors are caused to execute the control method.
  • the control method includes obtaining a current pose and a current image collected by a movable platform; obtaining map points matching the current image; generating compensation parameters according to the current pose and a corrected pose, where the corrected pose is determined from the map points; and correcting the pose of the movable platform within a predetermined duration according to the compensation parameters.
  • in the control method, control device, movable platform and non-volatile computer-readable storage medium of the embodiments of the present application, the current pose and the current image collected by the movable platform are obtained, the map points matching the current image are calculated, the corrected pose is calculated from those map points, and the compensation parameters are then determined.
  • compared with directly calculating a target pose and adjusting the movable platform to it, which is easily affected by the output frame rate of the target pose and therefore lowers the accuracy of pose adjustment, adjusting the pose within the predetermined duration through the compensation parameters prevents the deviations of multiple frames of poses within that duration from continuously accumulating, so the accuracy of pose adjustment is high.
  • FIG. 1 is a schematic structural diagram of a movable platform provided by an embodiment of the present application.
  • Fig. 2 is a schematic flowchart of a control method provided by an embodiment of the present application.
  • Fig. 3 is a schematic flowchart of a control method provided by an embodiment of the present application.
  • Fig. 4 is a schematic flowchart of a control method provided by an embodiment of the present application.
  • Fig. 5 is a schematic flowchart of a control method provided by an embodiment of the present application.
  • Fig. 6 is a schematic flowchart of a control method provided by an embodiment of the present application.
  • Fig. 7 is a schematic flowchart of a control method provided by an embodiment of the present application.
  • Fig. 8 is a schematic flowchart of a control method provided by an embodiment of the present application.
  • Fig. 9 is a schematic diagram of connection between a processor and a computer-readable storage medium provided by an embodiment of the present application.
  • in the description of the present application, "plurality" means two or more, unless otherwise specifically defined.
  • the terms "installation", "connected" and "connection" should be interpreted broadly: for example, a connection may be a mechanical connection, an electrical connection, or a communicative connection; it may be a direct connection or an indirect connection through an intermediary; and it may be internal communication between two components or an interaction between two components.
  • one or more drones fly along a predetermined flight path to complete tasks such as watering, spraying pesticides, etc. in the area where the crops of the farm are located.
  • in another example, the UAV performs a one-tap receding-short-film task, in which the starting point and the return point must coincide; therefore, to ensure that the UAV does not deviate from the predetermined flight route when performing a fixed-trajectory task, the pose of the UAV at any position needs to be detected so that the pose can be adjusted.
  • the embodiment of the present application provides a control method applied to the control device 100, and the control method includes: 011: obtaining the current pose and the current image collected by the movable platform 1000; 012: obtaining map points matching the current image; 013: generating compensation parameters according to the current pose and a corrected pose, the corrected pose being determined from the map points; and 014: correcting the pose of the movable platform 1000 within a predetermined duration according to the compensation parameters.
  • the embodiment of the present application also provides a movable platform 1000 , including a camera 200 , a pose detection device 300 and a control device 100 .
  • the camera 200 is used to collect the current image
  • the pose detection device 300 is used to collect the current pose of the movable platform 1000
  • the control device 100 includes a processor 10 and a memory 20
  • the memory 20 is used to store instructions
  • the processor 10 invokes the instructions stored in the memory 20; the instructions are used to obtain the current pose and the current image, obtain map points matching the current image, generate compensation parameters according to a corrected pose calculated from the current pose and the map points, and correct the current pose of the movable platform 1000 within a predetermined duration according to the compensation parameters. That is to say, step 011, step 012, step 013 and step 014 may be implemented by the processor 10.
  • the movable platform 1000 may be an unmanned aerial vehicle, an unmanned vehicle, or a ground remote-controlled robot. Taking the movable platform 1000 as an unmanned aerial vehicle performing a repetitive-trajectory task (such as flying along a predetermined trajectory) as an example, while the drone is moving the camera 200 mounted on it captures the current image in real time; the camera 200 can shoot towards the ground, for example with its optical axis perpendicular to the ground, so as to capture a wider range of ground images.
  • the pose detection device 300 detects the current pose of the drone in real time and can include a GPS positioning module and an attitude detection module (such as an inertial measurement unit (IMU)); the GPS positioning module can obtain the drone's current three-dimensional position coordinates, and the attitude detection module can obtain the attitude of the drone, such as its pitch angle, roll angle and yaw angle.
  • the acquisition frame rate of the current pose and the current image can be the same or different.
  • the acquisition frame rate of the current image refers to the number of image frames acquired per second, and the acquisition frame rate of the current pose refers to the number of detected 3D position coordinates and attitudes per second.
  • if the acquisition frame rates of the current pose and the current image are the same, the current pose and the current image can be placed in one-to-one correspondence.
  • if the acquisition frame rates differ, then when the acquisition frame rate of the current pose is greater than that of the current image, each frame of the current image can be paired with the current pose whose acquisition time differs least from its own; conversely, when the acquisition frame rate of the current pose is less than that of the current image, each frame of the current pose can be paired with the current image whose acquisition time differs least from its own. In this way the current image and the current pose are in one-to-one correspondence, which facilitates subsequent calculations.
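  • as an illustration of the pairing described above, the following sketch (not part of the patent; the timestamp fields and data layout are assumptions) pairs each current image with the current pose whose acquisition time is closest to it:

      import bisect

      def pair_images_with_poses(image_stamps, pose_stamps):
          # Both inputs are sorted lists of acquisition times in seconds.
          # For every image, pick the pose whose timestamp has the smallest
          # time difference to the image's timestamp.
          pairs = []
          for t in image_stamps:
              i = bisect.bisect_left(pose_stamps, t)
              candidates = [j for j in (i - 1, i) if 0 <= j < len(pose_stamps)]
              best = min(candidates, key=lambda j: abs(pose_stamps[j] - t))
              pairs.append(best)
          return pairs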
  • after the processor 10 acquires the current image each time, it matches the current image against the pre-established map database of the area in which the repeated trajectory lies, thereby determining the map points in the map database that match the current image; for example, it determines a matching image for the current image in the map database and then obtains the map points corresponding to that matching image.
  • the map points can indicate the pose the UAV should be in when it is located at certain feature points on the map recorded when the map database was established. Therefore, after obtaining the map points matched by the current image, the corrected pose can be calculated from those map points; for example, based on the PnP algorithm, the corrected pose can be calculated from the map points matched by the current image. In addition, the corrected pose can be optimized through an objective function to minimize the reprojection error.
  • the objective function is as follows: T1 = argmin over T1 of Σ ‖N_ij − π(T1·M_j)‖², where T1 represents the corrected pose, n_i is one of the feature points in the feature point set F_c of the current image, N_ij is the observation point of map point M_j on the i-th current image, and π(T1·M_j) is the projection model of the pinhole camera, which projects a three-dimensional point onto the image plane to form a projection point. In this way, reprojection-error correction can be performed on the three-dimensional position coordinates and attitude information of the corrected pose, improving the accuracy of pose correction.
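  • a minimal sketch of this step, assuming OpenCV's PnP solvers as the concrete implementation (the patent only names "the PnP algorithm" and does not prescribe a library); the corrected pose T1 is first estimated from the matched map points and then refined by minimizing the reprojection error:

      import cv2
      import numpy as np

      def corrected_pose_from_map_points(map_points_3d, observations_2d, K):
          # map_points_3d: (N, 3) world coordinates of the map points matched to the current image.
          # observations_2d: (N, 2) pixel coordinates of the corresponding feature points.
          # K: (3, 3) camera intrinsic matrix.
          ok, rvec, tvec, _ = cv2.solvePnPRansac(
              map_points_3d.astype(np.float32),
              observations_2d.astype(np.float32), K, None)
          if not ok:
              return None
          # Refine by minimizing the reprojection error sum ||N_ij - pi(T1 * M_j)||^2.
          rvec, tvec = cv2.solvePnPRefineLM(
              map_points_3d.astype(np.float32),
              observations_2d.astype(np.float32), K, None, rvec, tvec)
          R, _ = cv2.Rodrigues(rvec)
          T1 = np.eye(4)
          T1[:3, :3], T1[:3, 3] = R, tvec.ravel()   # corrected pose (world -> camera)
          return T1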
  • compensation parameters are then calculated based on the difference between the current pose and the corrected pose; for example, a mapping function between the current pose and the corrected pose is established and used as the compensation parameter.
  • the processor 10 corrects the current pose based on the compensation parameters, so that the current pose is in the corrected pose within a predetermined period of time.
  • the predetermined period of time can be 2 frames, 3 frames, 4 frames, 10 frames, etc.
  • the duration of each frame may be the same as the frame duration corresponding to the acquisition frame rate of the current image or the current pose. Even if the output frame rate of the corrected pose decreases because of a weak GPS signal, a failure to calculate the corrected pose, or the like, the pose adjustment is still performed according to the compensation parameters, which prevents the deviations of multiple frames of poses within the predetermined duration from continuously accumulating, so the accuracy of pose adjustment is high.
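  • a sketch of how the compensation parameter could be applied frame by frame within the predetermined duration; the 4x4 homogeneous-matrix representation of a pose and the left-multiplication convention are assumptions made for illustration:

      import numpy as np

      def apply_compensation(raw_poses, delta_T, window=10):
          # raw_poses: list of 4x4 current poses (numpy arrays) reported frame by frame.
          # delta_T:   4x4 compensation parameter from the latest corrected pose.
          # window:    predetermined duration expressed as a number of frames.
          corrected = []
          for T in raw_poses[:window]:
              corrected.append(delta_T @ T)   # correct each frame so deviations do not accumulate
          return corrected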
  • in the control method, the control device 100 and the movable platform 1000 of the embodiments of the present application, the current pose and the current image collected by the movable platform 1000 are obtained, the map points matching the current image are calculated, and the corrected pose is calculated from those map points so as to determine the compensation parameters.
  • compared with directly calculating a target pose and adjusting the movable platform 1000 to it, which is easily affected by the output frame rate of the target pose and therefore lowers the accuracy of pose adjustment, adjusting the pose within the predetermined duration through the compensation parameters prevents the deviations of multiple frames of poses within that duration from continuously accumulating, so the accuracy of pose adjustment is high.
  • in some embodiments, step 012 further includes:
  • 0121: Obtain a matching image matching the current image and the map points corresponding to the matching image;
  • 0122: Perform feature matching on the current image and the matching image to determine the matching feature points in the matching image that match the current image; and
  • 0123: Obtain the map points corresponding to the matching feature points.
  • in some embodiments, the processor 10 further invokes the instructions stored in the memory 20 to obtain a matching image matching the current image and the map points corresponding to the matching image; perform feature matching on the current image and the matching image to determine the matching feature points in the matching image that match the current image; and obtain the map points corresponding to the matching feature points. That is to say, step 0121, step 0122 and step 0123 may be implemented by the processor 10.
  • specifically, when obtaining the map points matching the current image, the matching image matching the current image may first be obtained from the map database; the matching image includes a plurality of feature points, each of which corresponds to a map point. "Matching" between the current image and the matching image may mean that their similarity is greater than a predetermined similarity (such as 80%, 90%, etc.), so the matching image may contain feature points that do not match the current image.
  • after feature-point matching is performed on the current image and the matching image, the matching feature points in the matching image that match the current image can be determined, and thus the map points corresponding to those matching feature points can be determined.
  • in this way, the map points matching the current image (i.e., the map points corresponding to the matching feature points) can be determined; a map point can include a feature point and the pose information corresponding to that feature point.
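  • a sketch of the feature matching between the current image and the matching image; ORB features, a brute-force Hamming matcher and a 0.75 ratio test are illustrative assumptions, since the patent does not name a particular feature type or matcher:

      import cv2

      def matching_feature_points(current_img, matching_img, ratio=0.75):
          orb = cv2.ORB_create(nfeatures=2000)
          kp1, des1 = orb.detectAndCompute(current_img, None)
          kp2, des2 = orb.detectAndCompute(matching_img, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
          good = []
          for pair in matcher.knnMatch(des1, des2, k=2):
              if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                  # pair[0].trainIdx indexes a matching feature point in the matching
                  # image, whose associated map point can then be looked up.
                  good.append((kp1[pair[0].queryIdx], kp2[pair[0].trainIdx]))
          return good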
  • the current pose includes first position information and first attitude information
  • the first attitude information includes a first pitch angle, a first roll angle, and a first yaw angle
  • the corrected pose includes second position information and second attitude information
  • the second attitude information includes the second pitch angle, the second roll angle and the second yaw angle
  • step 013 includes:
  • 0131: Replace the second pitch angle and the second roll angle with the first pitch angle and the first roll angle, respectively, to generate the replaced second attitude information; and
  • 0132: Calculate the compensation parameters according to the first position information, the first attitude information, the second position information and the replaced second attitude information.
  • in some embodiments, the processor 10 further invokes the instructions stored in the memory 20 to replace the second pitch angle and the second roll angle with the first pitch angle and the first roll angle, respectively, so as to generate the replaced second attitude information; and to calculate the compensation parameters according to the first position information, the first attitude information, the second position information and the replaced second attitude information. That is to say, step 0131 and step 0132 may be implemented by the processor 10.
  • specifically, when calculating the compensation parameters, considering that the pitch and roll angles measured by the IMU contain essentially no error and are highly accurate, and that the accumulated deviation lies mainly in the three-dimensional position coordinates and the yaw angle, the second pitch angle and the second roll angle of the corrected pose can be replaced by the first pitch angle and the first roll angle currently collected by the IMU to generate the replaced second attitude information, so that the pitch angle and the roll angle of the drone remain essentially unchanged before and after correction.
  • the compensation parameters are calculated according to the first position information of the current pose, the first attitude information, the second position information of the corrected pose, and the replaced second attitude information.
  • the processor 10 can calculate the compensation parameter according to the following formula: ΔT = T2 · T1⁻¹, where ΔT represents the compensation parameter, T1 represents the current pose and T2 represents the corrected pose after replacement. In this way, the compensation parameter can be calculated quickly.
  • there may be multiple compensation parameters, each corresponding to one pose parameter: the three-dimensional position coordinates comprise the three position parameters x, y and z, and the current attitude comprises the three attitude parameters roll (ROLL), pitch (PITCH) and yaw (YAW); that is to say, the position coordinates in the current pose can be corrected through the position parameters, and the attitude in the current pose can be corrected through the attitude parameters. In this way, the introduction of PITCH and ROLL errors is avoided, and the accuracy of pose correction is improved.
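  • a sketch of the formula ΔT = T2 · T1⁻¹ with the pitch/roll replacement described above; SciPy's Rotation class and a yaw-pitch-roll ("ZYX") Euler convention are assumptions made for illustration:

      import numpy as np
      from scipy.spatial.transform import Rotation as R

      def compensation(T1, T2, imu_pitch, imu_roll):
          # T1: 4x4 current pose, T2: 4x4 corrected pose.
          yaw2, _, _ = R.from_matrix(T2[:3, :3]).as_euler("ZYX")
          # Keep the corrected yaw, but trust the IMU for pitch and roll.
          T2r = T2.copy()
          T2r[:3, :3] = R.from_euler("ZYX", [yaw2, imu_pitch, imu_roll]).as_matrix()
          return T2r @ np.linalg.inv(T1)   # delta_T = T2 * T1^-1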
  • in some embodiments, the control method further includes: 015: Establish a map database according to the multiple frames of captured images collected by the movable platform 1000 and the pose information corresponding to the captured images.
  • the processor 10 invokes the instructions stored in the memory 20 to establish a map database according to the multi-frame captured images captured by the mobile platform 1000 and the pose information corresponding to the captured images. That is to say, step 015 may be implemented by the processor 10 .
  • when a repeated-trajectory task is performed, a global map of the repeated trajectory needs to be established in order to generate the map database: the UAV moves along the path corresponding to the repeated-trajectory task and controls the camera 200 to continuously capture images and pose information.
  • the collected images and pose information can be in one-to-one correspondence, which is convenient for subsequent calculation of map points.
  • after the UAV has moved from the start of the trajectory to its end, the processor 10 can fuse the multiple frames of captured images collected during the movement to generate a global map in which different regions carry corresponding pose information, thereby establishing the global map; the map database can then be established from the global map and the pose information corresponding to it.
  • step 015 includes:
  • 0151: Obtain the pose information of the movable platform 1000 at the time each image is captured, so as to bind the captured image and the pose information;
  • 0152: Identify the feature points of the captured images;
  • 0153: Calculate the map points according to the feature points and the pose information corresponding to the feature points; and
  • 0154: Establish the map database according to the captured images and the map points.
  • in some embodiments, the processor 10 further invokes the instructions stored in the memory 20 to obtain the pose information of the movable platform 1000 at the time each image is captured, so as to bind the captured image and the pose information; identify the feature points of the captured images; calculate the map points according to the feature points and the pose information corresponding to the feature points; and establish the map database according to the captured images and the map points. That is to say, step 0151, step 0152, step 0153 and step 0154 can be implemented by the processor 10.
  • specifically, when establishing the map database, each captured image can be bound to the pose information detected by the pose detection device 300 at the time the image was acquired, so as to realize a one-to-one correspondence between the captured images and the pose information.
  • the processor 10 then identifies the feature points in the captured images and calculates the map points according to the feature points and the pose information corresponding to them; for example, the pose information corresponding to feature points of the same object is used to calculate the map points corresponding to those feature points.
  • the multiple frames of captured images are fused to generate a global map, the map point corresponding to each feature point in the global map is determined, and the global map, the feature points and the map points corresponding to the feature points are associated and stored to generate the map database.
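  • one possible in-memory layout for such a map database, associating each captured image with its bound pose, its feature points and the map points derived from them; the class and field names are illustrative assumptions, not taken from the patent:

      from dataclasses import dataclass, field
      from typing import List
      import numpy as np

      @dataclass
      class MapPoint:
          position: np.ndarray          # 3D coordinates in the world frame
          descriptor: np.ndarray        # feature descriptor used for matching

      @dataclass
      class KeyFrame:
          image_id: int
          pose: np.ndarray              # pose information bound to the captured image
          keypoints: List[np.ndarray]   # 2D feature point locations
          map_point_ids: List[int]      # indices into MapDatabase.map_points

      @dataclass
      class MapDatabase:
          keyframes: List[KeyFrame] = field(default_factory=list)
          map_points: List[MapPoint] = field(default_factory=list)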
  • step 0153 includes:
  • 01531: Determine the key frame image and the non-key frame images among the multiple frames of captured images, the key frame image being any frame among them;
  • 01532: Perform feature matching between the key frame image and the non-key frame images, so as to obtain the first feature points in the key frame image whose number of successful matches is greater than a predetermined number;
  • 01533: Calculate third position information according to a first feature point and the second feature points in the non-key frame images that successfully match that first feature point; and
  • 01534: Generate map points according to the third position information and the pose information corresponding to the first feature points.
  • in some embodiments, the processor 10 further invokes the instructions stored in the memory 20 to determine the key frame image and the non-key frame images among the multiple frames of captured images, the key frame image being any frame among them;
  • perform feature matching between the key frame image and the non-key frame images to obtain the first feature points in the key frame image whose number of successful matches is greater than a predetermined number; calculate third position information according to a first feature point and the second feature points in the non-key frame images that successfully match it; and generate map points according to the third position information and the pose information corresponding to the first feature points. That is to say, step 01531, step 01532, step 01533 and step 01534 can be implemented by the processor 10.
  • specifically, the key frame image and the non-key frame images can be determined among the consecutive multiple frames of captured images, and the key frame can be any frame among those consecutive frames.
  • after the key frame image is feature-matched against the non-key frame images, the feature points of the key frame whose number of successful matches is greater than the predetermined number are taken as the first feature points; the predetermined number can be 3, 4, 5, etc., and the larger the predetermined number, the higher the accuracy of the calculated map points.
  • the first feature point and its corresponding second feature points can then be used to calculate the third position information of that first feature point; for example, the processor 10 can calculate the third position information according to the PnP algorithm. The processor 10 then generates a map point according to the third position information and the pose information corresponding to the first feature point. It can be understood that the three-dimensional position coordinates in the pose information are those of the center of the corresponding captured image, while the first feature point may not lie at the center of the captured image; therefore, the third position information can replace the three-dimensional position coordinates in the pose information so as to generate the map point corresponding to the first feature point. In this way, the map points corresponding to all the first feature points in the key frame image can be obtained.
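  • the patent names the PnP algorithm for computing the third position information; as an illustrative stand-in, the sketch below recovers the 3D position of a first feature point from one key-frame observation and one non-key-frame observation by two-view triangulation, assuming known 3x4 [R|t] pose matrices and OpenCV:

      import cv2
      import numpy as np

      def third_position(K, pose_kf, pose_nkf, pt_kf, pt_nkf):
          # pose_kf, pose_nkf: 3x4 [R|t] matrices of the key frame and a non-key frame.
          # pt_kf, pt_nkf: matching 2D observations of the same first feature point.
          P1, P2 = K @ pose_kf, K @ pose_nkf
          X = cv2.triangulatePoints(P1, P2,
                                    np.asarray(pt_kf, float).reshape(2, 1),
                                    np.asarray(pt_nkf, float).reshape(2, 1))
          return (X[:3] / X[3]).ravel()   # homogeneous -> third position information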
  • in some embodiments, the control method further includes: 016: Establish a bag-of-words database according to the key frame images.
  • Step 0121 includes:
  • 01211: Obtain, from the bag-of-words database, the key frame image matching the current image as the matching image; and
  • 01212: Obtain the map points corresponding to the first feature points included in the matching image.
  • in some embodiments, the processor 10 further invokes the instructions stored in the memory 20 to establish a bag-of-words database according to the key frame images; obtain, from the bag-of-words database, the key frame image matching the current image as the matching image; and obtain the map points corresponding to the first feature points included in the matching image. That is to say, step 016, step 01211 and step 01212 may be implemented by the processor 10.
  • a bag-of-words database may be established according to the key frame image, for feature matching with the current image.
  • when matching, the feature points of the current image can be identified, the key frame image whose feature points are most similar to those of the current image is then found in the bag-of-words database as the matching image, and the map points corresponding to the first feature points in the matching image are then obtained from the map database, so that the map points matching the current image are obtained.
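  • a sketch of the bag-of-words retrieval described above; a k-means visual vocabulary with normalized word histograms and cosine similarity is an assumption for illustration (dedicated vocabularies such as DBoW-style trees are common in practice):

      import numpy as np

      class BagOfWords:
          def __init__(self, vocabulary):
              self.vocabulary = vocabulary      # (K, D) cluster centers of descriptors
              self.entries = []                 # list of (keyframe_id, histogram)

          def _histogram(self, descriptors):
              # Assign every descriptor to its nearest visual word.
              descriptors = np.asarray(descriptors, dtype=float)
              d = np.linalg.norm(descriptors[:, None, :] - self.vocabulary[None], axis=2)
              hist = np.bincount(d.argmin(axis=1), minlength=len(self.vocabulary)).astype(float)
              return hist / (np.linalg.norm(hist) + 1e-9)

          def add_keyframe(self, keyframe_id, descriptors):
              self.entries.append((keyframe_id, self._histogram(descriptors)))

          def best_match(self, descriptors):
              q = self._histogram(descriptors)
              # The stored key frame with the highest similarity is the matching image.
              return max(self.entries, key=lambda e: float(q @ e[1]))[0]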
  • the key frame images and the corresponding map points acquired within a predetermined time can also be optimized, specifically through an optimization function: the key frame images and the corresponding map points are input into the optimization function, and the key frame images and corresponding map points that minimize the output of the optimization function are output, thereby optimizing the key frame images and the corresponding map points.
  • the optimization function is as follows: E_ij = ‖N_ij − π(R_i·M_j + t_i)‖₂, where KF_i represents a frame in the key frame set KF_set within the predetermined time, M_j is one of the map point set M_set within the predetermined time, N_ij is the observation point of map point M_j on the i-th current image, π(R_i·M_j + t_i) is the projection model of the pinhole camera, which projects a three-dimensional point onto the image plane to form a projection point, R_i and t_i represent the mapping relationship that transforms map point M_j from the world coordinate system to the image coordinate system, and E_ij, the two-norm error between the projection point and the observation point, is the output value of the optimization function.
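  • a sketch of minimizing E_ij over the key frames KF_set and map points M_set acquired within the predetermined time, using a generic least-squares solver; the angle-axis-plus-translation parameterization of each key-frame pose is an assumption made for illustration:

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation as R

      def residuals(params, n_kf, observations, K):
          # params: 6 values (angle-axis r_i, translation t_i) per key frame,
          # followed by the 3D coordinates of every map point M_j.
          poses = params[:6 * n_kf].reshape(n_kf, 6)
          points = params[6 * n_kf:].reshape(-1, 3)
          res = []
          for i, j, obs in observations:             # observation N_ij of M_j on frame i
              Ri = R.from_rotvec(poses[i, :3]).as_matrix()
              p_cam = Ri @ points[j] + poses[i, 3:]  # R_i * M_j + t_i
              proj = K @ p_cam
              res.append(proj[:2] / proj[2] - obs)   # pi(R_i M_j + t_i) - N_ij
          return np.concatenate(res)

      def optimize(initial_params, n_kf, observations, K):
          return least_squares(residuals, initial_params,
                               args=(n_kf, observations, K)).x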
  • since the starting point and the end point of the trajectory may be at the same location, there may be a deviation from the starting point when the UAV reaches the end point. Therefore, when the current key frame image is obtained, it is matched against the key frame images stored in the bag-of-words database to determine the similarity between the current key frame image and each stored key frame image; if there is a stored key frame image whose similarity is greater than a similarity threshold (such as 90%, 95%, etc.), it can be determined that the UAV has returned to the end point but with a certain positional deviation.
  • the stored key frame image is then used to correct the current key frame image: for example, the current key frame image is corrected according to the mapping relationship between it and the stored key frame image whose similarity is greater than the preset similarity threshold, and the map points corresponding to the current key frame image are corrected according to the mapping relationship between them and the map points, in the bag-of-words database, corresponding to that stored key frame image. For example, the current key frame image may be replaced by the stored key frame image whose similarity is greater than the similarity threshold, and the map points corresponding to the current key frame image may be replaced by the map points corresponding to that stored key frame image.
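  • a sketch of this end-of-trajectory check: the current key frame is compared against the stored key frames and, when the similarity threshold is exceeded, it and its map points are replaced by the stored ones; the similarity function and the dictionary of stored map points are assumptions made for illustration:

      def close_loop(current_kf, current_map_points, stored_kfs, stored_map_points,
                     similarity, threshold=0.90):
          # similarity(a, b) returns a score in [0, 1] between two key frame images.
          best = max(stored_kfs, key=lambda kf: similarity(current_kf, kf))
          if similarity(current_kf, best) > threshold:
              # The UAV has returned to the end point with some positional deviation:
              # replace the current key frame and its map points by the stored ones.
              return best, stored_map_points[best.image_id]
          return current_kf, current_map_points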
  • the embodiment of the present application also provides a non-volatile computer-readable storage medium 400 containing a computer program 402;
  • when the computer program 402 is executed by one or more processors 10, the processors 10 are caused to execute the control method of any of the above embodiments.
  • for example, when the computer program 402 is executed by the one or more processors 10, the processors 10 are caused to perform the following steps:
  • 011: Obtain the current pose and the current image collected by the movable platform;
  • 012: Obtain map points matching the current image;
  • 013: Generate compensation parameters according to the current pose and a corrected pose, the corrected pose being determined from the map points; and
  • 014: Correct the pose of the movable platform within a predetermined duration according to the compensation parameters.
  • for another example, when the computer program 402 is executed by the one or more processors 10, the processors 10 are caused to perform the following steps:
  • 0121: Obtain a matching image matching the current image and the map points corresponding to the matching image;
  • 0122: Perform feature matching on the current image and the matching image to determine the matching feature points in the matching image that match the current image; and
  • 0123: Obtain the map points corresponding to the matching feature points.
  • the flow diagrams corresponding to the various embodiments show an execution order of the actions, which is only an exemplary description; the execution order of the actions may be changed as needed, and, where no conflict arises, one or more embodiments may be combined or split to suit different application scenarios, which is not described in detail here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A control method, a control device, a movable platform, and a storage medium. The control method comprises: (011): obtaining a current pose and a current image collected by a movable platform; (012): obtaining map points matching the current image; (013): generating compensation parameters according to the current pose and a corrected pose, wherein the corrected pose is determined according to the map points; and (014): correcting the pose of the movable platform within a predetermined duration according to the compensation parameters.

Description

Control method, control device, movable platform and storage medium
Technical Field
The present application relates to the technical field of control, and in particular to a control method, a control device, a movable platform and a non-volatile computer-readable storage medium.
Background
With the advancement of technology, movable platforms (such as unmanned aerial vehicles and unmanned vehicles) are replacing humans in more and more production tasks, for example moving along a fixed trajectory to perform monitoring or predetermined tasks such as watering. As the movable platform moves, it may deviate from the fixed trajectory because of obstacles and other factors, so the target pose that the movable platform should currently be in needs to be output at all times in order to adjust its pose. However, factors such as a weak GPS signal or a failed pose calculation may lower the output frame rate of the pose, so the target pose output for the current frame may actually be the target pose the movable platform should have been in several frames earlier, and the accuracy of pose adjustment is low.
Summary of the Invention
Embodiments of the present application provide a control method, a control device, a movable platform and a storage medium.
The control method of the embodiments of the present application includes obtaining a current pose and a current image collected by a movable platform; obtaining map points matching the current image; generating compensation parameters according to the current pose and a corrected pose, where the corrected pose is determined from the map points; and correcting the pose of the movable platform within a predetermined duration according to the compensation parameters.
The control device of the embodiments of the present application is applied to a movable platform and includes a processor and a memory; the memory is used to store instructions, and the processor invokes the instructions stored in the memory to implement the following operations: obtain the current pose and the current image collected by the movable platform; obtain map points matching the current image; generate compensation parameters according to a corrected pose calculated from the current pose and the map points; and correct the current pose of the movable platform within a predetermined duration according to the compensation parameters.
The movable platform of the embodiments of the present application includes a camera, a pose detection device and a control device. The camera is used to collect the current image, the pose detection device is used to collect the current pose of the movable platform, and the control device includes a processor; the processor is used to obtain the current pose and the current image, obtain map points matching the current image, generate compensation parameters according to a corrected pose calculated from the current pose and the map points, and correct the current pose of the movable platform within a predetermined duration according to the compensation parameters.
The non-volatile computer-readable storage medium of the embodiments of the present application includes a computer program; when the computer program is executed by one or more processors, the processors are caused to execute the control method. The control method includes obtaining a current pose and a current image collected by a movable platform; obtaining map points matching the current image; generating compensation parameters according to the current pose and a corrected pose, where the corrected pose is determined from the map points; and correcting the pose of the movable platform within a predetermined duration according to the compensation parameters.
In the control method, control device, movable platform and non-volatile computer-readable storage medium of the embodiments of the present application, the current pose and the current image collected by the movable platform are obtained, the map points matching the current image are calculated, the corrected pose is calculated from those map points, and the compensation parameters are then determined. Compared with directly calculating a target pose and adjusting the movable platform to it, which is easily affected by the output frame rate of the target pose and therefore lowers the accuracy of pose adjustment, adjusting the pose within the predetermined duration through the compensation parameters prevents the deviations of multiple frames of poses within that duration from continuously accumulating, so the accuracy of pose adjustment is high.
Brief Description of the Drawings
Fig. 1 is a schematic structural diagram of a movable platform provided by an embodiment of the present application.
Fig. 2 is a schematic flowchart of a control method provided by an embodiment of the present application.
Fig. 3 is a schematic flowchart of a control method provided by an embodiment of the present application.
Fig. 4 is a schematic flowchart of a control method provided by an embodiment of the present application.
Fig. 5 is a schematic flowchart of a control method provided by an embodiment of the present application.
Fig. 6 is a schematic flowchart of a control method provided by an embodiment of the present application.
Fig. 7 is a schematic flowchart of a control method provided by an embodiment of the present application.
Fig. 8 is a schematic flowchart of a control method provided by an embodiment of the present application.
Fig. 9 is a schematic diagram of the connection between a processor and a computer-readable storage medium provided by an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, and examples of the embodiments are shown in the drawings, in which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, are only intended to explain the present application, and should not be construed as limiting the present application.
In the description of the present application, "plurality" means two or more, unless otherwise specifically defined. In the description of the present application, it should be noted that, unless otherwise clearly specified and limited, the terms "installation", "connected" and "connection" should be interpreted broadly: for example, a connection may be a mechanical connection, an electrical connection, or a communicative connection; it may be a direct connection or an indirect connection through an intermediary; and it may be internal communication between two components or an interaction between two components. Those of ordinary skill in the art can understand the specific meanings of the above terms in this application according to the specific situation.
At present, drones have many applications that require no manual operation by the user and can be realized through automatic flight. For example, one or more drones fly along a predetermined flight path to complete tasks such as watering or spraying pesticides over the area where a farm's crops are located. As another example, a drone performs a one-tap receding-short-film task; therefore, to ensure that the drone does not deviate from the predetermined flight route when performing a fixed-trajectory task, and that the starting point and the return point coincide when performing the one-tap receding-short-film task, the pose of the drone at any position needs to be detected so that the pose can be adjusted. However, factors such as a weak GPS signal or a failed pose calculation lower the output frame rate of the drone's pose, so the output pose may not be the pose the drone should be in for the current frame but the pose it should have been in several frames earlier, and the accuracy of pose adjustment is poor.
Referring to Fig. 1 and Fig. 2, an embodiment of the present application provides a control method applied to the control device 100. The control method includes:
011: obtaining the current pose and the current image collected by the movable platform 1000;
012: obtaining map points matching the current image;
013: generating compensation parameters according to the current pose and a corrected pose, the corrected pose being determined from the map points; and
014: correcting the pose of the movable platform 1000 within a predetermined duration according to the compensation parameters.
The embodiment of the present application also provides a movable platform 1000, which includes a camera 200, a pose detection device 300 and a control device 100. The camera 200 is used to collect the current image, and the pose detection device 300 is used to collect the current pose of the movable platform 1000. The control device 100 includes a processor 10 and a memory 20; the memory 20 is used to store instructions, and the processor 10 invokes the instructions stored in the memory 20 to obtain the current pose and the current image, obtain map points matching the current image, generate compensation parameters according to a corrected pose calculated from the current pose and the map points, and correct the current pose of the movable platform 1000 within a predetermined duration according to the compensation parameters. That is to say, step 011, step 012, step 013 and step 014 may be implemented by the processor 10.
Specifically, the movable platform 1000 may be an unmanned aerial vehicle, an unmanned vehicle, or a ground remote-controlled robot. Taking the movable platform 1000 as an unmanned aerial vehicle performing a repetitive-trajectory task (such as flying along a predetermined trajectory) as an example, while the drone is moving the camera 200 mounted on it captures the current image in real time; the camera 200 can shoot towards the ground, for example with its optical axis perpendicular to the ground, so as to capture a wider range of ground images. The pose detection device 300 detects the current pose of the drone in real time and can include a GPS positioning module and an attitude detection module (such as an inertial measurement unit (IMU)); the GPS positioning module can obtain the drone's current three-dimensional position coordinates, and the attitude detection module can obtain the attitude of the drone, such as its pitch angle, roll angle and yaw angle.
The acquisition frame rates of the current pose and the current image may be the same or different. The acquisition frame rate of the current image refers to the number of image frames acquired per second, and the acquisition frame rate of the current pose refers to the number of three-dimensional position coordinates and attitudes detected per second. If the two acquisition frame rates are the same, the current pose and the current image can be placed in one-to-one correspondence. If they differ, for example when the acquisition frame rate of the current pose is greater than that of the current image, each frame of the current image can be paired with the current pose whose acquisition time differs least from its own; conversely, when the acquisition frame rate of the current pose is less than that of the current image, each frame of the current pose can be paired with the current image whose acquisition time differs least from its own. In this way the current image and the current pose are in one-to-one correspondence, which facilitates subsequent calculations.
After the processor 10 acquires the current image each time, it matches the current image against the pre-established map database of the area in which the repeated trajectory lies, thereby determining the map points in the map database that match the current image; for example, it determines a matching image for the current image in the map database and then obtains the map points corresponding to that matching image.
The map points can indicate the pose the drone should be in when it is located at certain feature points on the map recorded when the map database was established. Therefore, after the map points matched by the current image are obtained, the corrected pose can be calculated from those map points; for example, based on the PnP algorithm, the corrected pose can be calculated from the map points matched by the current image. In addition, the corrected pose can be optimized through an objective function to minimize the reprojection error. The objective function is as follows:
T1 = argmin over T1 of Σ ‖N_ij − π(T1·M_j)‖²,
where T1 represents the corrected pose, n_i is one of the feature points in the feature point set F_c of the current image, N_ij is the observation point of map point M_j on the i-th current image, and π(T1·M_j) is the projection model of the pinhole camera, which projects a three-dimensional point onto the image plane to form a projection point. In this way, reprojection-error correction can be performed on the three-dimensional position coordinates and attitude information of the corrected pose, improving the accuracy of pose correction.
Compensation parameters are then calculated based on the difference between the current pose and the corrected pose; for example, a mapping function between the current pose and the corrected pose is established and used as the compensation parameter.
The processor 10 then corrects the current pose based on the compensation parameters, so that the pose within the predetermined duration is kept at the corrected pose. The predetermined duration can be 2 frames, 3 frames, 4 frames, 10 frames, and so on, and the duration of each frame may be the same as the frame duration corresponding to the acquisition frame rate of the current image or the current pose. Even if the output frame rate of the corrected pose decreases because of a weak GPS signal, a failure to calculate the corrected pose, or the like, the pose adjustment is still performed according to the compensation parameters, which prevents the deviations of multiple frames of poses within the predetermined duration from continuously accumulating, so the accuracy of pose adjustment is high.
In the control method, the control device 100 and the movable platform 1000 of the embodiments of the present application, the current pose and the current image collected by the movable platform 1000 are obtained, the map points matching the current image are calculated, and the corrected pose is calculated from those map points so as to determine the compensation parameters. Compared with directly calculating a target pose and adjusting the movable platform 1000 to it, which is easily affected by the output frame rate of the target pose and therefore lowers the accuracy of pose adjustment, adjusting the pose within the predetermined duration through the compensation parameters prevents the deviations of multiple frames of poses within that duration from continuously accumulating, so the accuracy of pose adjustment is high.
Referring to Fig. 1 and Fig. 3, in some embodiments, step 012 further includes:
0121: obtaining a matching image matching the current image and the map points corresponding to the matching image;
0122: performing feature matching on the current image and the matching image to determine the matching feature points in the matching image that match the current image; and
0123: obtaining the map points corresponding to the matching feature points.
In some embodiments, the processor 10 further invokes the instructions stored in the memory 20 to obtain a matching image matching the current image and the map points corresponding to the matching image; perform feature matching on the current image and the matching image to determine the matching feature points in the matching image that match the current image; and obtain the map points corresponding to the matching feature points. That is to say, step 0121, step 0122 and step 0123 may be implemented by the processor 10.
Specifically, when obtaining the map points matching the current image, the matching image matching the current image may first be obtained from the map database. The matching image includes a plurality of feature points, each of which corresponds to a map point; "matching" between the current image and the matching image may mean that their similarity is greater than a predetermined similarity (such as 80%, 90%, etc.), so the matching image may contain feature points that do not match the current image. After feature-point matching is performed on the current image and the matching image, the matching feature points in the matching image that match the current image can be determined, and thus the map points corresponding to those matching feature points can be determined. In this way, the map points matching the current image (i.e., the map points corresponding to the matching feature points) can be determined; a map point can include a feature point and the pose information corresponding to that feature point.
Referring to Fig. 1 and Fig. 4, in some embodiments, the current pose includes first position information and first attitude information, the first attitude information including a first pitch angle, a first roll angle and a first yaw angle, and the corrected pose includes second position information and second attitude information, the second attitude information including a second pitch angle, a second roll angle and a second yaw angle. Step 013 includes:
0131: replacing the second pitch angle and the second roll angle with the first pitch angle and the first roll angle, respectively, to generate the replaced second attitude information; and
0132: calculating the compensation parameters according to the first position information, the first attitude information, the second position information and the replaced second attitude information.
In some embodiments, the processor 10 further invokes the instructions stored in the memory 20 to replace the second pitch angle and the second roll angle with the first pitch angle and the first roll angle, respectively, so as to generate the replaced second attitude information; and to calculate the compensation parameters according to the first position information, the first attitude information, the second position information and the replaced second attitude information. That is to say, step 0131 and step 0132 may be implemented by the processor 10.
Specifically, when calculating the compensation parameters, considering that the pitch and roll angles measured by the IMU contain essentially no error and are highly accurate, and that the accumulated deviation lies mainly in the three-dimensional position coordinates and the yaw angle, the second pitch angle and the second roll angle of the corrected pose can be replaced by the first pitch angle and the first roll angle currently collected by the IMU to generate the replaced second attitude information, so that the pitch angle and the roll angle of the drone remain essentially unchanged before and after correction.
The compensation parameters are then calculated according to the first position information and the first attitude information of the current pose, the second position information of the corrected pose, and the replaced second attitude information. For example, the processor 10 can calculate the compensation parameter according to the following formula:
ΔT = T2 · T1⁻¹, where ΔT represents the compensation parameter, T1 represents the current pose, and T2 represents the corrected pose after replacement. In this way, the compensation parameter can be calculated quickly. There may be multiple compensation parameters, each corresponding to one pose parameter: the three-dimensional position coordinates comprise the three position parameters x, y and z, and the current attitude comprises the three attitude parameters roll (ROLL), pitch (PITCH) and yaw (YAW); that is to say, the position coordinates in the current pose can be corrected through the position parameters, and the attitude in the current pose can be corrected through the attitude parameters. In this way, the introduction of PITCH and ROLL errors is avoided, and the accuracy of pose correction is improved.
Referring to FIG. 1 and FIG. 5, in some embodiments the control method further includes:
015: establishing a map database according to the multiple frames of captured images collected by the movable platform 1000 and the pose information corresponding to the captured images.
In some embodiments, the processor 10 invokes the instructions stored in the memory 20 to establish a map database according to the multiple frames of captured images collected by the movable platform 1000 and the pose information corresponding to the captured images. That is to say, step 015 may be implemented by the processor 10.
Specifically, when performing a repeated-trajectory task, a global map of the repeated trajectory needs to be built in order to generate the map database. The UAV moves along the path corresponding to the repeated-trajectory task and controls the camera 200 to continuously collect captured images and pose information; the captured images and the pose information can correspond one to one, which facilitates the subsequent computation of map points.
After the UAV has moved from the start point of the trajectory to the end point, the processor 10 can fuse the multiple frames of captured images collected during the movement to generate a global map, with corresponding pose information available for the different regions of the global map. The global map is thereby established, and the map database can then be built from the global map and its corresponding pose information.
Referring to FIG. 1 and FIG. 6, in some embodiments step 015 includes:
0151: obtaining the pose information of the movable platform 1000 at the time the captured image is collected, so as to bind the captured image and the pose information;
0152: identifying feature points of the captured images;
0153: calculating map points according to the feature points and the pose information corresponding to the feature points; and
0154: establishing the map database according to the captured images and the map points.
In some embodiments, the processor 10 invokes the instructions stored in the memory 20 to obtain the pose information of the movable platform 1000 at the time the captured image is collected, so as to bind the captured image and the pose information; to identify feature points of the captured images; to calculate map points according to the feature points and the pose information corresponding to the feature points; and to establish the map database according to the captured images and the map points. That is to say, steps 0151, 0152, 0153 and 0154 may be implemented by the processor 10.
Specifically, when establishing the map database, the pose information detected by the pose detection device 300 at the moment the captured image is acquired can be bound to that captured image, so as to achieve a one-to-one correspondence between the captured images and the pose information.
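A minimal sketch of this binding is given below; the interfaces (Frame, camera.capture(), pose_sensor.read()) are assumptions for the example and are not part of the patent.

```python
# Minimal sketch: bind each captured image to the pose measured at capture time.
from dataclasses import dataclass
import numpy as np

@dataclass
class Frame:
    image: np.ndarray   # captured image
    pose: np.ndarray    # 4x4 pose of the platform when the image was captured

def record_frames(camera, pose_sensor, num_frames):
    """Collect (image, pose) pairs so they stay in one-to-one correspondence."""
    return [Frame(image=camera.capture(), pose=pose_sensor.read())
            for _ in range(num_frames)]
```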
The processor 10 then identifies the feature points in the captured images and calculates map points according to the feature points and the pose information corresponding to the feature points. For example, the processor 10 may align multiple consecutive frames of captured images and calculate the map point corresponding to a feature point from the pose information associated with the feature points that represent the same object across those consecutive frames. Finally, the multiple frames of captured images are fused to generate the global map, the map point corresponding to each feature point in the global map is determined, and the global map, the feature points, and the map points corresponding to the feature points are stored in association to generate the map database.
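The sketch below illustrates one possible shape of such a map database, using ORB features purely as a stand-in detector; the data layout and names are assumptions, not the patent's implementation.

```python
# Hedged sketch: detect feature points in each captured image and store them
# together with the bound pose, forming the entries of a map database.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)

def build_map_database(frames):
    """frames: iterable of (image, pose_4x4) pairs captured along the trajectory."""
    database = []
    for image, pose in frames:
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        keypoints, descriptors = orb.detectAndCompute(gray, None)
        database.append({
            "pose": pose,                 # pose bound to this captured image
            "keypoints": keypoints,       # feature points of the image
            "descriptors": descriptors,   # used later for feature matching
            "map_points": {},             # filled once 3D map points are computed
        })
    return database
```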
Referring to FIG. 1 and FIG. 7, in some embodiments step 0153 includes:
01531: determining a key frame image and non-key frame images among the multiple frames of captured images, the key frame image being any one of the multiple frames of captured images;
01532: performing feature matching between the key frame image and the non-key frame images to obtain first feature points of the key frame image whose number of successful matches is greater than a predetermined number;
01533: calculating third position information according to the first feature points and the second feature points in the non-key frame images that are successfully matched with the first feature points; and
01534: generating map points according to the third position information and the pose information corresponding to the first feature points.
In some embodiments, the processor 10 invokes the instructions stored in the memory 20 to determine a key frame image and non-key frame images among the multiple frames of captured images, the key frame image being any one of the multiple frames of captured images; to perform feature matching between the key frame image and the non-key frame images to obtain first feature points of the key frame image whose number of successful matches is greater than a predetermined number; to calculate third position information according to the first feature points and the second feature points in the non-key frame images that are successfully matched with the first feature points; and to generate map points according to the third position information and the pose information corresponding to the first feature points. That is to say, steps 01531, 01532, 01533 and 01534 may be implemented by the processor 10.
Specifically, when calculating map points from the pose information corresponding to the feature points, consecutive frames of captured images are highly similar, so the probability that the same feature points appear in them is high. Therefore, a key frame image and non-key frame images can be determined among the consecutive frames of captured images, and the key frame may be any one of those consecutive frames.
Feature matching is then performed between the key frame and the non-key frames to find, in the non-key frames, feature points that match the feature points of the key frame. If a feature point of the key frame has a matching feature point in a predetermined number of non-key frames, that feature point of the key frame is determined to be a first feature point whose number of successful matches is greater than the predetermined number, where the value of the predetermined number of frames equals the value of the predetermined number of matches. The predetermined number may be 3, 4, 5, and so on; the larger the predetermined number, the higher the accuracy of the calculated map points.
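A rough sketch of this selection step is shown below, assuming binary (ORB-style) descriptors and a brute-force matcher; the threshold handling and names are illustrative only.

```python
# Sketch: select "first feature points" of the key frame, i.e. key-frame features
# matched in more than a predetermined number of non-key frames.
import cv2
import numpy as np

def select_first_features(key_desc, non_key_descs, predetermined_count=3):
    """key_desc: ORB descriptors of the key frame (N x 32, uint8).
    non_key_descs: list of descriptor arrays, one per non-key frame.
    Returns indices of key-frame features matched in more than `predetermined_count` frames."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    hits = np.zeros(len(key_desc), dtype=int)
    for desc in non_key_descs:
        for m in matcher.match(key_desc, desc):
            hits[m.queryIdx] += 1          # one successful match for this key-frame feature
    return np.flatnonzero(hits > predetermined_count)
```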
After the multiple first feature points in the key frame image and their corresponding second feature points in the non-key frames are determined, the third position information of each first feature point can be calculated from the first feature point and its corresponding second feature points; for example, the processor 10 may calculate the third position information according to a PnP algorithm. The processor 10 can then generate a map point from the third position information and the pose information corresponding to the first feature point. It can be understood that the three-dimensional position coordinates in the pose information are those of the center of the corresponding captured image, whereas the first feature point may not lie at the center of the captured image; therefore, the third position information can be used to replace the three-dimensional position coordinates in the pose information, thereby generating the map point corresponding to the first feature point. In this way, the map points corresponding to all the first feature points in the key frame image can be obtained.
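The description above mentions a PnP-style computation for the third position information. As a simple stand-in, the sketch below recovers a 3D point by two-view triangulation between the key frame and one non-key frame, assuming the camera intrinsics K and the frame poses are known; it is an illustration, not the patent's algorithm.

```python
# Illustrative sketch: recover the 3D position of a matched feature by two-view
# triangulation (a substitute for the PnP computation mentioned in the text).
import cv2
import numpy as np

def triangulate_feature(K, pose_key, pose_non_key, pt_key, pt_non_key):
    """pose_*: 4x4 world-to-camera transforms; pt_*: 2D pixel coordinates (x, y)."""
    P1 = K @ pose_key[:3, :]       # 3x4 projection matrix of the key frame
    P2 = K @ pose_non_key[:3, :]   # 3x4 projection matrix of the non-key frame
    X = cv2.triangulatePoints(P1, P2,
                              np.asarray(pt_key, dtype=float).reshape(2, 1),
                              np.asarray(pt_non_key, dtype=float).reshape(2, 1))
    return (X[:3] / X[3]).ravel()  # homogeneous -> 3D point ("third position information")
```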
Referring to FIG. 1 and FIG. 8, in some embodiments the control method further includes:
016: establishing a bag-of-words database according to the key frame images;
Step 0121 includes:
01211: obtaining, from the bag-of-words database, the key frame image that matches the current image, to serve as the matching image; and
01212: obtaining the map points corresponding to the first feature points included in the matching image.
In some embodiments, the processor 10 invokes the instructions stored in the memory 20 to establish a bag-of-words database according to the key frame images; to obtain, from the bag-of-words database, the key frame image that matches the current image to serve as the matching image; and to obtain the map points corresponding to the first feature points included in the matching image. That is to say, steps 016, 01211 and 01212 may be implemented by the processor 10.
Specifically, after the key frame images and their first feature points are obtained, a bag-of-words database can be built from the key frame images for feature matching with the current image. During matching, the feature points of the current image are identified, the key frame image with the highest similarity to the feature points of the current image is found in the bag-of-words database to serve as the matching image, and the map points corresponding to the first feature points of that matching image are then obtained from the map database, yielding the map points that match the current image. In this way, feature matching can be performed quickly through the bag-of-words database storing the key frame images, the key frame image matching the current image can be found, and the map points matching the current image can be obtained quickly.
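One possible bag-of-words sketch is shown below: the k-means visual vocabulary is an assumption (the patent does not say how the word bag is built), each key frame is described by a normalized word histogram, and the most similar key frame is taken as the matching image.

```python
# Rough bag-of-words sketch for picking the key frame that best matches the current image.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_vocabulary(all_descriptors, n_words=500):
    """Cluster stacked keyframe descriptors into `n_words` visual words."""
    return MiniBatchKMeans(n_clusters=n_words, random_state=0).fit(
        np.vstack(all_descriptors).astype(np.float32))

def bow_vector(vocab, descriptors):
    """Normalized histogram of visual words for one image."""
    words = vocab.predict(descriptors.astype(np.float32))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def best_matching_keyframe(vocab, keyframe_descs, current_desc):
    """Return the index of the key frame most similar to the current image."""
    current = bow_vector(vocab, current_desc)
    scores = [bow_vector(vocab, d) @ current for d in keyframe_descs]
    return int(np.argmax(scores))
```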
In some embodiments, the key frame images acquired within a predetermined time and their corresponding map points may be optimized. This is done through an optimization function: the key frame images and the corresponding map points are input to the optimization function, and the key frame images and corresponding map points that minimize the output value of the optimization function are output, thereby optimizing the key frame images and the corresponding map points. The optimization function is as follows:

E = Σ_{KFi ∈ KF_set} Σ_{Mj ∈ M_set} E_ij

E_ij = ||N_ij - π(R_i·M_j + t_i)||²

where KFi denotes a frame in the key frame set KF_set within the predetermined time, M_j is one of the map point set M_set within the predetermined time, N_ij is the observation of map point M_j on the i-th current image, π(R_i·M_j + t_i) is the projection model of the pinhole camera, which projects a three-dimensional point onto the image plane to form a projection point, R_i and t_i describe the mapping that transforms map point M_j from the world coordinate system to the image coordinate system, E_ij is the two-norm error between the projection point and the observation point, and E is the output value of the optimization function.
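A hedged sketch of this objective is given below: it evaluates the reprojection residuals N_ij - π(R_i·M_j + t_i) and hands them to a generic least-squares solver. The state layout (a rotation vector and translation per key frame, plus the 3D map points) is an assumption made for the example, not the patent's formulation.

```python
# Illustrative sketch of minimizing the reprojection-error objective above.
import numpy as np
import cv2
from scipy.optimize import least_squares

def residuals(params, K, observations, n_kf, n_pts):
    """params: [rvec_0, t_0, ..., rvec_{n_kf-1}, t_{n_kf-1}, X_0, ..., X_{n_pts-1}].
    observations: list of (i, j, uv) where uv is the observation N_ij of map point j in key frame i."""
    cams = params[:6 * n_kf].reshape(n_kf, 6)
    pts = params[6 * n_kf:].reshape(n_pts, 3)
    res = []
    for i, j, uv in observations:
        proj, _ = cv2.projectPoints(pts[j].reshape(1, 3),
                                    cams[i, :3].reshape(3, 1),
                                    cams[i, 3:].reshape(3, 1), K, None)
        res.extend(proj.ravel() - np.asarray(uv, dtype=float))  # projection minus observation
    return np.asarray(res)

def optimize(initial_params, K, observations, n_kf, n_pts):
    """Refine key-frame poses and map points so the total reprojection error is minimal."""
    return least_squares(residuals, initial_params,
                         args=(K, observations, n_kf, n_pts))
```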
In some embodiments, when the UAV builds the map database for a repeated-trajectory task, the start point and the end point of the trajectory may be the same location, yet when the UAV reaches the end point its position may deviate from the start point. Therefore, when the current key frame image is obtained, it is feature-matched against the key frame images already stored in the bag-of-words database to determine the similarity between the current key frame image and the stored key frame images. If there is a stored key frame image whose similarity is greater than a similarity threshold (e.g., 90%, 95%), it can be determined that the UAV has returned to the end point but with some positional deviation. The processor 10 can therefore correct the current key frame image based on the stored key frame image whose similarity exceeds the threshold, for example by correcting the current key frame image according to the mapping relationship between the key frame image whose similarity is greater than the preset similarity threshold and the current key frame image, and by correcting the map points corresponding to the current key frame image according to the mapping relationship between the map points corresponding to that key frame image in the bag-of-words database and the map points corresponding to the current key frame image. For instance, the current key frame image may be replaced with the stored key frame image whose similarity exceeds the similarity threshold, and the map points corresponding to the current key frame image may be replaced with the map points corresponding to that stored key frame image.
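A simple sketch of this replacement-style correction is given below, assuming the dictionary-based key frame records and the bag-of-words similarity from the earlier sketches; the threshold value is only an example.

```python
# Sketch: if a stored key frame is similar enough to the current key frame,
# reuse the stored key frame image and its map points in place of the current ones.
def correct_with_stored_keyframe(current_kf, stored_keyframes, similarity, threshold=0.9):
    best = max(stored_keyframes, key=lambda kf: similarity(kf, current_kf))
    if similarity(best, current_kf) > threshold:
        current_kf["image"] = best["image"]             # replace the key frame image
        current_kf["map_points"] = best["map_points"]   # replace its map points
    return current_kf
```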
Referring to FIG. 9, an embodiment of the present application further provides a non-volatile computer-readable storage medium 400 containing a computer program 402. When the computer program 402 is executed by one or more processors 10, the processors 10 are caused to execute the control method of any of the above embodiments.
For example, with reference to FIG. 2, when the computer program 402 is executed by one or more processors 10, the processors 10 are caused to perform the following steps (the overall flow is sketched after the list):
011: obtaining the current pose and the current image collected by the movable platform;
012: obtaining map points matching the current image;
013: generating compensation parameters according to the current pose and a corrected pose, the corrected pose being determined according to the map points; and
014: correcting the pose of the movable platform within a predetermined time period according to the compensation parameters.
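Under the assumptions used in the earlier sketches (all platform methods and helper functions here are illustrative names, not the patent's API), the four steps can be strung together roughly as follows:

```python
# End-to-end sketch of steps 011-014; match_map_points, solve_corrected_pose and the
# platform interface are hypothetical, and compensation() is the earlier sketch.
def control_step(platform, map_db, vocab, compensation_valid_for=1.0):
    current_pose, current_image = platform.read_pose(), platform.capture()      # 011
    map_points = match_map_points(map_db, vocab, current_image)                 # 012
    corrected_pose = solve_corrected_pose(map_points, current_image)
    delta_T = compensation(current_pose.position, current_pose.rpy,             # 013
                           corrected_pose.position, corrected_pose.rpy)
    platform.apply_pose_compensation(delta_T, duration=compensation_valid_for)  # 014
```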
For another example, with reference to FIG. 3, when the computer program 402 is executed by one or more processors 10, the processors 10 are caused to perform the following steps:
0121: obtaining a matching image that matches the current image and the map points corresponding to the matching image;
0122: performing feature matching on the current image and the matching image to determine matching feature points in the matching image that match the current image; and
0123: obtaining the map points corresponding to the matching feature points.
It can be understood that, where the figures corresponding to the various embodiments show a sequence for performing the actions, that sequence is merely illustrative; the order of the actions may be varied as needed. In addition, where there is no contradiction or conflict, the embodiments may be combined or split into one or more embodiments to suit different application scenarios, which will not be elaborated here.
In the description of this specification, references to the terms "one embodiment", "some embodiments", "exemplary embodiment", "an example", "a specific example", or "some examples" mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable steps for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially concurrent manner or in the reverse order, depending on the functions involved. This should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them).
Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (29)

  1. A control method, characterized by comprising:
    obtaining a current pose and a current image collected by a movable platform;
    obtaining map points matching the current image;
    generating compensation parameters according to the current pose and a corrected pose, the corrected pose being determined according to the map points; and
    correcting the pose of the movable platform within a predetermined time period according to the compensation parameters.
  2. The control method according to claim 1, characterized in that the obtaining map points matching the current image comprises:
    obtaining a matching image that matches the current image and map points corresponding to the matching image;
    performing feature matching on the current image and the matching image to determine matching feature points in the matching image that match the current image; and
    obtaining the map points corresponding to the matching feature points.
  3. The control method according to claim 1, characterized in that the current pose comprises first position information and first attitude information, the first attitude information comprising a first pitch angle, a first roll angle, and a first yaw angle; the corrected pose comprises second position information and second attitude information, the second attitude information comprising a second pitch angle, a second roll angle, and a second yaw angle; and the generating compensation parameters according to the current pose and the corrected pose calculated from the map points comprises:
    replacing the second pitch angle and the second roll angle with the first pitch angle and the first roll angle, respectively, to generate replaced second attitude information; and
    calculating the compensation parameters according to the first position information, the first attitude information, the second position information, and the replaced second attitude information.
  4. The control method according to claim 2, characterized by further comprising:
    establishing the map database according to multiple frames of captured images collected by the movable platform and pose information corresponding to the captured images.
  5. The control method according to claim 4, characterized in that the establishing the map database according to the multiple frames of captured images collected by the movable platform and the pose information corresponding to the captured images comprises:
    obtaining the pose information of the movable platform at the time the captured image is collected, so as to bind the captured image and the pose information;
    identifying feature points of the captured images;
    calculating the map points according to the feature points and the pose information corresponding to the feature points; and
    establishing the map database according to the captured images and the map points.
  6. The control method according to claim 5, characterized in that the calculating the map points according to the feature points and the pose information corresponding to the feature points comprises:
    determining a key frame image and non-key frame images among the multiple frames of captured images, the key frame image being any one of the multiple frames of captured images;
    performing feature matching between the key frame image and the non-key frame images to obtain first feature points of the key frame image whose number of successful matches is greater than a predetermined number;
    calculating third position information according to the first feature points and second feature points in the non-key frame images that are successfully matched with the first feature points; and
    generating the map points according to the third position information and the pose information corresponding to the first feature points.
  7. The control method according to claim 6, characterized by further comprising:
    establishing a bag-of-words database according to the key frame images;
    wherein the obtaining a matching image that matches the current image and map points corresponding to the matching image comprises:
    obtaining, from the bag-of-words database, the key frame image that matches the current image, to serve as the matching image; and
    obtaining the map points corresponding to the first feature points included in the matching image.
  8. The control method according to claim 7, characterized by further comprising:
    after a current key frame image is obtained, determining whether the bag-of-words database contains a key frame image whose similarity is greater than a preset similarity threshold; and
    if so, correcting the current key frame image according to the mapping relationship between the key frame image in the bag-of-words database whose similarity is greater than the preset similarity threshold and the current key frame image, and correcting the map points corresponding to the current key frame image according to the mapping relationship between the map points corresponding to that key frame image in the bag-of-words database and the map points corresponding to the current key frame image.
  9. The control method according to claim 6, characterized by further comprising:
    inputting the key frame images collected within a preset time period and the map points corresponding to the key frame images into a preset optimization function, so as to output the key frame images and the corresponding map points that minimize the output value of the optimization function.
  10. A control device, characterized in that it is applied to a movable platform, the control device comprising a processor and a memory, the memory being configured to store instructions, and the processor invoking the instructions stored in the memory to perform the following operations:
    obtaining a current pose and a current image collected by the movable platform;
    obtaining map points matching the current image, and generating compensation parameters according to the current pose and a corrected pose calculated from the map points; and
    correcting the current pose of the movable platform within a predetermined time period according to the compensation parameters.
  11. The control device according to claim 10, characterized in that the processor invokes the instructions stored in the memory to further perform the following operations: obtaining a matching image that matches the current image and map points corresponding to the matching image; performing feature matching on the current image and the matching image to determine matching feature points in the matching image that match the current image; and obtaining the map points corresponding to the matching feature points.
  12. The control device according to claim 10, characterized in that the current pose comprises first position information and first attitude information, the first attitude information comprising a first pitch angle, a first roll angle, and a first yaw angle; the corrected pose comprises second position information and second attitude information, the second attitude information comprising a second pitch angle, a second roll angle, and a second yaw angle; and the processor is further configured to replace the second pitch angle and the second roll angle with the first pitch angle and the first roll angle, respectively, to generate replaced second attitude information, and to calculate the compensation parameters according to the first position information, the first attitude information, the second position information, and the replaced second attitude information.
  13. The control device according to claim 11, characterized in that the processor invokes the instructions stored in the memory to further perform the following operation: establishing the map database according to multiple frames of captured images collected by the movable platform and pose information corresponding to the captured images.
  14. The control device according to claim 13, characterized in that the processor invokes the instructions stored in the memory to further perform the following operations: obtaining the pose information of the movable platform at the time the captured image is collected, so as to bind the captured image and the pose information; identifying feature points of the captured images; calculating the map points according to the feature points and the pose information corresponding to the feature points; and establishing the map database according to the captured images and the map points.
  15. The control device according to claim 14, characterized in that the processor invokes the instructions stored in the memory to further perform the following operations: determining a key frame image and non-key frame images among the multiple frames of captured images, the key frame image being any one of the multiple frames of captured images; performing feature matching between the key frame image and the non-key frame images to obtain first feature points of the key frame image whose number of successful matches is greater than a predetermined number; calculating third position information according to the first feature points and second feature points in the non-key frame images that are successfully matched with the first feature points; and generating the map points according to the third position information and the pose information corresponding to the first feature points.
  16. The control device according to claim 15, characterized in that the processor invokes the instructions stored in the memory to further perform the following operations: establishing a bag-of-words database according to the key frame images; obtaining, from the bag-of-words database, the key frame image that matches the current image, to serve as the matching image; and obtaining the map points corresponding to the first feature points included in the matching image.
  17. The control device according to claim 16, characterized in that the processor invokes the instructions stored in the memory to further perform the following operations: after a current key frame image is obtained, determining whether the bag-of-words database contains a key frame image whose similarity is greater than a preset similarity threshold; and, when such a key frame image exists, correcting the current key frame image according to the mapping relationship between the key frame image whose similarity is greater than the preset similarity threshold and the current key frame image, and correcting the map points corresponding to the current key frame image according to the mapping relationship between the map points corresponding to that key frame image in the bag-of-words database and the map points corresponding to the current key frame image.
  18. The control device according to claim 15, characterized in that the processor invokes the instructions stored in the memory to further perform the following operation: inputting the key frame images collected within a preset time period and the map points corresponding to the key frame images into a preset optimization function, so as to output the key frame images and the corresponding map points that minimize the output value of the optimization function.
  19. A movable platform, characterized in that the movable platform comprises a camera, a pose detection device, and a control device, the camera being configured to collect a current image and the pose detection device being configured to collect a current pose of the movable platform; the control device comprises a processor configured to obtain the current pose and the current image; obtain map points matching the current image; generate compensation parameters according to the current pose and a corrected pose calculated from the map points; and correct the current pose of the movable platform within a predetermined time period according to the compensation parameters.
  20. The movable platform according to claim 19, characterized in that the processor is further configured to obtain a matching image that matches the current image and map points corresponding to the matching image; perform feature matching on the current image and the matching image to determine matching feature points in the matching image that match the current image; and obtain the map points corresponding to the matching feature points.
  21. The movable platform according to claim 20, characterized in that the current pose comprises first position information and first attitude information, the first attitude information comprising a first pitch angle, a first roll angle, and a first yaw angle; the corrected pose comprises second position information and second attitude information, the second attitude information comprising a second pitch angle, a second roll angle, and a second yaw angle; and the processor is further configured to replace the second pitch angle and the second roll angle with the first pitch angle and the first roll angle, respectively, to generate replaced second attitude information, and to calculate the compensation parameters according to the first position information, the first attitude information, the second position information, and the replaced second attitude information.
  22. The movable platform according to claim 20, characterized in that the processor is further configured to establish the map database according to multiple frames of captured images collected by the movable platform and pose information corresponding to the captured images.
  23. The movable platform according to claim 22, characterized in that the processor is configured to obtain the pose information of the movable platform at the time the captured image is collected, so as to bind the captured image and the pose information; identify feature points of the captured images; calculate the map points according to the feature points and the pose information corresponding to the feature points; and establish the map database according to the captured images and the map points.
  24. The movable platform according to claim 23, characterized in that the processor is further configured to determine a key frame image and non-key frame images among the multiple frames of captured images, the key frame image being any one of the multiple frames of captured images; perform feature matching between the key frame image and the non-key frame images to obtain first feature points of the key frame image whose number of successful matches is greater than a predetermined number; calculate third position information according to the first feature points and second feature points in the non-key frame images that are successfully matched with the first feature points; and generate the map points according to the third position information and the pose information corresponding to the first feature points.
  25. The movable platform according to claim 24, characterized in that the processor is further configured to establish a bag-of-words database according to the key frame images; obtain, from the bag-of-words database, the key frame image that matches the current image, to serve as the matching image; and obtain the map points corresponding to the first feature points included in the matching image.
  26. The movable platform according to claim 25, characterized in that the processor is further configured to, after a current key frame image is obtained, determine whether the bag-of-words database contains a key frame image whose similarity is greater than a preset similarity threshold; and, when such a key frame image exists, correct the current key frame image according to the mapping relationship between the key frame image whose similarity is greater than the preset similarity threshold and the current key frame image, and correct the map points corresponding to the current key frame image according to the mapping relationship between the map points corresponding to that key frame image in the bag-of-words database and the map points corresponding to the current key frame image.
  27. The movable platform according to claim 24, characterized in that the processor is further configured to input the key frame images collected within a preset time period and the map points corresponding to the key frame images into a preset optimization function, so as to output the key frame images and the corresponding map points that minimize the output value of the optimization function.
  28. The movable platform according to claim 19, characterized in that the movable platform comprises an unmanned aerial vehicle, an unmanned vehicle, or a ground remote-controlled robot.
  29. A non-volatile computer-readable storage medium containing a computer program, characterized in that, when the computer program is executed by one or more processors, the processors are caused to execute the control method according to any one of claims 1 to 9.
PCT/CN2021/112546 2021-08-13 2021-08-13 Control method, control device, movable platform, and storage medium WO2023015566A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180006263.1A CN114730471A (en) 2021-08-13 2021-08-13 Control method, control device, movable platform and storage medium
PCT/CN2021/112546 WO2023015566A1 (en) 2021-08-13 2021-08-13 Control method, control device, movable platform, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/112546 WO2023015566A1 (en) 2021-08-13 2021-08-13 Control method, control device, movable platform, and storage medium

Publications (1)

Publication Number Publication Date
WO2023015566A1 true WO2023015566A1 (en) 2023-02-16

Family

ID=82235527

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/112546 WO2023015566A1 (en) 2021-08-13 2021-08-13 Control method, control device, movable platform, and storage medium

Country Status (2)

Country Link
CN (1) CN114730471A (en)
WO (1) WO2023015566A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115661376B (en) * 2022-12-28 2023-04-07 深圳市安泽拉科技有限公司 Target reconstruction method and system based on unmanned aerial vehicle image

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007258989A (en) * 2006-03-22 2007-10-04 Eastman Kodak Co Digital camera, composition corrector, and composition correcting method
CN105094138A (en) * 2015-07-15 2015-11-25 东北农业大学 Low-altitude autonomous navigation system for rotary-wing unmanned plane
CN105447853A (en) * 2015-11-13 2016-03-30 深圳市道通智能航空技术有限公司 Flight device, flight control system and flight control method
CN106017463A (en) * 2016-05-26 2016-10-12 浙江大学 Aircraft positioning method based on positioning and sensing device
WO2019084804A1 (en) * 2017-10-31 2019-05-09 深圳市大疆创新科技有限公司 Visual odometry and implementation method therefor
CN111462135A (en) * 2020-03-31 2020-07-28 华东理工大学 Semantic mapping method based on visual S L AM and two-dimensional semantic segmentation
CN112950719A (en) * 2021-01-23 2021-06-11 西北工业大学 Passive target rapid positioning method based on unmanned aerial vehicle active photoelectric platform
CN113120247A (en) * 2019-12-30 2021-07-16 广州科易光电技术有限公司 Cloud deck, cloud deck control method, unmanned aerial vehicle, control system and control method thereof

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861428A (en) * 2023-02-27 2023-03-28 广东粤港澳大湾区硬科技创新研究院 Pose measuring method, device, equipment and storage medium
CN115861428B (en) * 2023-02-27 2023-07-14 广东粤港澳大湾区硬科技创新研究院 Pose measurement method and device, terminal equipment and storage medium
CN116659529A (en) * 2023-05-26 2023-08-29 小米汽车科技有限公司 Data detection method, device, vehicle and storage medium
CN116659529B (en) * 2023-05-26 2024-02-06 小米汽车科技有限公司 Data detection method, device, vehicle and storage medium

Also Published As

Publication number Publication date
CN114730471A (en) 2022-07-08

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21953179

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE