WO2018195857A1 - Control method and device for unmanned aerial vehicle, and obstacle prompting method and device - Google Patents

Control method and device for unmanned aerial vehicle, and obstacle prompting method and device

Info

Publication number
WO2018195857A1
WO2018195857A1 (PCT/CN2017/082190)
Authority
WO
WIPO (PCT)
Prior art keywords
obstacle
information
drone
map
depth data
Prior art date
Application number
PCT/CN2017/082190
Other languages
English (en)
French (fr)
Inventor
张伟 (Zhang Wei)
刘昂 (Liu Ang)
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to PCT/CN2017/082190 priority Critical patent/WO2018195857A1/zh
Priority to CN201780005315.7A priority patent/CN108521807B/zh
Priority to CN202210263027.4A priority patent/CN114815863A/zh
Publication of WO2018195857A1 publication Critical patent/WO2018195857A1/zh
Priority to US16/663,902 priority patent/US11797028B2/en

Classifications

    • G05D1/106 — Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones
    • G05D1/0808 — Control of attitude, i.e. control of roll, pitch, or yaw, specially adapted for aircraft
    • G05D1/0094 — Control of position, course, altitude or attitude involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G05D1/101 — Simultaneous control of position or course in three dimensions, specially adapted for aircraft
    • G05D1/1064 — Change initiated in response to external conditions, specially adapted for avoiding collisions with other aircraft
    • G06T7/70 — Determining position or orientation of objects or cameras
    • G06T7/74 — Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G08G5/0069 — Navigation or guidance aids for a single aircraft, specially adapted for an unmanned aircraft
    • G08G5/04 — Anti-collision systems
    • G08G5/045 — Navigation or guidance aids, e.g. determination of anti-collision manoeuvres
    • B64U10/13 — Flying platforms (rotorcraft UAVs)
    • B64U2101/30 — UAVs specially adapted for imaging, photography or videography
    • B64U2201/00 — UAVs characterised by their flight controls
    • G06T2207/10028 — Range image; depth image; 3D point clouds
    • G06T2207/10032 — Satellite or aerial image; remote sensing
    • G06T2207/30261 — Obstacle (vehicle exterior; vicinity of vehicle)

Definitions

  • The invention relates to the technical field of drones, and in particular to a control method and device for a drone, an obstacle prompting method and device, and a drone.
  • In order to improve safety during flight, existing drones generally incorporate obstacle avoidance functions, which protect not only the drone itself but also ground personnel and objects.
  • When an obstacle is detected, the intelligent control system controls the drone to perform a corresponding obstacle avoidance operation, such as flying around the obstacle or hovering, according to the information acquired by the sensor, thereby preventing the drone from hitting the obstacle.
  • While an obstacle avoidance operation is in progress, however, the operation commands input by the user through the control terminal are blocked by the intelligent control system. Because the interactive device on the control terminal gives no clear indication of this, the user easily gets the illusion that the drone itself has failed or lost control, and cannot understand the current operating state of the drone. The lack of an effective obstacle reminding method reduces the usefulness of drones in certain situations.
  • In view of this, an object of the present invention is to provide a control method and device for a drone, an obstacle prompting method and device, and a drone, so that the user can know the current flight state of the drone.
  • A first aspect of the embodiments of the present invention provides a method for controlling a drone, including: acquiring depth data of obstacles in the flight space; determining, according to the depth data, information of an obstacle that triggers an obstacle avoidance operation; and sending the information of the obstacle to the control terminal of the drone.
  • A second aspect of the embodiments of the present invention provides a method for prompting an obstacle, including: receiving a real-time image captured by the photographing device on the drone; receiving information, sent by the drone, of an obstacle that triggers an obstacle avoidance operation; and mapping the information of the obstacle onto the real-time image displayed by the interaction device.
  • A third aspect of the embodiments of the present invention provides a control device for a drone, including:
  • a depth sensor configured to acquire depth data of obstacles in the flight space; and
  • a processor configured to determine, according to the depth data, information of an obstacle that triggers the obstacle avoidance operation, and to send the information of the obstacle to the control terminal of the drone.
  • A fourth aspect of the embodiments of the present invention provides a prompting device for an obstacle, including:
  • a communication interface configured to receive a real-time image captured by a photographing device on the drone, and to receive information, sent by the drone, of an obstacle that triggers an obstacle avoidance operation; and
  • a processor configured to map the information of the obstacle onto the real-time image displayed by the interaction device.
  • A fifth aspect of the embodiments of the present invention provides a drone, including:
  • a control device for a drone as described in the third aspect.
  • In the embodiments of the present invention, when an obstacle avoidance operation is triggered, the information of the obstacle that triggers it is determined according to the depth data of obstacles in the flight space. This information is then sent to the control terminal of the drone, and on receiving it the control terminal maps the information of the obstacle onto the real-time image displayed by the interaction device.
  • The user can therefore see on the interaction device a real-time image with the obstacle information mapped onto it, and understands that the drone is currently performing an obstacle avoidance operation rather than suffering a fault of its own. Because the user knows which obstacle triggered the avoidance, the user can learn the flight state of the UAV in time; this provides a good obstacle reminding mode.
  • FIG. 1 is a flowchart of a method for controlling a drone according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a flight control process of a drone in a flight space according to an embodiment of the present invention
  • FIG. 3 is a flowchart of a specific control method of a drone according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of a method for prompting an obstacle according to an embodiment of the present invention.
  • FIG. 5 to FIG. 9 are schematic diagrams showing real-time image display effects disclosed in an embodiment of the present invention;
  • FIG. 10 is a schematic structural diagram of a control device of a drone according to an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of a prompting device for an obstacle according to an embodiment of the present invention.
  • the embodiment of the invention discloses a control method for a drone. Referring to FIG. 1, the method includes:
  • Step S101 Acquire depth data of obstacles in the flight space.
  • The control terminal 204 controls the drone to fly in the flight space, and the depth sensor 201 is mounted on the drone.
  • The depth sensor 201 can be installed on the nose of the drone or inside its fuselage; in FIG. 2 the depth sensor 201 is shown installed in the fuselage for illustrative purposes only.
  • When the drone flies in the flight space, the depth sensor 201 acquires the depth data of the obstacles 205 and 206 in the flight space, and a depth image can be obtained from the depth data.
  • The depth sensor may be any sensor capable of acquiring depth data, such as a TOF camera, an RGB camera, a binocular camera, a monocular camera, or a laser radar.
  • By corresponding conversion of the depth data, the point cloud information of the obstacles 205 and 206 can be acquired, from which the location information, contour information, size information, and the like of the obstacles 205 and 206 in the current flight space can be determined.
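The conversion from a depth image to point cloud information mentioned above can be sketched with a standard pinhole back-projection. This is a minimal illustration, not the patent's implementation; the intrinsic parameters `fx`, `fy`, `cx`, `cy` and the sample depth values are assumptions.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into a 3-D point cloud
    using the pinhole camera model. Pixels with no valid depth
    reading (z <= 0) are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    mask = z > 0                      # keep only valid measurements
    return np.stack([x[mask], y[mask], z[mask]], axis=1)

# A tiny 2x2 depth image: one invalid pixel, three readings at 2 m.
depth = np.array([[0.0, 2.0],
                  [2.0, 2.0]])
pts = depth_to_point_cloud(depth, fx=100.0, fy=100.0, cx=0.5, cy=0.5)
print(pts.shape)  # (3, 3)
```

From such a point cloud, the position, contour, and size of each obstacle can then be estimated by grouping nearby points.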
  • The payload 203 is configured on the drone and is connected to the body of the drone through the carrier 202, where the payload may be a photographing device and the carrier 202 may be a component for stabilizing the photographing device, such as a gimbal.
  • The photographing device can photograph objects in the flight space, and the drone can transmit the real-time image captured by the photographing device to the control terminal 204 through the downlink data link of the drone; the control terminal 204 can be configured with an interaction device.
  • The control terminal in this embodiment may be a laptop computer, a tablet computer, a smart phone, a wearable device (a watch or wristband), a ground control station, or the like, or a combination thereof.
  • The interaction device may include a display (e.g., a touch display), a smart phone, a tablet, and the like.
  • Step S102 Determine, according to the depth data, information of an obstacle that triggers an obstacle avoidance operation.
  • The depth data acquired by the depth sensor 201 reflects the distance between the drone and the different obstacles 205 and 206 in the current flight space.
  • During flight, when the obstacle 205 triggers the obstacle avoidance operation of the drone, the drone flies around the obstacle 205 or hovers in front of it.
  • The drone then determines, based on the acquired depth data, the information of the obstacle 205 that triggered the obstacle avoidance operation.
  • The information of the obstacle 205 may include at least one of the location information of the obstacle 205, the depth data of the obstacle 205, and the classification mark of the obstacle 205 (described later in the text and not detailed here).
  • Step S103 Send the information of the obstacle to the control terminal of the drone.
  • When the drone determines the information of the obstacle 205 that triggers the obstacle avoidance operation, it can transmit this information to the control terminal 204 through the downlink data link. After the control terminal 204 obtains the information of the obstacle 205, the display content and/or display manner of the interaction device on the control terminal can be adjusted according to this information, so that the adjusted display indicates the obstacle to the user.
  • Specifically, the information of the obstacle 205 can be mapped onto the real-time image, acquired by the photographing device on the drone, that the control terminal 204 receives: the control terminal 204 maps the information of the obstacle onto the real-time image displayed by the interaction device, so that the user can observe that the obstacle currently triggering the avoidance is the obstacle 205 and that the drone is performing an obstacle avoidance operation on it. That is, the user is prompted that the current uncontrolled behavior of the drone is caused by the obstacle avoidance operation and not by a fault of the drone itself.
  • With the technical solution disclosed in this embodiment, when an obstacle avoidance operation is triggered, the information of the obstacle that triggers it is determined from the depth data of the obstacles in the flight space and sent to the control terminal of the drone, and the control terminal maps this information onto the real-time image displayed by the interaction device. The user can thus learn through the control terminal which obstacle in the flight space triggered the avoidance, realizes that the drone is performing an obstacle avoidance operation rather than suffering a fault of its own, and can keep abreast of the current flight state of the drone.
  • the embodiment of the present invention discloses a specific control method for the unmanned aerial vehicle, including the following steps:
  • Step S201 Obtain depth data of obstacles in the flight space.
  • In this embodiment, after the depth data of the obstacles in the flight space is acquired, the depth data can be correspondingly preprocessed; the depth data here may be a depth image.
  • For example, the depth data may be subjected to morphological filtering to break narrow gaps between obstacles, eliminate isolated noise, and/or smooth obstacle contours, and the subsequent steps can then proceed from the preprocessed depth data.
  • Connected regions can be searched for in the depth data, and the obstacles in the flight space are extracted by finding these connected regions. Further, different obstacles can be separated by their connected regions and then classified and marked.
  • The classification mark can indicate the type of the obstacle, for example, that the obstacle is a person, a car, a building, or the like.
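The connected-region extraction described above can be sketched as a simple 4-connected flood fill over an obstacle mask. This is a minimal illustration of the idea, not the patent's algorithm; the mask values are made up.

```python
import numpy as np
from collections import deque

def label_obstacles(mask):
    """Separate distinct obstacles in a binary obstacle mask by
    4-connected flood fill. Returns a label image and the number
    of connected regions (obstacles) found."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                current += 1                      # start a new obstacle
                q = deque([(i, j)])
                labels[i, j] = current
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current

# Three separate obstacle regions in a toy mask.
mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 1, 0, 1]], dtype=bool)
labels, n = label_obstacles(mask)
print(n)  # 3
```

Each labelled region could then be passed to a classifier to produce the classification mark (person, car, building, and so on).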
  • Step S202 Establish an obstacle map according to the depth data.
  • The obstacles in the flight space are three-dimensionally modeled based on the depth data to obtain the obstacle map.
  • The drone acquires its position information and/or posture information through its sensor system, and establishes the obstacle map according to the depth data together with this position and/or posture information, where the attitude of the depth sensor is determined from the attitude information of the drone.
  • The depth data involved in this step is preferably the preprocessed depth data described above.
  • This embodiment may specifically establish the obstacle map from multi-frame depth data.
  • The depth sensor on the drone acquires depth data of the flight space at a certain frame rate, so during flight it acquires depth data of the obstacles from different positions in the flight space.
  • From the multi-frame depth data acquired at these different positions, combined with the position information and/or posture information of the drone, an obstacle map covering all the areas the drone has flown through can be obtained.
  • Certain obstacles in the flight space may move, so their positions change over time, and one frame of depth data can only reflect the spatial position of an obstacle at one moment.
  • Likewise, one frame of depth data can only reflect the spatial distribution of obstacles as seen from one position of the drone. By combining the multi-frame depth data acquired by the depth sensor, the spatial positions of obstacles at different times can be obtained; that is, the distribution of obstacles in space can be determined more comprehensively.
  • During flight, the drone can update the obstacle map using the latest depth data acquired by the depth sensor, so that the map reflects changes in obstacle positions while still capturing the complete spatial distribution of obstacles, ensuring the accuracy of the obstacle map.
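The fusion of multi-frame depth data with the drone's pose can be sketched as inserting each frame's points into a sparse voxel map. This is a deliberately simplified illustration: it uses a 2-D yaw-only pose and an assumed 0.5 m voxel resolution, whereas a real system would use the full attitude from the sensor system.

```python
import numpy as np

def update_obstacle_map(voxels, points_body, position, yaw, resolution=0.5):
    """Fuse one frame of depth points (UAV body frame) into a sparse
    voxel map, given the UAV's position and yaw. Each point is
    rotated and translated into the world frame, then snapped to a
    voxel index."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    world = points_body @ R.T + position
    for p in world:
        voxels.add(tuple(np.floor(p / resolution).astype(int)))
    return voxels

voxels = set()
frame1 = np.array([[1.0, 0.0, 0.0]])          # obstacle point 1 m ahead
update_obstacle_map(voxels, frame1, np.zeros(3), 0.0)
frame2 = np.array([[1.0, 0.0, 0.0]])          # same bearing, after moving 0.5 m
update_obstacle_map(voxels, frame2, np.array([0.5, 0.0, 0.0]), 0.0)
print(len(voxels))  # 2
```

Updating the same set with the latest frames, as the text describes, lets the map track both moving obstacles and newly observed regions.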
  • Step S203 Acquire information of an obstacle that triggers the obstacle avoidance operation from the obstacle map.
  • Having established the obstacle map, the drone knows the position distribution of obstacles in the flight space and can determine, from the map and its own position, whether to trigger an obstacle avoidance operation.
  • During flight, the drone can query the obstacle map using its position information, posture information, and so on, to obtain information about the obstacles around it; that is, the distance between each surrounding obstacle and the drone can be acquired.
  • The drone can thus determine, by querying the obstacle map, which obstacle in the flight space triggers the obstacle avoidance operation at its current position, and can also obtain from the map the position information, depth data, classification mark, and so on of that obstacle.
  • Step S204 Send the information of the obstacle to the control terminal of the drone.
  • step S204 and step S103 are the same, and are not described here.
  • In an optional embodiment, corresponding weights are configured for at least two frames of the multi-frame depth data according to a preset operation model, and the obstacle map is established according to the configured weights and the depth data.
  • During flight, the distance between the drone and an obstacle in the flight space, that is, between the depth sensor and the obstacle, changes, and the measurement accuracy of the depth sensor differs at different measurement distances. For example, a TOF camera measures depth most accurately when it is about 1 m from the obstacle.
  • The measurement accuracy corresponding to each frame of the multi-frame depth data acquired by the depth sensor therefore differs, so corresponding weights can be configured for the depth data measured at different times according to the preset operation model: when the obstacle is located within the distance range where the depth sensor is most accurate, a higher weight is configured for the acquired frame of depth data, and when the obstacle is outside that range, a lower weight is configured. Establishing the obstacle map from the weighted multi-frame depth data makes the constructed map more accurate.
  • The preset operation model is specifically a measurement accuracy model of the depth sensor.
  • Within the same frame of depth data there may be depth data corresponding to several obstacles at different distances, so the accuracy of the depth data corresponding to each obstacle may differ. In this embodiment, the different obstacles in the same frame may therefore be separated by block processing, and a corresponding weight can then be configured for the depth data of each obstacle according to the distance between the depth sensor and that obstacle and the accuracy model of the depth sensor. Constructing the obstacle map from the weighted depth data gives the map higher precision.
  • Specifically, before the three-dimensional modeling of obstacles using the connected regions, the connected regions are divided into blocks, and each block is assigned a corresponding weight according to the relation model between the measurement accuracy of the depth sensor and the measured distance; the weight of each block is then taken into account in the three-dimensional modeling.
  • This embodiment may also delete from the connected regions any block whose weight is lower than a preset weight threshold.
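The accuracy-weighted fusion described above can be sketched as follows. The Gaussian shape of the accuracy model, its peak at 1 m (the text's TOF example), the spread, and the 0.1 drop threshold are all illustrative assumptions, not values from the patent.

```python
import numpy as np

def accuracy_weight(distance, best=1.0, spread=1.0):
    """Toy measurement-accuracy model: the weight peaks when the
    obstacle sits at the sensor's best range and falls off away
    from it."""
    return np.exp(-((distance - best) ** 2) / (2 * spread ** 2))

def fuse_depth(measurements, threshold=0.1):
    """Weighted fusion of several depth measurements of one obstacle
    block. Blocks whose weight falls below the threshold are dropped,
    mirroring the deletion of low-weight blocks described above."""
    ds = np.asarray(measurements, dtype=float)
    w = accuracy_weight(ds)
    w = np.where(w < threshold, 0.0, w)
    if w.sum() == 0:
        return None                    # no trustworthy measurement left
    return float((w * ds).sum() / w.sum())

# Two frames observe the block near the 1 m sweet spot (high weight);
# a third at 6 m is far outside the accurate range and is dropped.
print(round(fuse_depth([0.9, 1.1, 6.0]), 3))  # 1.0
```

The same per-block weighting would feed into the three-dimensional modeling step, so inaccurate distant readings contribute little to the map.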
  • In an optional embodiment, the portion of the obstacle map to be deleted is determined, and that portion is deleted.
  • Because the storage space of the drone is limited, storing the obstacle map of every area it has flown over would waste storage resources. It is therefore necessary to delete the constructed obstacle map within a specific area, where the specific area may be an area far from the current position of the drone, or an area the drone will not enter again within a preset time; the obstacle map of that specific area is determined and deleted to save the storage resources of the drone.
  • The obstacle map portion to be deleted may be determined according to one or more of the position information of the drone and a preset distance threshold. Specifically, during flight the UAV continuously acquires depth data of obstacles through its depth sensor and constructs the obstacle map from that data; the UAV saves the obstacle map within a specific area containing its current location, and determines the obstacle map outside that area as the portion to be deleted.
  • The specific area may further be determined by a preset distance threshold: for example, it may be the area centered on the current position of the drone with the preset distance threshold as its radius. The drone keeps the obstacle map inside this area and deletes the map built for other areas, so that during flight it only needs to store the obstacle map of the specific area containing its current location, which effectively saves the storage resources of the drone.
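The radius-based pruning described above can be sketched over the sparse voxel representation. The 10 m radius, 0.5 m resolution, and sample voxels are illustrative assumptions.

```python
import numpy as np

def prune_map(voxels, position, radius, resolution=0.5):
    """Keep only voxels whose centres lie within 'radius' metres of
    the UAV's current position, discarding the rest of the obstacle
    map to save on-board storage."""
    keep = set()
    for v in voxels:
        center = (np.array(v) + 0.5) * resolution
        if np.linalg.norm(center - position) <= radius:
            keep.add(v)
    return keep

# One voxel near the drone, one roughly 20 m away.
voxels = {(0, 0, 0), (40, 0, 0)}
kept = prune_map(voxels, np.zeros(3), radius=10.0)
print(kept)  # {(0, 0, 0)}
```

Running this after each map update keeps memory bounded regardless of how far the drone has flown.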
  • In an optional embodiment, the distance information of the gap between obstacles is determined from the obstacle map, the passability of the gap is then evaluated from this distance information, and the passability information is sent to the control terminal. The passability information may include the passability result and the distance of the gap (the minimum distance of the gap, the maximum distance of the gap, the average distance of the gap, and so on).
  • In some flight scenarios, the drone often has to travel between obstacles, but the real-time image the control terminal receives from the drone carries only RGB information, so the user cannot judge through the control terminal whether the gap between the obstacles is passable. If the user misjudges and flies the drone between the obstacles, the drone may collide with an obstacle and cause a flight accident.
  • Therefore, the drone can determine the distance information of the gap between obstacles from the obstacle map, evaluate the passability of the drone based on this distance information, and send the passability result to the control terminal, where the passability can be expressed in the form of a safety factor.
  • The drone can also send the distance information of the gap between the obstacles to the control terminal.
  • The control terminal can map the passability result and the gap distance information onto the real-time image displayed by the interaction device, so that the user can intuitively determine from the passability information on the real-time image whether the drone can pass through the gap between two particular obstacles, or through a gap in an obstacle, which improves the safety of operating the drone.
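The passability evaluation might be sketched as a safety factor comparing the gap's minimum width against the clearance the drone needs. The clearance rule (`drone_size * margin`) and all numeric values are assumptions for illustration.

```python
def gap_passability(gap_min, gap_max, drone_size, margin=1.5):
    """Evaluate whether the UAV fits through a gap between obstacles.
    The safety factor is the gap's minimum width divided by the
    required clearance; a factor below 1.0 means the gap is judged
    not passable."""
    required = drone_size * margin
    factor = gap_min / required
    return {
        "min": gap_min,
        "max": gap_max,
        "safety_factor": round(factor, 2),
        "passable": factor >= 1.0,
    }

# A 2 m (minimum) gap for a 0.6 m drone.
info = gap_passability(gap_min=2.0, gap_max=3.0, drone_size=0.6)
print(info["passable"], info["safety_factor"])  # True 2.22
```

A dictionary like this, sent over the downlink, is the kind of passability information the control terminal could overlay on the real-time image.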
  • In an optional embodiment, a flight path that avoids the obstacle triggering the obstacle avoidance operation is determined based on the obstacle map.
  • Because the obstacle map gives the drone a clear model of the obstacle distribution, obstacle contours, and obstacle sizes in the flight space, when an obstacle triggers the obstacle avoidance operation the drone can determine a flight path avoiding it by querying the obstacle map. For example, the drone may determine the shortest flight path around the obstacle, such as whether to avoid it on the left, on the right, or above; or it may determine the safest flight path. For example, if querying the obstacle map shows that there are other obstacles on the left and right sides of the obstacle but none above it, avoiding the obstacle from above is determined to be the safest flight path.
  • The drone thus determines, by querying the obstacle map, the flight path that avoids the obstacle triggering the obstacle avoidance operation, and avoids the obstacle according to the determined flight path.
  • In an optional embodiment, information about the flight path described above may also be sent to the control terminal.
  • The drone transmits the flight path information to the ground control terminal, and the control terminal maps it onto the real-time image displayed by the interaction device, so that the user can see on the real-time image the avoidance path of the drone and its flight direction along that path.
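The direction-selection logic in the example above (avoid left, right, or above, else hover) might be sketched as a query against the obstacle map. The preference order and the `obstacle_free` lookup are assumptions; the patent only names shortest and safest paths as possible criteria.

```python
def choose_avoidance_direction(obstacle_free, candidates=("left", "right", "above")):
    """Pick an avoidance manoeuvre from obstacle-map queries.
    'obstacle_free' maps each candidate direction to whether the
    map shows that direction clear; if nothing is clear, the drone
    hovers in front of the obstacle."""
    for d in candidates:
        if obstacle_free.get(d):
            return d
    return "hover"

# Map lookup finds obstacles on the left and right but a clear path above.
print(choose_avoidance_direction({"left": False, "right": False, "above": True}))  # above
```

The chosen direction, together with the path itself, is the flight path information that could be overlaid on the real-time image.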
  • Drones generally carry depth sensors or other detecting devices only in the nose section, which can only detect obstacles in the direction of the nose; obstacles in other directions are difficult to detect, which affects the safety of the drone.
  • In an optional embodiment, the current position information of the drone and the obstacle map are used to determine the information of the obstacles around the drone.
  • During flight, the drone can obtain its own position information and thereby determine its position within the constructed obstacle map. By querying the obstacle map near that location, the drone can obtain obstacle information in any direction around it, and the information of obstacles in any direction can be sent to the control terminal, so that the user can more fully understand the obstacles in the flight space where the drone is currently located.
  • The embodiment of the invention also discloses a method for prompting an obstacle. Referring to FIG. 4, the method includes:
  • Step S401 Receive a real-time image taken by the photographing device on the drone.
  • The photographing device is configured on the drone; during flight it captures the target object in the flight space to obtain a real-time image, and the drone transmits the real-time image to the control terminal through the downlink data link. The control terminal may be configured with an interaction device, which displays the real-time image acquired by the photographing device.
  • Step S402: Receive information, sent by the drone, of an obstacle that triggers the obstacle avoidance operation.
  • When an obstacle triggers the obstacle avoidance operation, the drone transmits the information of that obstacle to the control terminal; for how this information is determined, reference may be made to the corresponding content disclosed in the foregoing embodiments, and details are not described here again.
  • Step S403 Mapping the information of the obstacle to the real-time image displayed by the interaction device.
  • specifically, when the control terminal receives the real-time image shown in FIG. 5 and an obstacle in the flight space triggers the obstacle avoidance operation of the drone, the control terminal, upon receiving the information of the obstacle sent by the drone, maps that information onto the real-time image shown in FIG. 5 displayed by the interaction device; that is, the information of the obstacle is displayed on the real-time image in some form. The display here is not limited to directly showing the obstacle information sent by the drone on the real-time image; the information can also be converted into other forms, such as icons (numeric icons, text icons, graphic icons, etc., and combinations thereof), and then displayed on the live image. Further, the information of the obstacle may be mapped in the form of an icon onto the real-time image shown in FIG. 5 displayed by the interaction device, which can be achieved in several feasible ways:
  • the foregoing process of mapping the information of the obstacle onto the real-time image displayed by the interaction device may include: mapping the information of the obstacle onto the obstacle, in the real-time image displayed by the interaction device, that triggers the obstacle avoidance operation.
  • specifically, when the real-time image received from the drone includes the obstacle that triggers the avoidance operation, i.e., the triggering obstacle is within the shooting range of the photographing device of the drone, as shown in FIG. 6, the obstacle that triggers the avoidance operation of the drone is the obstacle 501. The information of the obstacle 501 transmitted from the drone can be mapped onto the obstacle 501 displayed in the real-time image by the interaction device; that is, according to the information of the obstacle 501, an icon is presented on the obstacle 501 in the live image. In this way, when an icon appears on an obstacle in the real-time image, the user knows that the drone is currently performing the obstacle avoidance operation and knows which obstacle in the real-time image triggered it.
  • the above process of mapping the information of the obstacle onto the real-time image displayed by the interaction device may include: mapping the information of the obstacle onto the edge of the real-time image displayed by the interaction device.
  • specifically, as described above, by querying the constructed obstacle map and combining its own position information, the drone can perform avoidance operations on obstacles behind it, to its sides, and in other directions where no avoidance device is installed. In some cases, an obstacle in such a direction is not within the shooting range of the photographing device of the drone; as shown in FIG. 7, the obstacle that triggers the avoidance operation is not in the real-time image. In this case, the relative bearing between the obstacle and the drone is determined according to the information of the obstacle, and the information of the obstacle is mapped in the form of an icon onto the edge of the real-time image displayed by the interaction device. As shown in FIG. 7, by observing the real-time image, the user knows that the drone is currently performing an obstacle avoidance operation and that the triggering obstacle lies to the left of the shooting direction of the photographing device, and can thus know in which direction of the drone the triggering obstacle is.
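The edge mapping described above can be roughly illustrated as follows. This is a minimal sketch under assumed conventions; the patent does not specify the projection, and the function name and angle convention (0° = camera shooting direction, negative = left, ±180° = behind) are hypothetical:

```python
import math

def edge_anchor(bearing_deg, width, height):
    """Map an obstacle's bearing relative to the camera axis (0 = straight
    ahead, negative = left, positive = right, +/-180 = behind) to a pixel
    on the border of a width x height live image."""
    rad = math.radians(bearing_deg)
    # Direction vector in image terms: x grows rightward, y grows downward;
    # "straight ahead" points at the top edge of the image.
    dx, dy = math.sin(rad), -math.cos(rad)
    cx, cy = width / 2.0, height / 2.0
    # Scale the ray from the image centre until it first hits a border.
    scale = min(cx / abs(dx) if dx else float("inf"),
                cy / abs(dy) if dy else float("inf"))
    return (round(cx + dx * scale), round(cy + dy * scale))
```

For the situation of FIG. 7 (obstacle to the left of the shooting direction), a bearing of about −90° would anchor the icon on the left edge of the image, midway up.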
  • the control terminal maps the information of the obstacle onto the real-time image displayed by the interaction device, so that when the UAV triggers the obstacle avoidance operation, the user learns through the control terminal which obstacle in the flight space triggered it. The user will then realize that the drone is currently performing an obstacle avoidance operation rather than mistakenly believing that the drone itself has a fault, so that the user can know the current flight status of the drone.
  • the information of the obstacle is mapped onto the real-time image displayed by the interaction device according to one or more of the attitude information of the photographing device on the drone, the parameter information of the photographing device, and the position information of the drone. Specifically, when the obstacle triggers the avoidance operation of the drone, the drone can transmit the attitude information of the photographing device to the control terminal.
  • the parameter information of the photographing device includes at least one of the focal length, FOV, intrinsic parameters, and extrinsic parameters of the photographing device.
  • the information of the obstacles described above may be mapped to the live image displayed by the interactive device in the form of an icon as previously described.
  • the icon of this embodiment means any icon capable of prompting the obstacle that triggers the avoidance operation; as shown in FIG. 6 and FIG. 7, the information of the obstacle is mapped onto the real-time image displayed by the interaction device in the form of a circular dot-matrix icon.
  • the circular dot-matrix icon here is for illustrative purposes only; those skilled in the art may adopt other forms of icons, such as a triangular dot-matrix icon, a quadrilateral dot-matrix icon, a highlight icon, a box-selection icon, and the like.
  • the highlight icon highlights or flashes on the obstacle that triggers the avoidance operation in the real-time image, or highlights or flashes at the edge of the real-time image.
  • the box-selection icon can box the obstacle that triggers the avoidance operation in the real-time image, and the box-selection icon may further be flashed.
  • the closer an icon is to the edge of the obstacle, the higher its transparency; those skilled in the art may set the icon transparency in other ways, and no specific restriction is made here.
  • the size, color, and the like of the icons may be set by those skilled in the art according to requirements and visual effects, and are not limited herein.
  • the one or more icon parameters of the type, size, color, and transparency of the icon are specifically parameters determined according to the information of the obstacle.
  • the size of the icon may be negatively correlated with the obstacle depth value corresponding to the icon; that is, the larger the obstacle depth value corresponding to an icon, the smaller the icon, and conversely, the smaller the depth value, the larger the icon.
  • in the icon information corresponding to the information of the obstacle, the larger the obstacle depth value corresponding to an icon, the lighter the color of the icon; conversely, the smaller the depth value, the darker the icon.
  • the transparency of the icon may be positively correlated with the obstacle depth value corresponding to the icon; that is, the larger the obstacle depth value corresponding to an icon, the higher its transparency, and conversely, the smaller the depth value, the lower its transparency.
  • the type of the icon may be determined according to the amount of information carried by the information of the obstacle; specifically, when the amount of information is relatively large, a circular icon may be used, and when it is relatively small, a triangular icon may be used.
  • the corresponding area on the real-time image may be flashed, where the flicker frequency may be determined according to the minimum obstacle depth value in the information of the obstacle; specifically, the minimum obstacle depth value is negatively correlated with the flicker frequency.
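The icon-parameter mappings above (size, color, and flicker frequency anti-correlated with depth; transparency positively correlated with depth) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the ranges, the 20 m saturation depth, and the function name are assumptions:

```python
def icon_params(depth_m, max_depth_m=20.0):
    """Derive icon parameters from an obstacle depth value: nearer
    obstacles get a bigger, darker, more opaque, faster-blinking icon."""
    d = max(0.0, min(depth_m, max_depth_m))
    closeness = 1.0 - d / max_depth_m               # 1.0 = very close
    return {
        "size_px":  int(8 + 40 * closeness),         # size shrinks as depth grows
        "gray":     int(200 * (1 - closeness)),      # color gets lighter as depth grows
        "alpha":    round(1.0 - 0.8 * (1 - closeness), 2),  # opacity drops (transparency grows) with depth
        "blink_hz": round(0.5 + 4.5 * closeness, 1),        # flicker slows as depth grows
    }
```

A nearby obstacle thus produces a large, dark, nearly opaque icon flashing quickly, while a distant one produces a small, pale, faint icon flashing slowly.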
  • since the user may not notice the change of the display content on the real-time image in time, in order to enable the user to learn of obstacles promptly, corresponding voice prompt information can be generated according to the information of the obstacle, and the voice prompt information is then played.
  • the passability information of the gap between obstacles transmitted by the drone is received, and then the passability information is mapped onto the live image displayed by the interactive device.
  • specifically, the drone can transmit the passability information of the gap between obstacles to the control terminal, and the control terminal can map the passability information onto the real-time image displayed by the interaction device. As shown in FIG. 8, the obstacle 501 has a through hole 503; the drone can acquire the size of the through hole 503 and transmit its passability information to the control terminal. For example, the control terminal may map the safety factor (0.3) in the passability information onto the through hole 503, and may also map the passability information onto the through hole 503 in the form of an icon; for example, when the safety factor is lower than a preset safety factor threshold, a mapped icon, such as the dotted circular icon in FIG. 8, can indicate to the user that the through hole 503 cannot be passed through, so that the user can intuitively determine, by observing the real-time image, whether the drone can pass through the gap between or in obstacles.
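A safety-factor evaluation of this kind could be sketched as below. The patent only states that passability may be expressed as a safety factor compared against a threshold; the specific formula, margin, and 0.5 cut-off here are illustrative assumptions:

```python
def gap_passability(gap_width_m, drone_width_m, margin_m=0.3):
    """Evaluate whether the drone fits through a gap measured from the
    obstacle map, returning a safety factor in [0, 1] and a verdict."""
    clearance = gap_width_m - drone_width_m
    # Safety factor: 0 when the drone barely (or doesn't) fit, saturating
    # at 1.0 once the clearance reaches twice the desired margin.
    safety = max(0.0, min(1.0, clearance / (2 * margin_m)))
    return {"safety_factor": round(safety, 2), "passable": safety >= 0.5}
```

A through hole only slightly wider than the drone would then yield a low safety factor (such as the 0.3 of FIG. 8) and be marked as not passable.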
  • flight-path-related information, sent by the drone, for avoiding the obstacle that triggers the obstacle avoidance operation is received, and then the related information is mapped onto the real-time image displayed by the interaction device.
  • specifically, the UAV can determine, by querying the obstacle map, a flight path for avoiding the obstacle 501 that triggers the avoidance operation, and can transmit the path-related information to the control terminal.
  • the control terminal can map the flight-path-related information onto the real-time image displayed by the interaction device; as shown in FIG. 9, the control terminal can map the flight path 504 onto the real-time image displayed by the interaction device.
  • in this way, the user can know along which flight path the drone will fly around the obstacle that triggered the avoidance operation.
  • for the determination process of the foregoing flight path, reference may be made to the corresponding content disclosed in the foregoing embodiments, and details are not described herein again.
  • the information of the corresponding obstacle is mapped onto the real-time image captured by the photographing device on the drone, so that when the drone triggers an obstacle avoidance operation, the corresponding obstacle information appears on the real-time image, prompting the user to realize that the drone is currently performing obstacle avoidance rather than mistakenly thinking that the drone itself has failed, which improves the user's actual experience.
  • FIG. 10 is a schematic structural diagram of a control device for a drone according to an embodiment of the present invention.
  • the control device 1000 of the drone includes:
  • a depth sensor 1001, configured to acquire depth data of obstacles in the flight space;
  • the one or more processors 1002 working alone or in cooperation, are configured to: determine information of an obstacle that triggers an obstacle avoidance operation based on the depth data, and transmit the information of the obstacle to a control terminal of the drone.
  • the processor 1002 is configured to: establish an obstacle map according to the depth data, and obtain information about an obstacle that triggers an obstacle avoidance operation from the obstacle map.
  • the processor 1002 is further configured to: determine whether to trigger an obstacle avoidance operation according to the obstacle map.
  • the processor 1002 is specifically configured to: determine whether to trigger the obstacle avoidance operation according to the obstacle map and the position of the drone.
  • the depth sensor 1001 is specifically configured to acquire multiple frames of depth data of obstacles in the flight space;
  • the processor 1002 is specifically configured to establish an obstacle map according to the multi-frame depth data.
  • the processor 1002 is configured to: configure, according to a preset operation model, corresponding weights for at least two frames of the multi-frame depth data, and establish the obstacle map according to the configured weights and the corresponding depth data.
  • the preset operational model includes a measurement accuracy model of the depth sensor.
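Such a weighting scheme could be sketched as below. This is an assumed illustration: the description only says that frames measured near the sensor's most accurate distance (about 1 m for a TOF camera) should be weighted higher, so the bell-shaped model, its parameters, and the function names are hypothetical:

```python
import math

def frame_weight(distance_m, best_m=1.0, spread_m=1.5):
    """Weight a depth frame by a simple bell-shaped accuracy model:
    measurements taken near the sensor's sweet-spot distance (~1 m for
    the TOF camera mentioned in the description) get weights near 1."""
    return math.exp(-((distance_m - best_m) / spread_m) ** 2)

def fuse_depths(frames):
    """Fuse (sensing_distance, measured_depth) frame pairs into one
    weighted depth estimate for a map cell."""
    weights = [frame_weight(d) for d, _ in frames]
    return sum(w * z for w, (_, z) in zip(weights, frames)) / sum(weights)
```

With this model, a frame taken 1 m from the obstacle dominates the fused estimate over a frame taken 4 m away, so the map cell mostly reflects the more accurate measurement.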
  • the processor 1002 is configured to: acquire position information and/or attitude information of the UAV, and establish the obstacle map according to the depth data and the position information and/or attitude information of the UAV.
  • the processor 1002 is specifically configured to: perform pre-processing on the acquired depth data, and establish an obstacle map according to the pre-processed depth data.
  • the pre-processing comprises a morphological filtering process.
  • the processor 1002 is further configured to: determine an obstacle map portion to be deleted in the obstacle map, and delete the obstacle map portion to be deleted.
  • the processor 1002 is specifically configured to: determine the obstacle map portion to be deleted according to one or more of position information of the drone and a preset distance threshold.
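A distance-threshold pruning of this kind could be sketched as below — a minimal illustration assuming the map is stored as a dictionary of occupied voxel cells (a representation the patent does not specify):

```python
def prune_map(voxels, drone_pos, keep_radius_m=50.0):
    """Keep only obstacle-map cells within keep_radius_m of the drone's
    current position; cells farther away are dropped to save storage."""
    px, py, pz = drone_pos
    return {
        (x, y, z): occ for (x, y, z), occ in voxels.items()
        if (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2 <= keep_radius_m ** 2
    }
```

Calling this periodically with the drone's current position retains only the map region around the drone, matching the storage-saving behavior described above.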
  • the processor 1002 is further configured to: determine distance information of a gap between obstacles according to the obstacle map, determine passability information of the gap according to the distance information, and send the passability information to the control terminal.
  • the processor 1002 is further configured to: determine, according to the obstacle map, a flight path that avoids an obstacle that triggers the obstacle avoidance operation.
  • the processor 1002 is further configured to: send information about the flight path to the control terminal.
  • FIG. 11 is a schematic structural diagram of a device for prompting an obstacle according to an embodiment of the present invention.
  • a communication interface 1101 configured to receive a real-time image captured by a photographing device on the drone, and receive information of an obstacle that is triggered by the drone to trigger an obstacle avoidance operation;
  • the processor 1102 is configured to: map the information of the obstacle on the obstacle that triggers the obstacle avoidance operation in the real-time image.
  • the processor 1102 is configured to: map the information of the obstacle onto the edge of the real-time image displayed by the interaction device.
  • the processor 1102 is configured to: acquire the attitude information of the photographing device and/or the parameter information of the photographing device sent by the drone, and map the information of the obstacle onto the real-time image displayed by the interaction device according to the attitude information and/or the parameter information of the photographing device.
  • the parameter information of the photographing device includes one or more of the focal length, FOV, intrinsic parameters, and extrinsic parameters of the photographing device.
  • the processor 1102 is specifically configured to: map the information of the obstacle to the real-time image displayed by the interaction device in the form of an icon.
  • one or more icon parameters of the type, size, color, and transparency of the icon are parameters determined according to information of the obstacle.
  • the communication interface 1101 is configured to receive passability information of a gap between obstacles sent by the drone;
  • the processor 1102 is further configured to: map the passability information to the real-time image displayed by the interaction device.
  • the communication interface 1101 is configured to receive flight-path-related information, sent by the UAV, for avoiding the obstacle that triggers the obstacle avoidance operation;
  • the processor 1102 is further configured to: map the related information to the real-time image displayed by the interaction device.
  • An embodiment of the present invention provides a drone, wherein the drone includes:
  • a fuselage;
  • a power system fixed to the fuselage, for providing flight power;
  • the control device of the drone described above.
  • with this drone, the information of the corresponding obstacle is mapped onto the real-time image captured by the photographing device on the drone, so that when the drone triggers an obstacle avoidance operation, the corresponding obstacle information appears on the real-time image, prompting the user to realize that the drone is currently performing obstacle avoidance rather than mistakenly thinking that the drone itself has failed, which improves the user's actual experience.
  • the embodiment of the invention further discloses a control method for a drone, comprising:
  • Step S1201 Acquire a real-time image captured by the photographing device
  • Step S1202 acquiring depth data of an obstacle in the flight space
  • Step S1203 determining, according to the depth data, information of an obstacle that triggers the obstacle avoidance operation
  • Step S1204 Map the information of the obstacle onto the real-time image.
  • Step S1205 Send the mapped real-time image to the control terminal.
  • the processor of the drone may map the information of the obstacle that triggers the obstacle avoidance to the real-time image.
  • the information of the obstacle may be mapped onto the real-time image in the form of an icon.
  • the information of the obstacle is mapped on the obstacle that triggers the obstacle avoidance operation in the real-time image; the information of the obstacle is mapped on the edge of the real-time image.
  • the mapping process in this embodiment is performed on the drone side, whereas the mapping process in the foregoing embodiment is performed on the control terminal side.
  • the manner of constructing the obstacle map using the depth data and of mapping the information of the obstacle onto the real-time image is the same as in the foregoing embodiments; therefore, reference may be made to the corresponding parts of the foregoing embodiments, and details are not described herein again.
  • the order of step S1201 and step S1202 is not specifically limited; they may be performed sequentially or simultaneously.
  • the embodiment of the invention further discloses a method for prompting an obstacle, comprising:
  • the live image is displayed on an interactive device.
  • the embodiment further discloses a control device for the drone, comprising:
  • a depth sensor for acquiring depth data of an obstacle in a flight space
  • a processor configured to: determine, according to the depth data, information of an obstacle that triggers an obstacle avoidance operation, map the information of the obstacle onto the real-time image, and send the mapped real-time image to the control terminal.
  • the processor of the drone may map the information of the obstacle that triggers the obstacle avoidance to the real-time image.
  • the processor may map the information of the obstacle to the real-time image in the form of an icon.
  • the processor maps the information of the obstacle on the obstacle that triggers the obstacle avoidance operation in the real-time image; the processor maps the information of the obstacle to the edge of the real-time image .
  • the mapping process in this embodiment is performed on the drone side, whereas the mapping process in the foregoing embodiment is performed on the control terminal side.
  • the manner in which the processor constructs the obstacle map using the depth data and maps the information of the obstacle onto the real-time image is the same as in the foregoing embodiments; therefore, reference may be made to the corresponding parts of the foregoing embodiments, and details are not described herein again.
  • the embodiment of the invention further discloses a prompting device for an obstacle, comprising:
  • a communication interface configured to receive a real-time image sent by the drone, wherein the real-time image is mapped with information of an obstacle that triggers an obstacle avoidance operation of the drone;
  • a processor for displaying the real-time image on an interactive device.
  • the embodiment of the invention further provides a drone, comprising:
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of units is only a logical functional division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • the above software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform part of the steps of the methods of the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)
  • Instructional Devices (AREA)

Abstract

A control method and device for a drone, an obstacle prompting method and device, and a drone. The control method includes: acquiring depth data of obstacles (205, 206) in the flight space, determining, according to the depth data, information of an obstacle that triggers an obstacle avoidance operation, and sending the information of the obstacle to the control terminal (204) of the drone. Specifically, the information of the obstacle that triggers the avoidance operation is determined according to the depth data and then sent to the control terminal; upon receiving it, the control terminal maps the information of the obstacle onto the real-time image displayed by the interaction device. In this way, when the drone triggers an obstacle avoidance operation, the user can realize in time that the drone is currently performing obstacle avoidance rather than mistakenly believing that the drone itself has failed, so that the user can keep abreast of the drone's current flight status.

Description

Control Method and Device for a Drone, and Obstacle Prompting Method and Device

Technical Field

The present invention relates to the technical field of drones, and in particular to a control method and device for a drone, an obstacle prompting method and device, and a drone.

Background

At present, drone technology is developing rapidly and its range of applications keeps widening. To improve flight safety, obstacle avoidance functions have been widely added to existing drones, protecting both the drone and the people or objects on the ground.

Currently, when a drone encounters an obstacle, its intelligent control system controls the drone to perform a corresponding avoidance operation, such as flying around or hovering, according to information acquired by sensors, thereby preventing the drone from colliding with the obstacle. However, while the drone performs the avoidance operation, operation instructions entered by the user through the control terminal are blocked by the intelligent control system. Because the prompt given by the interaction device on the control terminal is unclear, the user easily gets the illusion that the drone has malfunctioned or gone out of control, and thus cannot understand the drone's current operating state. The lack of an effective obstacle prompting method reduces the usefulness of the drone in some situations.
Summary of the Invention

In view of this, an object of the present invention is to provide a control method and device for a drone, an obstacle prompting method and device, and a drone, so that the user can keep abreast of the drone's current flight status in time.

A first aspect of the embodiments of the present invention provides a control method for a drone, including:

acquiring depth data of obstacles in the flight space;

determining, according to the depth data, information of an obstacle that triggers an obstacle avoidance operation;

sending the information of the obstacle to a control terminal of the drone.

A second aspect of the embodiments of the present invention provides an obstacle prompting method, including:

receiving a real-time image captured by a photographing device on the drone;

receiving information, sent by the drone, of an obstacle that triggers an obstacle avoidance operation;

mapping the information of the obstacle onto the real-time image displayed by an interaction device.

A third aspect of the embodiments of the present invention provides a control device for a drone, including:

a depth sensor that acquires depth data of obstacles in the flight space;

a processor configured to: determine, according to the depth data, information of an obstacle that triggers an obstacle avoidance operation, and send the information of the obstacle to a control terminal of the drone.

A fourth aspect of the embodiments of the present invention provides an obstacle prompting device, including:

a communication interface for receiving a real-time image captured by a photographing device on the drone and receiving information, sent by the drone, of an obstacle that triggers an obstacle avoidance operation;

a processor configured to: map the information of the obstacle onto the real-time image displayed by an interaction device.

A fifth aspect of the embodiments of the present invention provides a drone, including:

a fuselage;

a power system fixed to the fuselage, for providing flight power;

the control device of the drone according to the third aspect.

It can be seen from the technical solutions disclosed above that, in the embodiments of the present invention, when an obstacle triggers the obstacle avoidance operation of the drone, the information of the obstacle that triggers the operation is determined according to the depth data of obstacles in the flight space, and the determined information is then sent to the control terminal of the drone. Upon receiving the information of the obstacle that triggers the avoidance operation, the control terminal maps it onto the real-time image displayed by the interaction device. Thus, when an obstacle triggers the avoidance operation of the drone, the user, by viewing the real-time image with the obstacle information mapped onto it, knows that the drone is currently performing obstacle avoidance rather than mistakenly believing that the drone has malfunctioned, and also learns the information of the triggering obstacle, so that the user can keep abreast of the drone's current flight status. This provides the user with a good obstacle prompting manner.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are merely embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.

FIG. 1 is a flowchart of a control method for a drone disclosed in an embodiment of the present invention;

FIG. 2 is a schematic diagram of the flight control process of a drone in the flight space disclosed in an embodiment of the present invention;

FIG. 3 is a flowchart of a specific control method for a drone disclosed in an embodiment of the present invention;

FIG. 4 is a flowchart of an obstacle prompting method disclosed in an embodiment of the present invention;

FIG. 5 to FIG. 9 are schematic diagrams of real-time image display effects disclosed in embodiments of the present invention;

FIG. 10 is a schematic structural diagram of a control device for a drone disclosed in an embodiment of the present invention;

FIG. 11 is a schematic structural diagram of an obstacle prompting device disclosed in an embodiment of the present invention.
Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

An embodiment of the present invention discloses a control method for a drone. Referring to FIG. 1, the method includes:

Step S101: Acquire depth data of obstacles in the flight space.

Specifically, referring to FIG. 2, the control terminal 204 controls the drone to fly in the flight space, and a depth sensor 201 is mounted on the drone. For example, the depth sensor 201 may be mounted on the nose of the drone, on the gimbal of the drone, or on the belly or back of the drone; the depth sensor 201 being mounted inside the fuselage of the drone as shown in FIG. 2 is for illustration only. When the drone flies in the flight space, the depth sensor 201 on the drone acquires depth data of the obstacles 205 and 206 in the flight space, and a depth image can be obtained from the depth data. The depth sensor that acquires the depth data may be any sensor capable of acquiring depth data; the depth sensor may specifically be a TOF camera, an RGB camera, a binocular camera, a monocular camera, a lidar, or the like. After the depth data or depth image of the obstacles is acquired, point cloud information of the obstacles 205 and 206 can be obtained through corresponding conversion; from the point cloud information, the position information, contour information, size information, and the like of the obstacles 205 and 206 in the current flight space can be known.

In addition, a payload 203 is arranged on the drone, and the payload 203 is connected to the fuselage of the drone through a carrier 202, where the payload may be a photographing device, and the carrier 202 may be a stabilizing component for the photographing device, such as a gimbal. The photographing device can photograph objects in the flight space, and the drone can transmit the real-time image captured by the photographing device to the control terminal 204 of the drone through the drone's downlink data link. The control terminal 204 may be configured with an interaction device, and the interaction device can display the real-time image. The control terminal in this embodiment may be a laptop computer, a tablet computer, a smartphone, a wearable device (watch, wristband), a ground control station, or the like, and combinations thereof; the interaction device may include a display (for example, a touch display), a smartphone, a tablet computer, or the like.

Step S102: Determine, according to the depth data, information of an obstacle that triggers an obstacle avoidance operation.
In this embodiment, referring to FIG. 2, the depth data acquired by the depth sensor 201 reflects the distance between the drone and the different obstacles 205 and 206 in the current flight space. During flight, when the drone approaches an obstacle in the flight space and the distance between the drone and the obstacle is less than or equal to a preset distance threshold, for example, when the distance between the drone and the obstacle 205 is less than or equal to the preset distance threshold, the obstacle 205 triggers the obstacle avoidance operation of the drone. For flight safety, the drone will fly around the obstacle 205 or hover in front of it, and at this time the drone determines, according to the acquired depth data, the information of the obstacle 205 that triggered the avoidance operation. The information of the obstacle 205 may include at least one of the position information of the obstacle 205, the depth data of the obstacle 205, and a classification label of the obstacle 205 (classification labels are introduced later herein and are not described here).
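The distance-threshold trigger described above can be sketched as follows — a minimal illustration, assuming obstacles are tracked as identified points with positions (the identifiers, data layout, and function name are hypothetical):

```python
import math

def obstacle_triggering_avoidance(drone_pos, obstacles, threshold_m=5.0):
    """Return the (id, distance) of the nearest obstacle whose distance to
    the drone is <= the preset threshold, or None if nothing triggers."""
    hits = [(oid, math.dist(drone_pos, pos))
            for oid, pos in obstacles.items()
            if math.dist(drone_pos, pos) <= threshold_m]
    return min(hits, key=lambda h: h[1]) if hits else None
```

When this returns a hit, the drone would start the avoidance operation (fly around or hover) and report that obstacle's information to the control terminal.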
Step S103: Send the information of the obstacle to the control terminal of the drone.

It can be understood that, after determining the information of the obstacle 205 that triggered the obstacle avoidance operation, the drone may send the information of the obstacle 205 to the control terminal 204 through the downlink data link. After obtaining the information of the obstacle 205, the control terminal may adjust the display content and/or display manner of the interaction device on the control terminal according to that information, so that the adjusted display content and/or display manner serves to prompt the user. Specifically, the information of the obstacle 205 may be mapped onto the real-time image, received by the control terminal 204, that was captured by the photographing device on the drone. The control terminal 204 may be configured with an interaction device and may map the information of the obstacle onto the real-time image displayed by the interaction device. By observing the mapped image, the user knows that the obstacle currently triggering avoidance is the obstacle 205 and that the drone is currently performing an avoidance operation on the obstacle 205; that is, the user is prompted that the drone's current uncontrolled behavior is caused by the avoidance operation rather than by a malfunction of the drone.

It can be seen from the technical solutions disclosed above that, when the drone triggers an obstacle avoidance operation, the embodiment of the present invention determines the information of the triggering obstacle according to the depth data of obstacles in the flight space and sends the determined information to the control terminal of the drone, and the control terminal maps the information of the obstacle onto the real-time image displayed by the interaction device. Thus, when an avoidance operation is triggered, the user learns through the control terminal which obstacle in the flight space triggered the avoidance, realizes in time that the drone is currently performing obstacle avoidance rather than mistakenly believing the drone itself has malfunctioned, and can keep abreast of the drone's current flight status.

On the basis of the control method for a drone disclosed in the embodiment of FIG. 1, and referring to FIG. 3, an embodiment of the present invention discloses a specific control method for a drone, including the following steps:

Step S201: Acquire depth data of obstacles in the flight space.
In this embodiment, considering that the depth data acquired by the depth sensor usually contains noise and erroneous information caused by false detections, which leads to problems such as discontinuity and missing information in the data, this embodiment may preprocess the depth data correspondingly after acquiring the depth data of obstacles in the flight space; here the depth data may specifically be a depth image. For example, morphological filtering may be performed on the depth data to break narrow gaps between obstacles and/or remove isolated noise points and/or smooth the contours of obstacles, and the subsequent steps may all be based on the preprocessed depth data.
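The noise-removal effect of morphological filtering can be illustrated with a tiny pure-Python sketch of a 3×3 opening (erosion followed by dilation) on a binary occupancy grid; a real implementation would typically use an image-processing library and operate on the depth image itself:

```python
def _neighbor(grid, r, c):
    h, w = len(grid), len(grid[0])
    return grid[r][c] if 0 <= r < h and 0 <= c < w else 0

def erode(grid):
    """3x3 erosion: a cell stays 1 only if its whole 3x3 neighborhood is 1."""
    h, w = len(grid), len(grid[0])
    return [[1 if all(_neighbor(grid, r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)) else 0
             for c in range(w)] for r in range(h)]

def dilate(grid):
    """3x3 dilation: a cell becomes 1 if any neighbor is 1."""
    h, w = len(grid), len(grid[0])
    return [[1 if any(_neighbor(grid, r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)) else 0
             for c in range(w)] for r in range(h)]

def opening(grid):
    """Morphological opening: removes isolated noise, keeps solid blobs."""
    return dilate(erode(grid))
```

An isolated 1-pixel "detection" vanishes under opening, while a solid 3×3 obstacle block survives intact.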
In some embodiments, since obstacles in the flight space are continuous, connected regions in the flight space can be searched for according to the depth data; by finding the connected regions in space, the obstacles in the flight space are extracted. Further, the obstacles can be segmented block-wise by connected region so as to distinguish different obstacles, and different obstacles can then be given classification labels. By processing the connected regions, a classification label can indicate the kind of obstacle, for example indicating that an obstacle is a person, a car, a building, etc.

Step S202: Establish an obstacle map according to the depth data.

Specifically, after the depth data of the obstacles is acquired, three-dimensional modeling of the obstacles in the flight space is performed based on the depth data to obtain the obstacle map. Specifically, the drone acquires its position information and/or attitude information through a sensor system and establishes the obstacle map according to the depth data and the position information and/or attitude information of the drone, where the attitude of the depth sensor can be determined from the attitude information of the drone. It should be noted that the depth data involved in this step is preferably the preprocessed depth data described above.

In some embodiments, in the process of establishing the obstacle map, the map may specifically be established according to multiple frames of depth data. Specifically, the depth sensor on the drone may acquire depth data of the flight space at a certain frame rate; as the drone flies through the flight space, the depth sensor acquires depth data of obstacles at different positions. From the multiple frames of depth data acquired by the depth sensor at different positions, combined with the position information and/or attitude information of the drone as described above, an obstacle map corresponding to everywhere the drone has flown can be obtained. In addition, in some cases, certain obstacles in the flight space may move, causing their positions in the flight space to change. Moreover, one frame of depth data can only reflect the spatial position of obstacles at one moment, and one frame of depth data can only reflect the spatial distribution of obstacles when the drone is at one position; therefore, by combining multiple frames of depth data acquired by the depth sensor, the spatial positions of obstacles at different moments can be obtained, i.e., the distribution of obstacles in space can be determined more comprehensively. When it is determined that an obstacle has moved, the drone can use the latest depth data acquired by the depth sensor to update the obstacle map; the obstacle map can then reflect the change of obstacle positions in the flight space while sensing the complete spatial distribution of obstacles, ensuring the accuracy of the obstacle map.
Step S203: Obtain, from the obstacle map, information of the obstacle that triggers the obstacle avoidance operation.

Specifically, after the obstacle map is established, the drone already knows the position distribution of obstacles in the flight space, and whether to trigger an avoidance operation can be determined according to the established obstacle map and the position of the drone. When flying in the flight space, the drone can query the obstacle map according to its own position information, attitude information, and the like, to obtain information about the obstacles around the drone, i.e., to obtain the distance between the drone and every obstacle around it. When the distance between the drone and some surrounding obstacle is less than or equal to the preset distance threshold, it can thereby be determined that an obstacle has currently triggered the avoidance operation of the drone; at this time, the drone can query the obstacle map to determine which obstacle in the flight space specifically triggered the avoidance at the current position, and the position information, depth data, classification label, etc. of the triggering obstacle can also be obtained from the obstacle map.

Step S204: Send the information of the obstacle to the control terminal of the drone.

The specific method and principle of step S204 are the same as those of step S103 and are not repeated here.
In some embodiments, corresponding weights are configured for at least two frames of the multiple frames of depth data according to a preset operation model, and the obstacle map is established according to the configured weights and the depth data. Specifically, at different depth-data measurement moments, the distance between the drone and an obstacle in the flight space differs, i.e., the distance between the depth sensor and the obstacle differs, and the measurement accuracy of the depth sensor differs at different measurement distances. For example, the depth data measured by a TOF camera is most accurate when the camera is about 1 m from the obstacle, and the measurement accuracy corresponding to each of the multiple frames acquired by the depth sensor differs. Therefore, this embodiment can configure corresponding weights, according to a preset operation model, for the depth data measured at different moments: when the obstacle lies within the distance range where the depth sensor's measurement accuracy is high, a higher weight is configured for that acquired frame of depth data; when the obstacle lies within a distance range where the measurement accuracy is low, a lower weight is configured for that frame. Establishing the obstacle map from the weighted multi-frame depth data makes the constructed map more accurate. The preset operation model is specifically the measurement accuracy model of the depth sensor.

Further, one frame of depth data may contain depth data corresponding to multiple obstacles, where the multiple obstacles may be at different distances from the depth sensor, the measurement accuracy of the depth data corresponding to each obstacle may differ, and the measurement accuracy of depth data corresponding to obstacles in different measurement-distance ranges also differs. Therefore, this embodiment can perform block-wise processing on the different obstacles in the same frame of depth data; through this blocking, the different obstacles are separated, and a corresponding weight can then be configured for the depth corresponding to each obstacle according to the distance from the depth sensor to the obstacle and the accuracy model of the depth sensor. Constructing the obstacle map from the weighted depth data gives the obstacle map higher accuracy.

Specifically, before the above process of three-dimensional modeling of obstacles in the flight space using the connected regions, this embodiment may perform block-wise processing on the connected regions and then, according to the relationship model between the measurement accuracy and measurement distance of the depth sensor, assign a corresponding weight to each block obtained by the blocking, so that the weight of each block can be taken into account in the subsequent three-dimensional modeling. To reduce the amount of data processing, this embodiment may also delete from the connected regions the blocks whose weights are below a preset weight threshold.
In some embodiments, the obstacle map portion to be deleted in the obstacle map is determined, and the portion to be deleted is deleted. Specifically, as the drone's flight range in the flight space grows, the obstacle map constructed from the depth data also grows, so the drone needs more storage space to store the constructed map. Since the drone's storage space is limited, storing the obstacle maps of all regions the drone has flown through in the drone's storage space would waste storage resources. It is therefore necessary to delete the constructed obstacle map in specific regions, where a specific region may be a region far from the drone's current position, or a region the drone has not re-entered within a preset time; the obstacle map of the specific region is determined, and the obstacle map of the specific region is deleted to save the drone's storage resources.

Further, the obstacle map portion to be deleted may be determined according to one or more of the position information of the drone and a preset distance threshold. Specifically, during flight, the drone continuously acquires depth data of obstacles through the depth sensor it carries and constructs the obstacle map from the depth data of the obstacles. The drone may save the obstacle map within a specific region that includes the drone's current position and determine the obstacle map of the regions outside the specific region as the obstacle map portion to be deleted. Further, the specific region may also be determined according to a preset distance threshold; for example, the specific region may be the region centered on the drone's current position with the preset distance threshold as radius. The drone saves the obstacle map within this region and deletes the constructed obstacle map in the other regions outside it. In this way, during flight the drone only needs to save the obstacle map of the specific region containing its current position, effectively saving the drone's storage resources.

In some embodiments, the distance information of gaps between obstacles is determined according to the obstacle map, the passability information of the gaps is then evaluated according to the distance information, and the passability information is sent to the control terminal, where the passability information may include the passability result and the gap distance (minimum gap distance, maximum gap distance, average gap distance, etc.). Specifically, a drone often passes between obstacles during flight. Without a constructed obstacle map, the real-time image the control terminal receives from the drone contains only RGB information, and the user cannot obtain the passability between obstacles through the control terminal; thus, if the user misjudges when flying between obstacles, the drone may collide with an obstacle and cause a flight accident. As described above, provided that the obstacle map of the current flight space has been constructed, the drone can determine the distance information of gaps between obstacles according to the obstacle map, evaluate the drone's passability according to the distance information, and send the evaluated passability result to the control terminal, where passability may be expressed in the form of a safety factor. In addition, the drone can also send the distance information between obstacles to the control terminal, and the control terminal can map the passability result and the distance information of the gaps between obstacles onto the real-time image displayed by the interaction device. The user can then intuitively judge, from the passability information on the real-time image displayed on the control terminal, whether the drone can pass through the gap between two particular obstacles or a gap in an obstacle, improving the safety with which the user operates the drone.
In some embodiments, a flight path for avoiding the obstacle that triggered the avoidance operation is determined according to the obstacle map. Specifically, once the obstacle map is acquired, the drone has established a clear model of the obstacle distribution, obstacle contours, and obstacle sizes in the flight space. When an obstacle triggers the drone's avoidance operation, the drone can determine a flight path around that obstacle by querying the obstacle map, for example, determining the shortest avoidance path according to the obstacle map, such as whether to go around to the left, to the right, or over the top of the obstacle, or determining the safest avoidance path according to the obstacle map: for instance, if querying the obstacle map shows other obstacles on both the left and right of the obstacle but none above it, going over the top of the obstacle can be determined as the safest flight path. The drone can query the obstacle map, determine the flight path for avoiding the triggering obstacle accordingly, and fly around the obstacle along the determined flight path.
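The left/right/over-the-top query described above can be sketched as follows — a minimal illustration assuming a voxel-set obstacle map and unit grid cells (the probe order, clearance value, and function name are assumptions; a real planner would also weigh path length):

```python
def choose_avoidance_direction(occupied, obstacle_cell, clearance=2):
    """Pick an avoidance direction by querying the obstacle map around the
    triggering obstacle: return the first of left/right/up whose probe
    cells (within `clearance` cells) are all free of obstacles."""
    x, y, z = obstacle_cell
    probes = {
        "left":  [(x - d, y, z) for d in range(1, clearance + 1)],
        "right": [(x + d, y, z) for d in range(1, clearance + 1)],
        "up":    [(x, y, z + d) for d in range(1, clearance + 1)],
    }
    for name in ("left", "right", "up"):
        if not any(cell in occupied for cell in probes[name]):
            return name
    return "hover"  # no clear direction: hold position in front of the obstacle
```

In the scenario from the description — other obstacles on both the left and right of the triggering obstacle but nothing above it — this query selects the over-the-top path.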
Further, the information related to the above flight path may also be sent to the control terminal. Specifically, after determining, by querying the obstacle map, the flight path for avoiding the obstacle that triggered the avoidance operation, the drone can send the flight-path-related information to the ground control terminal, and the control terminal can map that information onto the real-time image displayed by the interaction device, so that the user can learn on the real-time image the drone's avoidance flight path, the drone's flight direction along the path, and other information. At present, drones generally have depth sensors or other detection devices only at the nose and can only detect obstacles in the nose direction; obstacles in other directions of the drone are difficult to detect, which affects the safety of the drone. In this embodiment, the current position information of the drone and the above obstacle map are used to determine the information of the obstacles currently around the drone. Specifically, the drone can acquire its own position information and thereby determine its position in the constructed obstacle map; by querying the obstacle map near that position, the drone can obtain information about obstacles in any direction around it. When an obstacle in any direction triggers the drone's avoidance operation, the information of that obstacle can be sent to the control terminal, so that the user can more fully understand the information of the obstacles in the flight space where the drone is currently located.
本发明实施例还公开了一种障碍物的提示方法,参见图4所示,该方法包括:
步骤S401:接收无人机上的拍摄设备拍摄的实时图像。
具体地,无人机上配置拍摄设备,在无人机飞行过程中,拍摄设备对飞行空间中的目标对象进行拍摄以获取实时图像,无人机通过下行数据链路将所述实时图像发送给控制终端,控制终端可以配置交互装置,交互装置对拍摄设备获取的实时图像进行显示。
步骤S402:接收无人机发送的触发避障操作的障碍物的信息。
如前所述,在飞行空间中的障碍物触发了无人机的避障操作时,无人机将触发避障的障碍物的信息发送给控制终端,关于如何确定障碍物的信息,具体可以参考前述实施例中公开的相应内容,在此不再进行赘述。
步骤S403:将障碍物的信息映射到交互装置显示的实时图像上。
具体地，在控制终端接收到如图5所示的实时图像时，飞行空间中的障碍物触发无人机的避障操作，控制终端在接收无人机发送的障碍物的信息时，将障碍物的信息映射到交互装置显示的如图5所示的实时图像上，即以某种形式将障碍物的信息显示在交互装置显示的实时图像上，其中此处的显示并不局限于将无人机发送的障碍物的信息直接显示在实时图像上，也可以将障碍物的信息转换成其他的形式，例如图标(数字图标、文字图标、图形图标等及其组合)的形式，然后显示在实时图像上。进一步地，可以将障碍物的信息以图标的形式映射到交互装置显示的如图5所示的实时图像上，其中，可以通过如下几种可行的方式实现：
在一种具体的实施方式中，上述将障碍物的信息映射至交互装置显示的实时图像上的过程可以包括：将障碍物的信息映射在交互装置显示的实时图像中的触发避障操作的障碍物上。具体地，当控制终端接收到的实时图像中包括触发避障操作的障碍物时，即触发避障操作的障碍物在无人机的拍摄设备的拍摄范围内时，如图6所示，触发无人机避障操作的障碍物是障碍物501，可以将无人机发送的障碍物501的信息映射到交互装置显示的实时图像中的障碍物501上，即根据障碍物501的信息，在实时图像中的障碍物501上呈现出图标。这样，当实时图像中的某个障碍物上出现了图标时，用户便可以知道当前无人机正在进行避障操作，并且知道是实时图像中的哪个障碍物触发了无人机的避障操作。
在另一种具体的实施方式中，上述将障碍物的信息映射至交互装置显示的实时图像上的过程可以包括：将障碍物的信息映射在交互装置显示的实时图像的边缘。具体地，如前所述，无人机可以通过查询构建的障碍物地图，并结合自身的位置信息，对无人机的后方、侧方等没有设置避障设备的方向上的障碍物进行避障操作，在某些情况中，所述方向上的障碍物往往不在无人机的拍摄设备的拍摄范围之内，如图7所示，触发避障操作的障碍物不在实时图像中，此时，在接收到无人机发送的障碍物的信息后，根据障碍物的信息确定出障碍物与无人机的相对方位，将障碍物的信息以图标的形式映射在交互装置显示的实时图像的边缘，如图7所示，通过观察实时图像即可以知道，当前无人机正在进行避障操作，且触发避障操作的障碍物位于无人机的拍摄设备的拍摄方向的左边，这样即可知道触发避障的障碍物在无人机的哪个方位。
通过上述公开的技术方案可知,本发明实施例在无人机触发避障操作的情况下,控制终端将障碍物的信息映射到交互装置显示的实时图像上,这样当无人机触发避障操作时,用户将可以通过控制终端获知当前触发无人机避障的是飞行空间中的哪个障碍物,此时将会让用户及时地意识到当前无人机正在进行避障操作,而不会误以为无人机自身出现故障,使得用户能够及时了解无人机当前所处的飞行状态。
在某些实施例中,根据所述无人机上的拍摄设备的姿态信息、拍摄设备的参数信息、所述无人机的位置信息中的一种或多种将障碍物的信息映射到交互装置显示的实时图像上。具体地,在障碍物触发无人机的避障操作时,无人机可以将拍摄设备的姿态信息发送给控制终端。其中,拍摄设备的参数信息至少包括拍摄设备的焦距、FOV、内参、外参中的一种。
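利用拍摄设备的参数信息(如焦距、内参)将空间中的障碍物位置映射到实时图像上，可以用针孔相机模型作如下示意（Python 草图，仅为说明，非本申请方案的一部分；函数 project_to_image 及参数名 fx、fy、cx、cy 均为假设）：

```python
def project_to_image(p_cam, fx, fy, cx, cy):
    """针孔相机模型：将相机坐标系下的障碍物点 (X, Y, Z) 投影为像素坐标，
    用于把障碍物的信息映射到实时图像上的对应位置。
    Z <= 0 表示障碍物在相机后方、不在拍摄范围内，此时返回 None，
    可改为将障碍物的信息映射到实时图像的边缘。"""
    X, Y, Z = p_cam
    if Z <= 0:
        return None
    return (fx * X / Z + cx, fy * Y / Z + cy)
```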
在某些实施例中，如前所述可以以图标的形式将上述障碍物的信息映射至交互装置显示的上述实时图像。具体地，本实施例的图标可以是任何能够对触发避障操作的障碍物进行提示的图标，如图6和图7所示，以圆形点阵图标的形式将障碍物的信息映射到交互装置显示的实时图像上。另外，这里图标为圆形点阵图标只是为了进行示意性说明，本领域技术人员可以采用其他形式的图标，例如三角形点阵图标、四边形点阵图标、高亮图标、框选图标等等，其中高亮图标在实时图像中对触发避障操作的障碍物进行高亮或者高亮闪烁，或者高亮图标可以在实时图像的边缘进行高亮或者高亮闪烁。框选图标可以将实时图像中触发避障操作的障碍物框选出来，进一步地可以对框选图标进行高亮闪烁。另外，在图6中，在触发避障操作的障碍物501上，越靠近障碍物的边缘部分的图标，其透明度越高，对于图标透明度的设置，本领域技术人员可以采取其他方式，在这里不做具体的限定。另外，图标的大小、颜色等，本领域技术人员都可以根据需求和视觉效果进行设置，在这里不作具体的限定。
在某些实施例中,上述图标的种类、大小、颜色、透明度中的一种或多种图标参数具体为根据上述障碍物的信息确定的参数。这样,当实时图像上出现表示上述障碍物的信息的图标信息时,用户便可知道当前无人机触发了避障操作。在与上述障碍物的信息对应的图标信息中,图标的尺寸大小与该图标对应的障碍物深度数值之间可成负相关关系。也即,在上述图标信息中,某一图标所对应的障碍物深度数值越大,则该图标的尺寸越小,相反,若某一图标所对应的障碍物深度数值越小,则该图标的尺寸越大。另外,在与上述障碍物的信息对应的图标信息中,若某一图标所对应的障碍物深度数值越大,则该图标的颜色越浅,相反,若某一图标所对应的障碍物深度数值越小,则该图标的颜色越深。其次,在与上述障碍物的信息对应的图标信息中,图标的透明度与该图标对应的障碍物深度数值之间可成正相关关系。也即,若某一图标所对应的障碍物深度数值越大,则该图标的透明度越大,相反,若某一图标所对应的障碍物深度数值越小,则该图标的透明度越小。另外,根据上述障碍物的信息的信息量大小程度可以确定出图标的种类,具体的,若上述障碍物的信息的信息量比较大的情况下,所采用的图标的种类可以是圆形图标,若上述障碍物的信息的信息量比较小的情况下,所采用的图标的种类可以是三角形图标。
进一步的,在将上述障碍物的信息映射至上述实时图像的某个区域之后,还可以对上述实时图像上的相应区域进行闪烁处理,其中,可以根据上述障碍物的信息中最小障碍物深度数值的大小来确定相应的闪烁频率,具体的,上述最小障碍物深度数值的大小与闪烁频率之间呈负相关关系。
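上述"图标尺寸与深度负相关、透明度与深度正相关、闪烁频率与最小深度负相关"的对应关系可以用如下示意性 Python 片段表示（仅为说明，非本申请方案的一部分；其中的数值范围与具体换算公式均为假设）：

```python
def icon_params(depth_m, min_depth_m, max_depth_m=20.0, base_hz=4.0):
    """根据障碍物深度数值确定图标参数：深度越大图标越小、越透明；
    最小障碍物深度越小，相应区域的闪烁频率越高(负相关)。"""
    d = max(0.0, min(1.0, depth_m / max_depth_m))   # 归一化深度
    size = 32 * (1.0 - d) + 8                        # 深度越大，图标尺寸越小
    alpha = d                                        # 深度越大，透明度越大
    blink_hz = base_hz / max(min_depth_m, 0.5)       # 最小深度越小，闪烁越快
    return {"size": size, "alpha": alpha, "blink_hz": blink_hz}
```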
考虑到用户在操控无人机的过程中,经常需要抬头观察无人机的飞行状况,所以用户可能无法及时注意到实时图像上显示内容的变化,为了能够让用户更加及时地获知障碍物对无人机飞行状态的干扰,本发明实施例还可以在将障碍物的信息映射到交互装置显示的实时图像上的时候,根据上述障碍物的信息生成相应的语音提示信息,然后播放该语音提示信息。
在某些实施例中,接收无人机发送的障碍物之间的间隙的可通过性信息,然后将可通过性信息映射到交互装置显示的实时图像上。具体地,如前所述,无人机可以将障碍物之间的间隙的可通过性信息发送给控制终端,控制终端可以将可通过性信息映射在交互装置显示的实时图像中上,如图8所示,障碍物501上有一个通孔503,通过查询构建的地图,无人机可以获取通孔503的大小,无人机将通孔503的可通过性信息发送给控制终端,例如控制终端可以将可通过性信息中的安全系数(0.3)映射到通孔503上,另外还可以以图标的形式将可通过性信息映射到通孔503上,例如,当安全系数低于预设的安全系数阈值时,可以通过映射图标,如图8中的虚线圆形图标,指示用户该通孔503不可通过,这样,用户便可通过观察实时图像即可以直观地判定无人机是否能够通过障碍物之间的间隙。
在某些实施例中,接收无人机发送的避绕触发避障操作的障碍物的飞行路径相关信息,然后将相关信息映射到交互装置显示的实时图像上。具体地,如前所述,无人机可以通过查询障碍物地图,确定对触发避障操作的障碍物501进行避绕的飞行路径,无人机可以将所述路径相关信息发送到控制终端,控制终端可以将飞行路径相关信息映射到交互装置显示的实时图像上,如图9所示,控制终端可以将飞行路径504映射到交互装置显示的实时图像上。这样,通过观察实时图像,用户即可以知道无人机在进行避障时,将按照何种飞行路径对触发避障的障碍物进行绕飞。上述飞行路径的确定过程可以参考前述实施例中公开的相应内容,在此不再进行赘述。
通过上述公开的技术方案可知,本发明实施例在无人机触发避障操作的情况下,会将相应的障碍物的信息映射至无人机上的拍摄设备拍摄到的实时图像上,这样当无人机触发避障操作时,实时图像上将会出现相应的障碍物的信息,此时将会让用户及时地意识到当前无人机正在进行避障操作,而不会误以为无人机自身出现故障,从而改善了用户的实际使用体验。
本发明实施例提供一种无人机的控制设备,图10为本发明实施例提供的无人机的控制设备的结构示意图,所述无人机的控制设备1000,包括:
深度传感器1001，用于获取飞行空间中的障碍物的深度数据；
一个或多个处理器1002,单独或协同地工作,用于:根据所述深度数据确定触发避障操作的障碍物的信息,将所述障碍物的信息发送给无人机的控制终端。
可选地,所述处理器1002,具体用于:根据所述深度数据建立障碍物地图,从所述障碍物地图中获取触发避障操作的障碍物的信息。
可选地,所述处理器1002,还具体用于:根据所述障碍物地图确定是否触发避障操作。
可选地，所述处理器1002，具体用于：根据所述障碍物地图和所述无人机的位置确定是否触发避障操作。
可选地,所述深度传感器1001,具体用于获取飞行空间中多帧障碍物的深度数据;
所述处理器1002，具体用于根据所述多帧深度数据建立障碍物地图。
可选地,所述处理器1002,具体用于:根据预设的运算模型为所述多帧深度数据中的至少两帧深度数据配置相应的权值,根据所述配置的权值以及相应的深度数据建立障碍物地图。
可选地,所述预设的运算模型包括所述深度传感器的测量精度模型。
可选地,所述处理器1002,具体用于:获取所述无人机的位置信息和/或姿态信息,根据所述深度数据与所述无人机的位置信息和/或姿态信息建立障碍物地图。
可选地,所述处理器1002,具体用于:对获取的深度数据进行预处理,根据预处理后的深度数据建立障碍物地图。
可选地,所述预处理包括形态学滤波处理。
可选地,所述处理器1002,还具体用于:确定所述障碍物地图中的待删除的障碍物地图部分,将所述待删除的障碍物地图部分删除。
可选地,所述处理器1002,具体用于:根据所述无人机的位置信息、预设的距离阈值中的一种或多种来确定所述待删除的障碍物地图部分。
可选地,所述处理器1002,还具体用于:根据所述障碍物地图确定障碍物之间的间隙的距离信息,根据所述距离信息确定所述间隙的可通过性信息,将所述可通过性信息发送给所述控制终端。
可选地,所述处理器1002,还具体用于:根据所述障碍物地图确定避绕所述触发避障操作的障碍物的飞行路径。
可选地,所述处理器1002,还具体用于:将所述飞行路径相关的信息发送给所述控制终端。
本发明实施例提供一种障碍物的提示设备,图11为本发明实施例提供的障碍物的提示设备的结构示意图,所述障碍物的提示设备1100,包括:
通讯接口1101,用于接收无人机上的拍摄设备拍摄的实时图像,以及接收所述无人机发送的触发避障操作的障碍物的信息;
一个或多个处理器1102,单独或协同地工作,用于:将所述障碍物的信息映射到交互装置显示的所述实时图像上。
可选地,所述处理器1102,具体用于:将所述障碍物的信息映射在所述实时图像中的所述触发避障操作的障碍物上。
可选地，所述处理器1102，具体用于：将所述障碍物的信息映射在交互装置显示的所述实时图像的边缘。
可选地,所述处理器1102,具体用于:获取所述无人机发送的所述拍摄设备的姿态信息和/或所述拍摄设备的参数信息,根据所述拍摄设备的姿态信息和/或所述拍摄设备的参数信息将所述障碍物的信息映射到交互装置显示的所述实时图像上。
可选地,所述拍摄设备的参数信息包括:所述拍摄设备的焦距、FOV、内参、外参中的一种或多种。
可选地,所述处理器1102,具体被配置用于:将所述障碍物的信息以图标的形式映射到交互装置显示的所述实时图像上。
可选地,所述图标的种类、大小、颜色、透明度中的一种或多种图标参数为根据所述障碍物的信息确定的参数。
可选地,所述通讯接口1101,用于接收所述无人机发送的障碍物之间的间隙的可通过性信息;
所述处理器1102,还具体用于:将所述可通过性信息映射到交互装置显示的所述实时图像上。
可选地,所述通讯接口1101,用于接收所述无人机发送的避绕所述触发避障操作的障碍物的飞行路径相关信息;
所述处理器1102,还具体被配置用于:将所述相关信息映射到交互装置显示的所述实时图像上。
本发明实施例提供一种无人机,其中,所述无人机,包括:
机身;
固定在机身上的动力***,用于提供飞行动力;
如前所述的无人机的控制设备。
通过上述公开的技术方案可知,本发明实施例在无人机触发避障操作的情况下,会将相应的障碍物的信息映射至无人机上的拍摄设备拍摄到的实时图像上,这样当无人机触发避障操作时,实时图像上将会出现相应的障碍物的信息,此时将会让用户及时地意识到当前无人机正在进行避障操作,而不会误以为无人机自身出现故障,从而改善了用户的实际使用体验。
本发明实施例还公开一种无人机的控制方法,包括:
步骤S1201:获取拍摄设备拍摄的实时图像;
步骤S1202:获取飞行空间中的障碍物的深度数据;
步骤S1203:根据深度数据确定触发避障操作的障碍物的信息;
步骤S1204:将所述障碍物的信息映射到实时图像上。
步骤S1205:将映射后的实时图像发送给控制终端。
具体地，区别于前述实施例，本实施例中在根据深度数据确定触发避障操作的障碍物的信息后，无人机的处理器可以将触发避障的障碍物的信息映射到实时图像上，具体地，可以以图标的形式将障碍物的信息映射到所述实时图像上。进一步地，可以将所述障碍物的信息映射在所述实时图像中的所述触发避障操作的障碍物上，或者映射在所述实时图像的边缘。其中，与前述实施例相比，本实施例中映射处理是在无人机一侧进行，前述实施例中映射处理是在控制终端一侧进行，除了这点区别以外，利用深度数据构建障碍物地图、将障碍物的信息映射到实时图像的方法与前述实施例相同，故前述实施例中的对应部分，在本实施例中可以引用，在此不再赘述。
需要说明的是,步骤S1201和步骤S1202之间的先后顺序不做具体的要求,可以先后执行,也可以同时执行。
本发明实施例还公开一种障碍物的提示方法,包括:
接收无人机发送的实时图像，其中，所述实时图像中映射有触发无人机避障操作的障碍物的信息；
将所述实时图像显示在交互装置上。
本实施例还公开一种无人机的控制设备,包括:
拍摄设备,用于获取实时图像;
深度传感器,用于获取飞行空间中的障碍物的深度数据;
处理器,用于根据深度数据确定触发避障操作的障碍物的信息;
将所述障碍物的信息映射到实时图像上;
将映射后的实时图像发送给控制终端。
具体地，区别于前述实施例，本实施例中在根据深度数据确定触发避障操作的障碍物的信息后，无人机的处理器可以将触发避障的障碍物的信息映射到实时图像上，具体地，处理器可以以图标的形式将障碍物的信息映射到所述实时图像上。进一步地，处理器可以将所述障碍物的信息映射在所述实时图像中的所述触发避障操作的障碍物上，或者映射在所述实时图像的边缘。其中，与前述实施例相比，本实施例中映射处理是在无人机一侧进行，前述实施例中映射处理是在控制终端一侧进行，除了这点区别以外，处理器利用深度数据构建障碍物地图、将障碍物的信息映射到实时图像的方法与前述实施例相同，故前述实施例中的对应部分，在本实施例中可以引用，在此不再赘述。
本发明实施例还公开一种障碍物的提示设备,包括:
通讯接口，用于接收无人机发送的实时图像，其中，所述实时图像中映射有触发无人机避障操作的障碍物的信息；
处理器,用于将所述实时图像显示在交互装置上。
本发明实施例还提供一种无人机,包括:
机身;
固定在机身上的动力***,用于提供飞行动力;
如前所述的无人机的控制设备。
在本发明所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
上述以软件功能单元的形式实现的集成的单元,可以存储在一个计算机可读取存储介质中。上述软件功能单元存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本发明各个实施例所述方法的部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
本领域技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
最后应说明的是:以上各实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述各实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims (49)

  1. 一种无人机的控制方法,其特征在于,包括:
    获取飞行空间中的障碍物的深度数据;
    根据所述深度数据确定触发避障操作的障碍物的信息;
    将所述障碍物的信息发送给无人机的控制终端。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    根据所述深度数据建立障碍物地图;
    所述根据所述深度数据确定触发避障操作的障碍物的信息包括：
    从所述障碍物地图中获取触发避障操作的障碍物的信息。
  3. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    根据所述障碍物地图确定是否触发避障操作。
  4. 根据权利要求3所述的方法,其特征在于,
    所述根据所述障碍物地图确定是否触发避障操作包括:
    根据所述障碍物地图和所述无人机的位置确定是否触发避障操作。
  5. 根据权利要求2-4任一项所述的方法,其特征在于,
    所述获取飞行空间中的障碍物的深度数据包括:
    获取飞行空间中多帧障碍物的深度数据;
    所述根据所述深度数据建立障碍物地图包括:
    根据所述多帧深度数据建立障碍物地图。
  6. 根据权利要求5所述的方法,其特征在于,
    所述根据所述多帧深度数据建立障碍物地图包括:
    根据预设的运算模型为所述多帧深度数据中的至少两帧深度数据配置相应的权值,根据所述配置的权值以及相应的深度数据建立障碍物地图。
  7. 根据权利要求6所述的方法,其特征在于,
    所述预设的运算模型为深度传感器的测量精度模型。
  8. 根据权利要求2-7任一项所述的方法,其特征在于,所述方法包括:
    获取所述无人机的位置信息和/或姿态信息;
    所述根据所述深度数据建立障碍物地图包括:
    根据所述深度数据与所述无人机的位置信息和/或姿态信息建立障碍物地图。
  9. 根据权利要求2-8任一项所述的方法,其特征在于,所述方法还包括:
    对获取的深度数据进行预处理;
    所述根据所述深度数据建立障碍物地图包括:
    根据预处理后的深度数据建立障碍物地图。
  10. 根据权利要求9所述的方法,其特征在于,
    所述预处理包括形态学滤波处理。
  11. 根据权利要求2-10任一项所述的方法,其特征在于,所述方法还包括:
    确定所述障碍物地图中的待删除的障碍物地图部分,将所述待删除的障碍物地图部分删除。
  12. 根据权利要求11所述的方法,其特征在于,
    所述待删除的障碍物地图部分根据所述无人机的位置信息、预设的距离阈值中的一种或多种确定。
  13. 根据权利要求2-12任一项所述的方法,其特征在于,所述方法还包括:
    根据所述障碍物地图确定障碍物之间的间隙的距离信息;
    根据所述距离信息确定所述间隙的可通过性信息;
    将所述可通过性信息发送给所述控制终端。
  14. 根据权利要求2-13任一项所述的方法,其特征在于,所述方法还包括:
    根据所述障碍物地图确定避绕所述触发避障操作的障碍物的飞行路径。
  15. 根据权利要求14所述的方法,其特征在于,所述方法还包括:
    将所述飞行路径相关的信息发送给所述控制终端。
  16. 一种障碍物的提示方法,其特征在于,包括:
    接收无人机上的拍摄设备拍摄的实时图像;
    接收所述无人机发送的触发避障操作的障碍物的信息;
    将所述障碍物的信息映射到交互装置显示的实时图像上。
  17. 根据权利要求16所述的方法,其特征在于,
    将所述障碍物的信息映射到交互装置显示的实时图像上包括:
    将所述障碍物的信息映射在交互装置显示的所述实时图像中的所述触发避障操作的障碍物上。
  18. 根据权利要求16所述的方法,其特征在于,
    将所述障碍物的信息映射到交互装置显示的所述实时图像上包括:
    将所述障碍物的信息映射在交互装置显示的所述实时图像的边缘。
  19. 根据权利要求16-18任一项所述的方法,其特征在于,所述方法还包括:
    获取所述无人机发送的所述拍摄设备的姿态信息和/或所述拍摄设备的参数信息;
    所述将所述障碍物的信息映射到交互装置显示的所述实时图像上包括:
    根据所述拍摄设备的姿态信息和/或所述拍摄设备的参数信息将所述障碍物的信息映射到交互装置显示的所述实时图像上。
  20. 根据权利要求19所述的方法,其特征在于,
    所述拍摄设备的参数信息包括:所述拍摄设备的焦距、FOV、内参、外参中的一种或多种。
  21. 根据权利要求16-20任一项所述的方法,其特征在于,
    所述将所述障碍物的信息映射到交互装置显示的所述实时图像上包括:
    将所述障碍物的信息以图标的形式映射到交互装置显示的所述实时图像上。
  22. 根据权利要求21所述的方法，其特征在于，
    所述图标的种类、大小、颜色、透明度中的一种或多种图标参数为根据所述障碍物的信息确定的参数。
  23. 根据权利要求16-22任一项所述的方法,其特征在于,所述方法还包括:
    接收所述无人机发送的障碍物之间的间隙的可通过性信息;
    将所述可通过性信息映射到交互装置显示的所述实时图像上。
  24. 根据权利要求16-23任一项所述的方法,其特征在于,所述方法还包括:
    接收所述无人机发送的避绕所述触发避障操作的障碍物的飞行路径相关信息;
    将所述相关信息映射到交互装置显示的所述实时图像上。
  25. 一种无人机的控制设备,其特征在于,包括:
    深度传感器,获取飞行空间中的障碍物的深度数据;
    一个或多个处理器,单独或协同地工作,用于:根据所述深度数据确定触发避障操作的障碍物的信息,将所述障碍物的信息发送给无人机的控制终端。
  26. 根据权利要求25所述的控制设备,其特征在于,
    所述处理器,具体用于:根据所述深度数据建立障碍物地图,从所述障碍物地图中获取触发避障操作的障碍物的信息。
  27. 根据权利要求26所述的控制设备,其特征在于,
    所述处理器,还具体用于:根据所述障碍物地图确定是否触发避障操作。
  28. 根据权利要求27所述的控制设备,其特征在于,
    所述处理器,具体用于:根据所述障碍物地图和所述无人机的位置确定是否触发避障操作。
  29. 根据权利要求26-28任一项所述的控制设备,其特征在于,
    所述深度传感器,具体用于获取飞行空间中多帧障碍物的深度数据;
    所述处理器，具体用于根据所述多帧深度数据建立障碍物地图。
  30. 根据权利要求29所述的控制设备,其特征在于,
    所述处理器,具体用于:根据预设的运算模型为所述多帧深度数据中的至少两帧深度数据配置相应的权值,根据所述配置的权值以及相应的深度数据建立障碍物地图。
  31. 根据权利要求30所述的控制设备,其特征在于,
    所述预设的运算模型包括所述深度传感器的测量精度模型。
  32. 根据权利要求26-31任一项所述的控制设备,其特征在于,
    所述处理器,具体用于:获取所述无人机的位置信息和/或姿态信息,根据所述深度数据与所述无人机的位置信息和/或姿态信息建立障碍物地图。
  33. 根据权利要求26-32任一项所述的控制设备,其特征在于,
    所述处理器,具体用于:对获取的深度数据进行预处理,根据预处理后的深度数据建立障碍物地图。
  34. 根据权利要求33所述的控制设备,其特征在于,
    所述预处理包括形态学滤波处理。
  35. 根据权利要求26-34任一项所述的控制设备,其特征在于,
    所述处理器,还具体用于:确定所述障碍物地图中的待删除的障碍物地图部分,将所述待删除的障碍物地图部分删除。
  36. 根据权利要求35所述的控制设备,其特征在于,
    所述处理器,具体用于:根据所述无人机的位置信息、预设的距离阈值中的一种或多种来确定所述待删除的障碍物地图部分。
  37. 根据权利要求26-36任一项所述的控制设备,其特征在于,
    所述处理器,还具体用于:根据所述障碍物地图确定障碍物之间的间隙的距离信息,根据所述距离信息确定所述间隙的可通过性信息,将所述可通过性信息发送给所述控制终端。
  38. 根据权利要求26-37任一项所述的控制设备,其特征在于,
    所述处理器,还具体用于:根据所述障碍物地图确定避绕所述触发避障操作的障碍物的飞行路径。
  39. 根据权利要求38所述的控制设备,其特征在于,
    所述处理器,还具体用于:将所述飞行路径相关的信息发送给所述控制终端。
  40. 一种障碍物的提示设备,其特征在于,包括:
    通讯接口,用于接收无人机上的拍摄设备拍摄的实时图像,以及接收所述无人机发送的触发避障操作的障碍物的信息;
    一个或多个处理器,单独或协同地工作,用于:将所述障碍物的信息映射到交互装置显示的实时图像上。
  41. 根据权利要求40所述的提示设备,其特征在于,
    所述处理器,具体用于:将所述障碍物的信息映射在交互装置显示的所述实时图像中的所述触发避障操作的障碍物上。
  42. 根据权利要求40或41所述的提示设备,其特征在于,
    所述处理器,具体用于:将所述障碍物的信息映射在交互装置显示的所述实时图像的边缘。
  43. 根据权利要求40-42任一项所述的提示设备,其特征在于,
    所述处理器,具体用于:获取所述无人机发送的所述拍摄设备的姿态信息和/或所述拍摄设备的参数信息,根据所述拍摄设备的姿态信息和/或所述拍摄设备的参数信息将所述障碍物的信息映射到交互装置显示的所述实时图像上。
  44. 根据权利要求43所述的提示设备,其特征在于,
    所述拍摄设备的参数信息包括:所述拍摄设备的焦距、FOV、内参、外参中的一种或多种。
  45. 根据权利要求40-44任一项所述的提示设备,其特征在于,
    所述处理器，具体被配置用于：将所述障碍物的信息以图标的形式映射到交互装置显示的所述实时图像上。
  46. 根据权利要求45所述的提示设备,其特征在于,
    所述图标的种类、大小、颜色、透明度中的一种或多种图标参数为根据所述障碍物的信息确定的参数。
  47. 根据权利要求40-46任一项所述的提示设备,其特征在于,
    所述通讯接口,用于接收所述无人机发送的障碍物之间的间隙的可通过性信息;
    所述处理器,还具体用于:将所述可通过性信息映射到交互装置显示的所述实时图像上。
  48. 根据权利要求40-47任一项所述的提示设备,其特征在于,
    所述通讯接口，用于接收所述无人机发送的避绕所述触发避障操作的障碍物的飞行路径相关信息；
    所述处理器,还具体被配置用于:将所述相关信息映射到交互装置显示的所述实时图像上。
  49. 一种无人机,其特征在于,包括:
    机身;
    固定在机身上的动力***,用于提供飞行动力;
    如权利要求25-39任一项所述的无人机的控制设备。
PCT/CN2017/082190 2017-04-27 2017-04-27 无人机的控制方法、设备及障碍物的提示方法、设备 WO2018195857A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2017/082190 WO2018195857A1 (zh) 2017-04-27 2017-04-27 无人机的控制方法、设备及障碍物的提示方法、设备
CN201780005315.7A CN108521807B (zh) 2017-04-27 2017-04-27 无人机的控制方法、设备及障碍物的提示方法、设备
CN202210263027.4A CN114815863A (zh) 2017-04-27 2017-04-27 无人机的控制方法、设备及障碍物的提示方法、设备
US16/663,902 US11797028B2 (en) 2017-04-27 2019-10-25 Unmanned aerial vehicle control method and device and obstacle notification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/082190 WO2018195857A1 (zh) 2017-04-27 2017-04-27 无人机的控制方法、设备及障碍物的提示方法、设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/663,902 Continuation US11797028B2 (en) 2017-04-27 2019-10-25 Unmanned aerial vehicle control method and device and obstacle notification method and device

Publications (1)

Publication Number Publication Date
WO2018195857A1 true WO2018195857A1 (zh) 2018-11-01

Family

ID=63433086

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/082190 WO2018195857A1 (zh) 2017-04-27 2017-04-27 无人机的控制方法、设备及障碍物的提示方法、设备

Country Status (3)

Country Link
US (1) US11797028B2 (zh)
CN (2) CN108521807B (zh)
WO (1) WO2018195857A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583384A (zh) * 2018-11-30 2019-04-05 百度在线网络技术(北京)有限公司 用于无人驾驶车的避障方法和装置
CN114022760A (zh) * 2021-10-14 2022-02-08 湖南北斗微芯数据科技有限公司 铁路隧道障碍物监测预警方法、***、设备及存储介质

Families Citing this family (18)

Publication number Priority date Publication date Assignee Title
CN113778114A (zh) * 2014-11-07 2021-12-10 索尼公司 控制***、控制方法以及存储介质
CN110892353A (zh) * 2018-09-30 2020-03-17 深圳市大疆创新科技有限公司 控制方法、控制装置、无人飞行器的控制终端
EP3876070B1 (en) 2018-11-21 2024-02-28 Autel Robotics Co., Ltd. Method and device for planning path of unmanned aerial vehicle, and unmanned aerial vehicle
WO2020107454A1 (zh) * 2018-11-30 2020-06-04 深圳市大疆创新科技有限公司 障碍物的精准确定方法、设备及计算机可读存储介质
CN111326023B (zh) * 2018-12-13 2022-03-29 丰翼科技(深圳)有限公司 一种无人机航线预警方法、装置、设备及存储介质
WO2020215198A1 (zh) * 2019-04-23 2020-10-29 深圳市大疆创新科技有限公司 一种数据处理方法、装置、设备及可移动平台
CN110244760A (zh) * 2019-06-06 2019-09-17 深圳市道通智能航空技术有限公司 一种避障方法、装置和电子设备
CN112578808A (zh) * 2019-09-29 2021-03-30 ***通信有限公司研究院 信息收集的方法、装置、计算机设备和存储介质
CN110825106B (zh) * 2019-10-22 2022-04-22 深圳市道通智能航空技术股份有限公司 一种飞行器的避障方法、飞行器、飞行***及存储介质
CN110673647B (zh) * 2019-11-07 2022-05-03 深圳市道通智能航空技术股份有限公司 全向避障方法及无人飞行器
WO2021212519A1 (zh) * 2020-04-24 2021-10-28 深圳市大疆创新科技有限公司 飞行指引方法、装置、***、遥控终端及可读存储介质
CN112771464A (zh) * 2020-04-27 2021-05-07 深圳市大疆创新科技有限公司 飞行提示方法、装置、控制终端、***及存储介质
CN112173103B (zh) * 2020-07-03 2022-02-15 中建交通建设集团有限公司 一种用于钻爆法施工隧道工作面检测装置及方法
CN113795803B (zh) * 2020-08-17 2024-05-14 深圳市大疆创新科技有限公司 无人飞行器的飞行辅助方法、设备、芯片、***及介质
CN114202614B (zh) * 2021-12-03 2022-09-27 浙江省土地信息中心有限公司 一种不动产三维建模方法、***、存储介质及智能终端
CN114594798A (zh) * 2022-03-18 2022-06-07 广州极飞科技股份有限公司 地图构建方法、路径规划方法、装置及电子设备
WO2023184487A1 (zh) * 2022-04-01 2023-10-05 深圳市大疆创新科技有限公司 无人机避障方法、装置、无人机、遥控设备和存储介质
CN115631660A (zh) * 2022-12-07 2023-01-20 南通翔昇人工智能科技有限公司 一种基于云计算的无人机安防监管***

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102528811A (zh) * 2011-12-19 2012-07-04 上海交通大学 托克马克腔内的机械臂定位与避障***
KR20130009894A (ko) * 2013-01-05 2013-01-23 이상윤 공간정보기술을 이용한 근거리 정밀타격 무인항공기시스템
CN105138002A (zh) * 2015-09-10 2015-12-09 华南农业大学 基于激光与双目视觉的无人机避险探测***及方法
CN204946369U (zh) * 2015-08-24 2016-01-06 深圳市诺亚星辰科技开发有限公司 一种无人直升机自动预警***
CN105517666A (zh) * 2014-09-05 2016-04-20 深圳市大疆创新科技有限公司 基于情景的飞行模式选择
CN106527424A (zh) * 2016-09-20 2017-03-22 深圳市银星智能科技股份有限公司 移动机器人及移动机器人的导航方法

Family Cites Families (15)

Publication number Priority date Publication date Assignee Title
US7375678B2 (en) * 2005-06-29 2008-05-20 Honeywell International, Inc. Displaying obstacles in perspective view
US8244469B2 (en) * 2008-03-16 2012-08-14 Irobot Corporation Collaborative engagement for target identification and tracking
US9390559B2 (en) * 2013-03-12 2016-07-12 Honeywell International Inc. Aircraft flight deck displays and systems and methods for enhanced display of obstacles in a combined vision display
CN103439973B (zh) * 2013-08-12 2016-06-29 桂林电子科技大学 自建地图家用清洁机器人及清洁方法
KR20150051735A (ko) * 2013-11-05 2015-05-13 현대모비스 주식회사 주차 보조 시스템 및 그 방법
CN107577247B (zh) * 2014-07-30 2021-06-25 深圳市大疆创新科技有限公司 目标追踪***及方法
EP3855276A1 (en) * 2014-09-05 2021-07-28 SZ DJI Technology Co., Ltd. Multi-sensor environmental mapping
CN104750110A (zh) * 2015-02-09 2015-07-01 深圳如果技术有限公司 一种无人机的飞行方法
US10831186B2 (en) * 2015-04-14 2020-11-10 Vantage Robotics, Llc System for authoring, executing, and distributing unmanned aerial vehicle flight-behavior profiles
US10019907B2 (en) * 2015-09-11 2018-07-10 Qualcomm Incorporated Unmanned aerial vehicle obstacle detection and avoidance
US10665115B2 (en) * 2016-01-05 2020-05-26 California Institute Of Technology Controlling unmanned aerial vehicles to avoid obstacle collision
CN105761265A (zh) * 2016-02-23 2016-07-13 英华达(上海)科技有限公司 利用影像深度信息提供避障的方法及无人飞行载具
CN105973230B (zh) * 2016-06-30 2018-09-28 西安电子科技大学 一种双无人机协同感知与规划方法
US20180033318A1 (en) * 2016-07-29 2018-02-01 Ge Aviation Systems Llc Sense and avoid maneuvering
JP2018036102A (ja) * 2016-08-30 2018-03-08 ソニーセミコンダクタソリューションズ株式会社 測距装置、および、測距装置の制御方法

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN102528811A (zh) * 2011-12-19 2012-07-04 上海交通大学 托克马克腔内的机械臂定位与避障***
KR20130009894A (ko) * 2013-01-05 2013-01-23 이상윤 공간정보기술을 이용한 근거리 정밀타격 무인항공기시스템
CN105517666A (zh) * 2014-09-05 2016-04-20 深圳市大疆创新科技有限公司 基于情景的飞行模式选择
CN204946369U (zh) * 2015-08-24 2016-01-06 深圳市诺亚星辰科技开发有限公司 一种无人直升机自动预警***
CN105138002A (zh) * 2015-09-10 2015-12-09 华南农业大学 基于激光与双目视觉的无人机避险探测***及方法
CN106527424A (zh) * 2016-09-20 2017-03-22 深圳市银星智能科技股份有限公司 移动机器人及移动机器人的导航方法

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN109583384A (zh) * 2018-11-30 2019-04-05 百度在线网络技术(北京)有限公司 用于无人驾驶车的避障方法和装置
WO2020107974A1 (zh) * 2018-11-30 2020-06-04 百度在线网络技术(北京)有限公司 用于无人驾驶车的避障方法和装置
CN114022760A (zh) * 2021-10-14 2022-02-08 湖南北斗微芯数据科技有限公司 铁路隧道障碍物监测预警方法、***、设备及存储介质

Also Published As

Publication number Publication date
CN108521807A (zh) 2018-09-11
CN108521807B (zh) 2022-04-05
US11797028B2 (en) 2023-10-24
CN114815863A (zh) 2022-07-29
US20200201361A1 (en) 2020-06-25

Similar Documents

Publication Publication Date Title
WO2018195857A1 (zh) 无人机的控制方法、设备及障碍物的提示方法、设备
US11726498B2 (en) Aerial vehicle touchdown detection
KR102366293B1 (ko) 디지털 트윈을 이용한 증강현실 기반 현장 모니터링 시스템 및 방법
US20190250601A1 (en) Aircraft flight user interface
US10692387B2 (en) Method and device for setting a flight route
US20210289141A1 (en) Control method and apparatus for photographing device, and device and storage medium
KR102631147B1 (ko) 공항용 로봇 및 그의 동작 방법
WO2018086130A1 (zh) 飞行轨迹的生成方法、控制装置及无人飞行器
CN106292799B (zh) 无人机、遥控装置及其控制方法
WO2021078003A1 (zh) 无人载具的避障方法、避障装置及无人载具
CN107015638A (zh) 用于向头戴式显示器用户报警的方法和装置
US12018947B2 (en) Method for providing navigation service using mobile terminal, and mobile terminal
CN107000832A (zh) 无人机起落架控制方法、装置、无人机及其***
KR20210086072A (ko) 실시간 현장 작업 모니터링 방법 및 시스템
US20180276997A1 (en) Flight tag obtaining method, terminal, and server
CN107450573B (zh) 飞行拍摄控制***和方法、智能移动通信终端、飞行器
CN108700882A (zh) 飞行控制方法及装置、监测方法及装置、存储介质
CN112771438B (zh) 利用二维输入选择对三维深度图像进行深度雕塑
US11509809B2 (en) Following control method, control terminal, and unmanned aerial vehicle
US20220081114A1 (en) Method and device for controlling flight, control terminal, flight system and processor
CN110187720A (zh) 无人机导引方法、装置、***、介质及电子设备
WO2018195905A1 (zh) 一种无人机手掌降落的控制方法、控制设备及无人机
CN109977845A (zh) 一种可行驶区域检测方法及车载终端
CN107065920A (zh) 避障控制方法、装置及无人机
US20190072986A1 (en) Unmanned aerial vehicles

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17906835

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17906835

Country of ref document: EP

Kind code of ref document: A1