CN112771576A - Position information acquisition method, device and storage medium

Info

Publication number: CN112771576A
Application number: CN202080005236.8A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: target object, camera, position information, acquiring, images
Inventor: 周琦
Assignee (original and current): SZ DJI Technology Co Ltd
Legal status: Pending

Classifications

    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Abstract

A position information acquisition method, device, and storage medium, including: acquiring a plurality of images through a camera (133) (S101); acquiring offset angles of the camera (133) when the plurality of images are acquired (S102); acquiring coordinate positions of a target object in the plurality of images (S103); determining a distance between the target object and the camera (133) according to the coordinate positions and the offset angles (S104); and determining position information of the target object according to the distance between the target object and the camera (133) (S105). The method improves the accuracy of position information acquisition.

Description

Position information acquisition method, device and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a position information acquisition method, device, and storage medium.
Background
With the growing range of application scenarios for unmanned aerial vehicles, demands for depth maps, distance measurement, target position acquisition, and the like are increasing. Taking target position acquisition as an example, current methods generally collect images through at least two cameras preset on the unmanned aerial vehicle and determine the target position from the pixel position relationships among the images. Taking two cameras as an example, a certain baseline distance is required between them, so the unmanned aerial vehicle cannot be made small; when the unmanned aerial vehicle is small it cannot carry two cameras, and the cost of realizing target positioning with two cameras is high. Moreover, the relative position between the two cameras is fixed and must be calibrated before leaving the factory; it cannot be changed afterwards. If the relative position changes, the two cameras need to be recalibrated, and if the change is too large, correction may be impossible, which is inconvenient and lowers the accuracy of the obtained target position. In addition, due to cost constraints, binocular cameras have low resolution and a short detection distance, which further reduces the accuracy of target position acquisition.
Disclosure of Invention
Embodiments of the present application provide a position information acquisition method, device, and storage medium, which can improve the accuracy of position information acquisition.
In a first aspect, an embodiment of the present application provides a position information acquisition method, including:
acquiring a plurality of images through a camera;
acquiring offset angles of the camera when the plurality of images are acquired;
acquiring coordinate positions of a target object in the plurality of images;
determining a distance between the target object and the camera according to the coordinate position and the offset angle;
and determining the position information of the target object according to the distance between the target object and the camera.
In a second aspect, an embodiment of the present application further provides a location information acquiring system, including:
a memory for storing a computer program;
and the processor is used for calling the computer program in the memory so as to execute any position information acquisition method provided by the embodiment of the application.
In a third aspect, an embodiment of the present application further provides a remote control terminal, including:
a display for displaying an image and position information of the target object;
a memory for storing a computer program;
and the processor is used for calling the computer program in the memory so as to execute any position information acquisition method provided by the embodiment of the application.
In a fourth aspect, an embodiment of the present application further provides a pan-tilt camera, including:
a camera for capturing an image;
the holder is used for carrying the camera;
a memory for storing a computer program;
and the processor is used for calling the computer program in the memory so as to execute any position information acquisition method provided by the embodiment of the application.
In a fifth aspect, an embodiment of the present application further provides a movable platform, including:
a camera for acquiring a plurality of images;
a memory for storing a computer program;
and the processor is used for calling the computer program in the memory so as to execute any position information acquisition method provided by the embodiment of the application.
In a sixth aspect, an embodiment of the present application further provides a storage medium, where the storage medium is used to store a computer program, and the computer program is loaded by a processor to execute any one of the position information obtaining methods provided in the embodiments of the present application.
In a seventh aspect, an embodiment of the present application further provides a computer program, where the computer program is loaded by a processor to execute any one of the position information obtaining methods provided in the embodiment of the present application.
According to the embodiments of the present application, a plurality of images can be acquired through a camera, the offset angles of the camera when acquiring the plurality of images can be obtained, and the coordinate positions of the target object in the plurality of images can be acquired; the distance between the target object and the camera is then determined according to the coordinate positions and the offset angles, and the position information of the target object is determined according to that distance. In this scheme, the position information of the target object is acquired through a single camera, which reduces cost, and the position information is accurately determined based on the offset angles, the distance between the target object and the camera, and the like, improving the accuracy of position information acquisition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description illustrate only some embodiments of the present application; those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a schematic diagram of an application scenario of a location information obtaining method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a location information obtaining method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of selecting an object from a list of objects provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of selecting an object from an image, provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of automatically identifying a target object according to an embodiment of the present application;
FIG. 6 is a schematic diagram of distance determination between a target and a camera provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of determining position information of a target object provided by an embodiment of the present application;
FIG. 8 is another schematic diagram of the determination of the position information of the target object provided by the embodiments of the present application;
fig. 9 is a schematic diagram illustrating position information of a target object displayed in a pop-up window according to an embodiment of the present application;
FIG. 10 is a schematic diagram of labeling a target object and displaying position information of the target object according to an embodiment of the present application;
FIG. 11 is another schematic diagram of an embodiment of the present application for labeling a target object and displaying position information of the target object;
fig. 12 is a schematic structural diagram of a position information acquisition system according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a remote control terminal provided in an embodiment of the present application;
fig. 14 is a schematic structural diagram of a pan-tilt camera provided in an embodiment of the present application;
fig. 15 is a schematic structural diagram of a movable platform provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Embodiments of the present application provide a position information acquiring method, device, and storage medium, which are used to determine a distance between a target object and a camera based on offset angles when multiple images are acquired and a coordinate position of the target object, and determine position information of the target object according to the distance between the target object and the camera, thereby improving accuracy of acquiring position information of the target object.
The storage medium is a computer-readable storage medium, the device may include a position information acquisition system, a remote control terminal, a pan-tilt camera, a movable platform, and the like, types of the position information acquisition system, the remote control terminal, the pan-tilt camera, the movable platform, and the like may be flexibly set according to actual needs, and specific content is not limited here. For example, the position information acquisition system may include a camera, or the position information acquisition system may include a drone provided with a camera and a remote control terminal for controlling the drone, or the like.
For example, the remote control terminal may be a remote control device provided with a display for displaying an image and position information of an object, and control keys and the like, for establishing a communication connection with and controlling the movable platform. The remote control terminal can also be a third-party mobile phone or a tablet personal computer and the like, establishes communication connection with the movable platform through a preset protocol, and controls the movable platform.
The pan-tilt camera may include a camera, a pan-tilt, and the like; the pan-tilt may include a shaft arm that can drive the camera to move, for example, moving the camera to a suitable position through the shaft arm so as to acquire an image containing the target object through the camera. The camera may be a monocular camera, and its type may be an ultra-wide-angle camera, a telephoto camera (i.e., a zoom camera), an infrared camera, a far-infrared camera, an ultraviolet camera, a Time of Flight depth camera (TOF depth camera for short), and the like.
The movable platform may include a gimbal (pan-tilt), a platform body, a camera, and the like; the platform body may carry the gimbal, and the gimbal may carry the camera, so that the gimbal can drive the camera to move. Specifically, the type of the movable platform can be flexibly set according to actual needs. For example, the movable platform may be a mobile terminal, a drone, a robot, or a vehicle, which may be an unmanned vehicle, or the like. The drone may include a camera, a distance measuring device, an obstacle sensing device, and the like. The drone may also include a gimbal for carrying the camera; the gimbal can drive the camera to a suitable position so as to acquire the required images through the camera. The drone may be a rotor-type drone (e.g., a quad-rotor drone, a hexa-rotor drone, or an octa-rotor drone), a fixed-wing drone, or a combination of a rotor-type drone and a fixed-wing drone, which is not limited here.
The movable platform may further be provided with a positioning device such as a Global Positioning System (GPS) receiver. The camera and the positioning device may lie in the same plane, in which they may be on the same straight line or form a preset included angle; of course, the camera and the positioning device may also lie in different planes, in which case the positional relationship between them can be converted accordingly.
Fig. 1 is a schematic view of a scene for implementing the position information acquisition method provided in this embodiment. As shown in fig. 1, taking the movable platform being an unmanned aerial vehicle as an example, a remote control terminal 100 is in communication connection with an unmanned aerial vehicle 200. The remote control terminal 100 may be configured to control the flight of the unmanned aerial vehicle 200 or have it execute corresponding actions, and to obtain corresponding motion information from the unmanned aerial vehicle 200; the motion information may include flight direction, flight attitude, flight altitude, flight speed, position information, and the like. The unmanned aerial vehicle 200 sends the obtained motion information to the remote control terminal 100, and the remote control terminal 100 analyzes and displays it. The remote control terminal 100 may also receive a control command input by the user and accordingly control a distance measuring device or a camera on the drone 200 based on the control command. For example, the remote control terminal 100 may receive a shooting instruction or a distance measurement instruction input by a user and send it to the drone 200; the drone 200 may control the camera to shoot a picture according to the shooting instruction, or control the distance measuring device to measure the distance to a target object according to the distance measurement instruction, and the like.
In some embodiments, the obstacle sensing device of the unmanned aerial vehicle 200 may acquire sensing signals around the unmanned aerial vehicle 200; by analyzing the sensing signals, obstacle information may be obtained and displayed on the display of the unmanned aerial vehicle 200, so that the user may learn of the obstacles sensed by the unmanned aerial vehicle 200 and control it to avoid them. The display can be a liquid crystal display screen, a touch screen, or the like.
In some embodiments, the obstacle sensing device may comprise at least one sensor for acquiring a sensing signal from the drone 200 in at least one direction. For example, the obstacle sensing device may include a sensor for detecting an obstacle in front of the drone 200. For example, the obstacle sensing device may include two sensors for detecting obstacles in front of and behind the drone 200, respectively. For example, the obstacle sensing device may include four sensors for detecting obstacles and the like in front of, behind, to the left, and to the right of the drone 200, respectively. For example, the obstacle sensing device may include five sensors for detecting obstacles and the like in front of, behind, to the left, to the right, and above the drone 200, respectively. For example, the obstacle sensing device may include six sensors for detecting obstacles in front of, behind, to the left of, to the right of, above, and below the drone 200, respectively. The sensors in the obstacle sensing device may be implemented separately or integrally. The detection direction of the sensor can be set according to specific needs to detect obstacles in various directions or direction combinations, and is not limited to the form disclosed in the present application.
The drone 200 may have one or more propulsion units to support its flight. The one or more propulsion units may move the drone 200 in one or more, two or more, three or more, four or more, five or more, or six or more degrees of freedom. In some cases, the drone 200 may rotate about one, two, three, or more axes of rotation. The axes of rotation may be perpendicular to each other and may remain so throughout the flight of the drone 200. The axes of rotation may include a pitch axis, a roll axis, and/or a yaw axis. The drone 200 may be movable in one or more dimensions; for example, it can move upward due to the lift generated by one or more rotors. In some cases, the drone 200 may move along a Z axis (upward with respect to the drone 200), an X axis, and/or a Y axis (lateral). The drone 200 may move along one, two, or three mutually perpendicular axes.
Drone 200 may be a rotorcraft. In some cases, drone 200 may be a multi-rotor drone that includes multiple rotors. The rotors may rotate to generate lift for the drone 200; a rotor is a propulsion unit that allows the drone 200 to move freely in the air. The rotors may rotate at the same rate and/or produce the same amount of lift or thrust, or they may rotate at different rates, generating different amounts of lift or thrust and/or allowing the drone 200 to rotate. In some cases, one, two, three, four, five, six, seven, eight, nine, ten, or more rotors may be provided on the drone 200. The rotors may be arranged with their axes of rotation parallel to each other; in some cases, the axes of rotation may be at any angle relative to each other, which may affect the motion of the drone 200.
The drone 200 may have multiple rotors. The rotor may be connected to the body of the drone 200, which may include a control unit, an Inertial Measurement Unit (IMU), a processor, a battery, a power source, and/or other sensors. The rotor may be connected to the body by one or more arms or extensions that branch off from a central portion of the body. For example, one or more arms may extend radially from the central body of the drone 200 and may have rotors at or near the ends of the arms.
It should be noted that the device structures in fig. 1 do not constitute a limitation on an application scenario of the location information acquisition method.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a method for acquiring location information according to an embodiment of the present application. The position information acquisition method can be applied to equipment such as a position information acquisition system, a remote control terminal, a pan-tilt camera or a movable platform and the like, and is used for accurately acquiring the position information of the target object. As will be described in detail below.
As shown in fig. 2, the position information acquiring method may include steps S101 to S105, and the like, and specifically may be as follows:
s101, collecting a plurality of images through a camera.
The camera can acquire the plurality of images continuously or at preset time intervals, and the acquisition time point of each image is different.
For example, an image acquisition instruction may be generated and sent to the camera, the camera is controlled to acquire a plurality of images based on the image acquisition instruction, and the plurality of images returned by the camera are received.
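For illustration only, a minimal capture loop corresponding to this step might look like the following Python sketch; OpenCV is assumed as the capture backend, and the device index, frame count, and interval are illustrative assumptions rather than values from the application:

    import time
    import cv2  # assumed capture backend; any camera SDK with timestamps would work

    def capture_images(num_images=2, interval_s=0.1):
        """Capture several frames, recording the acquisition time of each."""
        cap = cv2.VideoCapture(0)  # device index 0 is an assumption
        frames, timestamps = [], []
        try:
            for _ in range(num_images):
                ok, frame = cap.read()
                if not ok:
                    raise RuntimeError("camera read failed")
                frames.append(frame)
                timestamps.append(time.time())  # acquisition time point per image
                time.sleep(interval_s)          # preset time interval between frames
        finally:
            cap.release()
        return frames, timestamps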
S102, acquiring offset angles of the camera when a plurality of images are acquired.
The offset angle may be generated by the camera shaking during the image capturing process, or may be generated by the camera moving during the image capturing process.
In some embodiments, obtaining the offset angle of the camera when acquiring the plurality of images may include: acquiring the time of acquiring a plurality of images to obtain a plurality of times; acquiring angular speeds corresponding to a plurality of moments of a camera; and determining the offset angle of the camera according to the angular speeds corresponding to the multiple moments.
In order to improve the accuracy of obtaining the offset angle, the time corresponding to each image acquisition may be recorded while the camera acquires the images; for example, the times at which the camera acquired the plurality of images may be received from the camera to obtain a plurality of times, and the angular velocity of the camera at each of these times may then be obtained.
In order to improve the reliability of angular velocity acquisition, the angular velocity detected by an angular velocity sensor may be regarded as the angular velocity of the camera. In some embodiments, acquiring the angular velocities corresponding to the camera at the plurality of time instants may include: acquiring, through a preset angular velocity sensor, the angular velocity of the camera in the three axis directions at each moment.
For example, a detection instruction may be sent to a preset angular velocity sensor to control it to detect, for each moment, the angular velocities in the three axis directions (the X axis, the Y axis, and the Z axis), and the angular velocities returned by the sensor may be received; the angular velocity corresponding to each moment thus includes the angular velocities in the three axis directions. For example, a set of angular velocities 1 is obtained at time a, a set of angular velocities 2 is obtained at time b, and a set of angular velocities 3 is obtained at time c. The angular velocity sensor may be an Inertial Measurement Unit (IMU).
In this case, the offset angle of the camera may be determined from the angular velocities corresponding to the plurality of times. For example, the set of angular velocities 1 obtained at time a, the set of angular velocities 2 obtained at time b, and the set of angular velocities 3 obtained at time c may be integrated over time to obtain an angle value, which is taken as the offset angle of the camera.
The offset angle may include an offset angle in a pitch direction (i.e., an offset angle in an X-axis direction), an offset angle in a yaw direction (i.e., an offset angle in a Y-axis direction), an offset angle in a roll direction (i.e., an offset angle in a Z-axis direction), and the like.
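As an illustrative sketch of the integration described above, assuming timestamped three-axis gyroscope samples from the IMU (the function name and data layout are assumptions):

    def offset_angles(times, angular_velocities):
        """Integrate per-axis angular velocities (rad/s) over the capture interval.

        times: list of sample timestamps in seconds.
        angular_velocities: list of (wx, wy, wz) tuples, one per timestamp.
        Returns the accumulated (pitch, yaw, roll) offset angle in radians,
        using trapezoidal integration between consecutive samples.
        """
        angle = [0.0, 0.0, 0.0]
        for i in range(1, len(times)):
            dt = times[i] - times[i - 1]
            for axis in range(3):
                w_prev = angular_velocities[i - 1][axis]
                w_curr = angular_velocities[i][axis]
                angle[axis] += 0.5 * (w_prev + w_curr) * dt
        return tuple(angle)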
And S103, acquiring the coordinate positions of the target object in the plurality of images.
The collected images may include a target object, other objects, and the like, and the target object may be flexibly set according to actual needs, and specific contents are not limited herein. The target may be a complete object or a point, for example, the target may be any object such as a license plate, a person, an animal, a plant, a vehicle, a building, or a star, or even a non-solid or non-three-dimensional target such as a flame, a cloud, or a water body, as long as the target can be imaged via one or more pixel points of the camera photosensitive element.
In order to improve the accuracy of acquiring the coordinate position of the target object, each acquired image may contain the target object. The way the target object is determined can be flexibly set according to actual needs: for example, it can be determined by receiving a selection instruction input by the user, or it can be automatically identified and selected, and so on. For example, the picture captured by the camera may be obtained and its central point set as the target object, or the object area at the center of the picture set as the target object.
In some embodiments, acquiring the coordinate position of the target object in the plurality of images may include: receiving a selection instruction input by a user; and selecting the target object from the plurality of images according to the selection instruction, and determining the coordinate position of the target object.
In order to improve the accuracy and flexibility of target object determination, a voice signal, a gesture, a touch operation, fingerprint information, or the like input by the user can be received, and a selection instruction generated from it. The voice signal, gesture, touch operation, or fingerprint information can be flexibly set according to actual needs. For example, the voice signal may be in Chinese, English, etc., and its content may be "select object a as the target object" or "acquire position information of object a"; a fist gesture may correspond to selecting a license plate as the target object, and a scissors gesture to selecting scissors as the target object; the touch operation may be a click operation; the fingerprint information may map, for instance, the right thumb of user A to selecting a person as the target object and the right index finger of user B to selecting a dog as the target object. A target object may then be selected from the plurality of images according to the selection instruction, and its coordinate position within each image determined; this coordinate position may be the pixel coordinates of the target object in the image. When the target object is a whole object, its center coordinates may be taken as its coordinate position; for example, the center coordinates of a vehicle may be taken as the coordinate position of the vehicle.
In some embodiments, receiving a selection instruction input by a user, selecting a target object from the plurality of images according to the selection instruction, and determining the coordinate position of the target object may include: carrying out object recognition on the multiple images to obtain an object identifier corresponding to at least one object; displaying an object list containing at least one object identifier; receiving a selection instruction input by a user based on the object list; selecting an object identifier from the object list according to the selection instruction; and determining the target object from the plurality of images according to the object identification, and determining the coordinate position of the target object.
In order to improve the flexibility and reliability of target object determination, the image currently acquired by the camera can be shown on a display screen (i.e., a display) for the user to view; a selection instruction input by the user can then be received on the interface displaying the image, and the target object determined according to that instruction. For example, as shown in fig. 3, after the camera acquires an image, all objects in the image may be recognized one by one through a preset recognition model, and a list of the recognized objects generated and displayed. The object list contains object identifiers corresponding to one or more objects; an identifier may be composed of numbers, letters, and/or Chinese characters, and may be the name or number of an object. Fig. 3 uses object names as an example: a vehicle, a license plate, a window, a wheel, a lamp, water, a tree, a road, and so on may be recognized. At this point, an operation of the user clicking or pressing the license plate entry in the object list (i.e., a selection instruction) may be received; the license plate selected by the user is determined from the position of the click or press, the area where the license plate is located in the image may be extracted as the target object, and the position of the license plate in the image detected to obtain the coordinate position of the target object.
The preset recognition model may be a trained recognition model, and for example, may obtain a plurality of sample images containing objects of different types, and train the recognition model based on the plurality of sample images to obtain the trained recognition model.
In some embodiments, receiving a selection instruction input by a user, selecting a target object from the plurality of images according to the selection instruction, and determining the coordinate position of the target object may include: selecting one image from a plurality of images to display; receiving a selection instruction generated by a user based on the displayed image input touch operation; setting a touch central point of the touch operation as a target object according to the selection instruction, or setting an object area where the touch central point is located as the target object; the coordinate position of the target object is determined.
In order to make it more convenient to determine the coordinate position of the target object, the image acquired by the camera can be displayed, and the target object can be determined by receiving a selection instruction input by the user on the displayed image. For example, as shown in fig. 4, if a vehicle exists in the image acquired by the camera, an operation of the user clicking or pressing the license plate on the vehicle with a finger (i.e., generating a selection instruction) may be received. At this time, the touch center point of the click or press (i.e., the touch operation) may be set as the target object; alternatively, the area where the click or press falls may be detected, the object in that area identified, and it is thus determined that the user selected the area where the license plate is located. The license plate area can then be extracted as the target object, and the position of the license plate in the image detected so as to obtain the coordinate position of the target object.
In some embodiments, receiving a selection instruction input by a user, selecting a target object from the plurality of images according to the selection instruction, and determining the coordinate position of the target object may include: receiving a voice signal input by a user or a selection instruction generated by a gesture; and selecting a target object corresponding to the voice signal or the gesture from the plurality of images according to the selection instruction, and determining the coordinate position of the target object.
For example, the user may input a voice signal related to acquiring the coordinate position of the target object; the voice signal may be in Chinese, English, or the like, and its content may be "select object a as the target object" or "acquire position information of object a". After the voice signal input by the user is received, a selection instruction for the target object may be generated; for example, object a is selected as the target object from the plurality of images according to the selection instruction, and the coordinate position of the target object in the images is detected.
For example, mapping relationships between different gestures (e.g., a fist gesture, an OK gesture, and the like) and different target objects may be preset: a fist gesture may correspond to selecting a license plate as the target object, a scissors gesture to selecting scissors as the target object, and an OK gesture to selecting a vehicle as the target object. The user may input the gesture corresponding to the desired target object; after the gesture input by the user is received and determined to match the gesture associated with an object B present in the image (e.g., with a similarity greater than 96%), a selection instruction may be generated to select object B as the target object. Object B may then be selected as the target object from the plurality of images according to the selection instruction, and the coordinate position of the target object within the images detected.
In some embodiments, acquiring the coordinate position of the target object in the plurality of images may include: extracting the features of the multiple images to obtain target feature information; and identifying the target objects in the multiple images according to the target characteristic information, and determining the coordinate positions of the target objects.
In order to improve the automation and convenience of determining the coordinate position of the target object, the target object can be detected automatically. For example, feature extraction may be performed on each image acquired by the camera to obtain target feature information, which can be flexibly set according to actual needs: it may be feature information of a license plate, of a certain animal, of a certain plant, of a certain person, or the like. The target object and the object area where it is located can then be identified in the acquired images according to the target feature information. For example, a pan-tilt camera may be mounted on a drone; after the drone takes off, it can patrol automatically through the pan-tilt camera, and when an image of a vehicle violation is acquired, the position of the violating vehicle's license plate in the image can be identified automatically. For another example, as shown in fig. 5, a text input box may be displayed for the user to enter the target object to be detected; the "license plate" entered by the user in the text input box may be received, it is then determined that the user selected the license plate, the license plate in the image may be identified by a preset recognition model, the area where the license plate is located extracted as the target object, and the coordinate position of the target object in the image detected.
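The following is a minimal sketch of such automatic identification, assuming a generic detection model is available; the detector interface and the label names are hypothetical and not a specific model from this application:

    from typing import Callable, List, Tuple

    # A detection is (label, (x_min, y_min, x_max, y_max)) in pixel coordinates.
    Detection = Tuple[str, Tuple[int, int, int, int]]

    def locate_target(image, detect: Callable[[object], List[Detection]],
                      target_label: str = "license plate"):
        """Run a recognition model over the image and return the centre pixel
        coordinate of the first detection whose label matches the target,
        or None if the target object is not found."""
        for label, (x0, y0, x1, y1) in detect(image):
            if label == target_label:
                return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)  # coordinate position
        return None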
And S104, determining the distance between the target object and the camera according to the coordinate position and the offset angle.
The distance between the target object and the camera is the depth information between the target object and the camera. Since some jitter may occur while the multiple images are being acquired, accurate acquisition of depth information with a single camera can be achieved based on the amount of jitter. In order to improve the accuracy of depth information acquisition, the focal length of the camera can be kept consistent within each group of images, so as to reduce errors caused by different zoom or focus settings within a group and thereby obtain more accurate depth information.
In some embodiments, determining the distance between the target object and the camera from the coordinate position and the offset angle may include: acquiring parameters of a camera; and determining the distance between the target object and the camera according to the parameters, the coordinate position and the offset angle.
In order to improve the accuracy of determining the distance between the target object and the camera, the distance may be determined by trigonometric calculation based on the parameters of the camera, the coordinate position of the target object, the offset angle of the camera, and the like, using the principle of triangulation. Specifically, the focal length of the camera, the pixel size of the sensor, the pixel pitch, and the like may be acquired as the parameters of the camera. The distance between the target object and the camera may then be determined from these parameters, the coordinate position, and the offset angle; here, the distance may be determined from the coordinate positions of the target object in any two of the images, or the coordinate positions in the multiple images may first be averaged and the distance determined from the averaged coordinate position.
Taking two images as an example, as shown in fig. 6, the focal length f, the pixel size u, and the pixel pitch d of the sensor corresponding to the two images acquired by the camera may be obtained, together with the coordinate position (x1, y1) of the target object in one image, the coordinate position (x2, y2) of the target object in the other image, the coordinate position (X, Y, Z) of the target object, and the offset angle a of the camera between the two acquisitions. The position x2' = x2 × cos(a) of x2 projected onto the X axis may then be calculated from the offset angle a, and the following may be obtained according to the similar-triangle principle:

B / z = (x1 - x2') / f, that is, z = B × f / (x1 - x2')
where B represents the displacement of the camera between capturing the two images, z represents the distance between the target object and the camera (which may be a pixel-scale distance), f represents the focal length of the camera, and x1 represents the horizontal coordinate of the target object in one of the images.
B may be acquired as follows: during image acquisition, the time corresponding to each acquisition may be recorded, and the acceleration of the camera at each time obtained; for example, the acceleration detected by the IMU may be taken as the acceleration of the camera. The corresponding speed may then be obtained from the acceleration, and the distance calculated from the speed and the times at which the camera acquired the two images, yielding the displacement of the camera between the two acquisitions.
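A sketch of this two-stage integration, assuming single-axis accelerometer samples between the two capture times (the sampling layout and initial speed v0 are assumptions):

    def displacement_between_frames(times, accelerations, v0=0.0):
        """Estimate the camera displacement B between the first and last
        timestamp by integrating acceleration (m/s^2) twice; v0 is the
        initial speed. One axis only, trapezoidal rule per step."""
        v, b = v0, 0.0
        for i in range(1, len(times)):
            dt = times[i] - times[i - 1]
            a_mid = 0.5 * (accelerations[i - 1] + accelerations[i])
            b += v * dt + 0.5 * a_mid * dt * dt  # distance over this step
            v += a_mid * dt                      # update speed
        return b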
At this time, the actual distance (i.e., depth information) between the target object and the camera in the three-dimensional space may be determined according to the pixel size u and the pixel pitch d of the sensor, for example, the distance between the target object and the camera may be z × d + u.
It should be noted that when the focal length f1 used by the camera when acquiring the first image differs from the focal length f2 used when acquiring the second image, the projected horizontal coordinate x2' of the target object in the second image corresponds to focal length f2, and its equivalent at focal length f1 is x2''. In this case, the following may be obtained according to the similar-triangle principle:

x2'' = x2' × f1 / f2, and B / z = (x1 - x2'') / f1
at this time, a distance z between the object and the camera, which may be a pixel distance, may be calculated, and then an actual distance between the object and the camera in the three-dimensional space (i.e., depth information) is determined according to a pixel size u and a pixel pitch d of the sensor.
And S105, determining the position information of the target object according to the distance between the target object and the camera.
After the distance between the target object and the camera is obtained, the position information of the target object can be determined according to the distance between the target object and the camera, and the position information can be latitude and longitude information.
In some embodiments, determining the position information of the target object according to the distance between the target object and the camera may include: acquiring a physical distance between a camera and a preset positioning device; and determining the position information of the target object according to the physical distance, the position obtained by positioning by the positioning device and the distance between the target object and the camera.
In order to improve the accuracy and reliability of acquiring the position information of the target object, a positioning device, which may be a GPS, may be preset; the positioning device can determine its own position through its positioning function. At this time, as shown in fig. 7 (a two-dimensional plan view in the XY direction), the physical distance L1 between the camera and the preset positioning device, that is, the distance between the installation positions of the camera and the positioning device, may be acquired. The position information (X2, Y2, Z2) of the camera may then be determined from the physical distance L1 and the position (X1, Y1, Z1) obtained by the positioning device. When the camera and the positioning device are on the same horizontal line, X2 = X1 + L1, Y2 = Y1, and Z2 = Z1, so the position information of the camera is (X1 + L1, Y1, Z1). When the camera and the positioning device are not on the same horizontal line, as shown in fig. 8 (likewise a two-dimensional plan view in the XY direction), the included angle θ between the camera and the positioning device may be acquired, and the position information of the camera may then be calculated from the angle θ, the physical distance L1, and the position (X1, Y1, Z1), giving (X1 + L1 × sin(θ), Y1 + L1 × cos(θ), Z1). Finally, the position information (X3, Y3, Z3) of the target object may be determined from the position information (X2, Y2, Z2) of the camera and the distance L2 between the target object and the camera, for example X3 = X2, Y3 = Y2 + L2, and Z3 = Z1.
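A sketch of this position chain in the XY plane of figs. 7 and 8; the axis conventions and the angle reference are assumptions, with θ = 90° reducing to the same-horizontal-line case:

    import math

    def target_position(x1, y1, z1, l1, theta, l2):
        """Chain: positioning device -> camera -> target object.

        (x1, y1, z1): position from the positioning device (e.g. GPS).
        l1: physical distance between camera and positioning device.
        theta: included angle between camera and positioning device (radians).
        l2: distance between target object and camera from the previous step.
        """
        cam_x = x1 + l1 * math.sin(theta)   # theta = pi/2 gives X2 = X1 + L1
        cam_y = y1 + l1 * math.cos(theta)   # theta = pi/2 gives Y2 = Y1
        cam_z = z1
        # The target is assumed to lie along the camera's Y viewing direction,
        # matching X3 = X2, Y3 = Y2 + L2, Z3 = Z1 in the text.
        return (cam_x, cam_y + l2, cam_z)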
In some embodiments, the relative positions of the camera and the positioning device in the position information acquisition system are fixed, so that the complexity of the position information acquisition system and the consumption of computing resources of the position information acquisition system can be reduced. In some embodiments, the relative position of the camera and the positioning device in the position information acquisition system is variable, for example, the relative position of the camera and the positioning device can be adjusted according to actual needs, which can ensure better user experience.
In some embodiments, the position information of the target object includes a plurality of pieces, and after determining the position information of the target object, the position information acquiring method may further include: forming a moving track of the target object according to the plurality of position information of the target object; acquiring the starting time and the ending time of the moving track, and determining the moving time according to the starting time and the ending time; and determining the moving speed of the target object according to the moving track and the moving time.
In order to enrich the information acquired about the target object, its moving speed can be obtained while the target object moves; and to make this convenient, the moving speed can be determined from a movement track formed by the plurality of pieces of position information of the target object. Specifically, the camera may acquire a plurality of images, every two of which may be grouped to obtain multiple groups of images; one piece of position information of the target object can be calculated from each group in the manner described above. As the target object moves, the multiple groups of images thus yield multiple pieces of position information, and the line connecting them forms the movement track of the target object. The start time corresponding to the start point of the movement track (i.e., the first piece of position information) and the end time corresponding to its end point (i.e., the last piece of position information) are then obtained; the moving time t of the target object is determined from the difference between the end time and the start time, and the moving speed is determined as v = s/t from the track length s and the moving time t.
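As a sketch of this speed computation, assuming the movement track is a list of timestamped positions (the data layout is illustrative):

    import math

    def moving_speed(track):
        """track: list of (t, x, y) samples, one per computed position.
        Returns the average speed v = s / t, where s is the summed length of
        the track segments and t the time from start point to end point."""
        if len(track) < 2:
            raise ValueError("need at least two positions to form a track")
        s = 0.0
        for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
            s += math.hypot(x1 - x0, y1 - y0)   # length of this segment
        elapsed = track[-1][0] - track[0][0]    # end time minus start time
        return s / elapsed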
In some embodiments, the location information acquiring method may further include: and acquiring the moving direction of the target object. For example, the moving direction of the positioning device can be determined, the moving direction of the positioning device can be used as the moving direction of the camera, and then the moving direction of the target object can be determined according to the moving direction of the camera and the relative position change of the target object in a plurality of images including the target object acquired by the camera.
It should be noted that, a first distance between the camera and the ground and a second distance between the camera and the target object may also be detected, and the height of the target object may be determined according to a difference between the first distance and the second distance.
In some embodiments, after determining the moving speed of the target object according to the moving trajectory and the moving time, the position information acquiring method may further include: and outputting the moving speed and the moving direction of the target object.
After the moving speed and moving direction of the target object are obtained, they can be output through voice or a display screen so that users who need them can view them. For example, an image of the target object can be acquired by the camera and displayed on the display screen together with the moving speed and moving direction of the target object; or the moving speed and moving direction can be sent to a mobile terminal, which is controlled to output them on the image acquired by the camera; and so on. For another example, in the security field, if a problem is found during drone inspection and linkage with a ground remote control terminal is needed, data such as the position information, moving speed, and moving direction of the target object can be acquired and fed back to the remote control terminal, which can greatly improve air-ground linkage efficiency and thus command efficiency.
In some embodiments, outputting the moving speed and the moving orientation of the target object may include: and displaying the moving speed and the moving direction of the target object in the displayed map interface.
In order to improve the flexibility of outputting the moving speed and moving direction, they may be displayed in the map interface. For example, ground information within the field of view can be acquired through the camera carried by the drone, and a map interface generated from it and shown in the display interface. In the displayed map interface, the target object is identified based on its position information, and the moving speed and moving direction are displayed; for example, the target object can be displayed in a flashing manner or marked with a preset color, with its moving speed and moving direction shown alongside.
In some embodiments, outputting the moving speed and the moving orientation of the target object may include: and broadcasting the moving speed and the moving direction of the target object through voice.
In order to improve the convenience of outputting the moving speed and moving direction, they can be broadcast through voice; the decibel level of the broadcast, its language (such as Chinese or English), and the like can be flexibly set according to actual needs. The broadcast can stop automatically after the number of repetitions reaches a preset count, or the user can click a close button to stop it; the preset count can be flexibly set according to actual needs.
In some embodiments, outputting the moving speed and the moving orientation of the target object may include: and displaying the moving speed and the moving direction of the target object in a pop-up window in the display interface.
In order to improve the flexibility of outputting the moving speed and moving direction of the target object, they can be displayed in a pop-up window in the display interface; the size, background color, display position, and the like of the pop-up window can be flexibly set according to actual needs. The dialog box shown by the pop-up window can close automatically after its display time reaches a preset duration, or the user can click the close button in its upper right corner; the preset duration can be flexibly set according to actual needs.
In some embodiments, outputting the moving speed and moving direction of the target object may include: displaying, in the display interface, the image containing the target object acquired by the camera, and displaying the moving speed and moving direction of the target object in the image.
In order to diversify the position information output, the image containing the target object acquired by the camera can be displayed in the display interface, the area where the target object is located can be marked in the image, the position information of the target object can be displayed in the display interface, and the moving speed and moving direction of the target object can be shown in the image.
In some embodiments, after determining the position information of the target object, the position information acquiring method may further include: and outputting the position information of the target object.
After the position information of the target object is obtained, it can be output through voice or a display screen so that users who need it can view it. For example, an image of the target object can be acquired by the camera and displayed on the display screen together with the position information of the target object; or the position information can be sent to a mobile terminal, which is controlled to output it on the image acquired by the camera; and so on.
The mobile terminal may include a mobile phone, a computer, or another terminal. In order to improve the convenience and flexibility of outputting the position information, after the position information of the target object is obtained it may be actively sent to the mobile terminal; for example, a control instruction carrying the position information, the image of the target object, and other information may be sent to the mobile terminal, and the mobile terminal controlled, based on the instruction, to display the image on its display screen and output the position information through voice, or to display the position information on the image acquired by the camera. Alternatively, after the position information of the target object is obtained, it may be stored, and whether an acquisition request sent by the mobile terminal is received may be detected; when such a request is received, the position information may be sent to the mobile terminal based on it. For example, a control instruction carrying the position information, the image of the target object, and other information may be sent to the mobile terminal based on the acquisition request, and the mobile terminal controlled to display or voice-broadcast the position information of the target object on the image acquired by the camera.
To further improve the flexibility of outputting the position information of the target object, it can also be sent to a preset mailbox or an instant-messaging window (such as an applet, an official account, a designated QQ window, or a designated WeChat window), where the mailbox type, the instant-messaging type, and the like can be set flexibly according to actual needs.
When the camera is of a particular type, it may, in addition to capturing images, compute or display other information about the target object, such as the distance between the target object and the camera, the temperature of the target object, or its height; in that case the temperature or height of the target object may be output along with its position information.
A setting instruction input by the user may also be received, and the output mode for the position information of the target object set according to that instruction.
In some embodiments, outputting the position information of the target object may include: identifying the target object in the displayed map interface and displaying the position information of the target object there.
To improve flexibility in outputting position information, the position information of the target object may be displayed within a map interface. For example, ground information within the field of view can be collected by a camera carried on an unmanned aerial vehicle and a map interface generated from that information; the map interface is then shown in the display interface, the target object is identified in it based on its position information, for example by displaying the target object with a flashing effect or marking it with a preset color, and the position information of the target object is displayed.
In some embodiments, outputting the position information of the target object may include: broadcasting the position information of the target object by voice.
To make the position information more convenient to consume, it can be broadcast by voice, where the volume of the broadcast, its language (for example, Chinese or English), and the like can be set flexibly according to actual needs. The broadcast may stop automatically once the number of repetitions reaches a preset count, or when the user clicks a close button, and so on; the preset count can likewise be set flexibly according to actual needs.
In some embodiments, outputting the position information of the target object may include: displaying the position information of the target object in a pop-up window in the display interface.
To improve the flexibility of outputting the position information of the target object, it can be displayed in a pop-up window within the display interface, where the size, background color, display position, and the like of the pop-up window can be set flexibly according to actual needs. For example, as shown in fig. 9, the position information of a vehicle may be displayed in a pop-up window as (X, Y). The dialog box shown in the pop-up window may close automatically once its display time reaches a preset duration, or may be closed by the user clicking the close button in its upper-right corner, and so on; the preset duration can be set flexibly according to actual needs.
In some embodiments, outputting the position information of the target object may include: displaying the image captured by the camera where the target object is located in the display interface, labeling the area where the target object is located in the image, and displaying the position information of the target object in the display interface.
To increase the variety of position-information output, the image captured by the camera where the target object is located can be displayed in the display interface, the area where the target object is located can be labeled in the image, and the position information of the target object can be displayed in a region below or above the display interface. Alternatively, the position information of the target object may be sent to a peripheral display device (for example, a mobile phone or a computer), which is controlled to display the image captured by the camera on its screen, label the area where the target object is located in the image, and display the position information of the target object in a region below or above its display interface.
In some embodiments, labeling the area where the target object is located within the image may include: determining the central position of the target object in the image; and drawing a polygon or a circle circumscribing the target object according to the central position, and setting the color of the polygon or circle to a target color that differs from the background color of the target object in the image, thereby obtaining the area where the target object is located.
To display the area where the target object is located prominently, so that the user can find the target object quickly, the central position of the target object may be determined within the image, for example the center of the ball shown in fig. 10, and a polygon or circle circumscribing the target object drawn around that central position; in fig. 10, a quadrangle circumscribing the ball is drawn. Setting the color of the polygon or circle to a target color that differs from the background color of the target object in the image then yields the area where the target object is located; the target color can be set flexibly according to actual needs.
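A minimal sketch of this annotation step, written with OpenCV. The bounding box of the target is assumed to come from an upstream detector and is not part of this application's disclosure; the contrasting target color is obtained here by inverting the mean color of the target patch.

```python
import cv2
import numpy as np

def annotate_target(image, bbox):
    # bbox: (x, y, w, h) bounding box of the target (assumed given).
    x, y, w, h = bbox
    center = (x + w // 2, y + h // 2)        # central position of the target
    radius = int(np.hypot(w, h) / 2)         # circle circumscribing the box
    # Choose a color that differs from the target's background: invert the
    # mean color of the target patch so the marker stands out.
    patch = image[y:y + h, x:x + w].reshape(-1, 3)
    marker_color = tuple(int(255 - c) for c in patch.mean(axis=0))
    cv2.circle(image, center, radius, marker_color, thickness=2)
    return image
```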
In practical applications, the identity of the user currently viewing the position information may be determined, for example by collecting the user's fingerprint information, or by capturing the user's face image with a camera and identifying the user from it. Once the user's identity is known, the user's habits (for example, a habit of marking the target object in red) or characteristics (for example, whether the user is red-green color blind) can be determined from the identity information, and the color used to mark the target object chosen according to those habits or characteristics.
In some embodiments, labeling the area where the target object is located within the image may include: extracting the outline of the target object from the image; and marking the area where the target object is located in a preset color according to the outline.
To label the target object more accurately and highlight the area where it is located, so that the user can find it quickly, the outline of the target object may be extracted from the image, for example the outline of the ball shown in fig. 11, and the area where the target object is located marked in a preset color according to that outline; the preset color can be set flexibly according to actual needs. A color-setting instruction input by the user can also be received in a settings interface, and the target color or the preset color set according to that instruction.
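A sketch of the contour-based variant, again with OpenCV (version 4 return conventions). Otsu thresholding inside the target's bounding box stands in for whatever segmentation the application actually uses; the bounding box and the preset color are assumptions.

```python
import cv2

def outline_target(image, bbox, preset_color=(0, 0, 255)):
    # bbox: (x, y, w, h) around the target (assumed given); preset_color
    # is BGR, red by default.
    x, y, w, h = bbox
    roi = image[y:y + h, x:x + w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # Separate the target from its background inside the box (stand-in
    # segmentation; the application does not specify one).
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Trace the extracted outline in the preset color on the full image.
    cv2.drawContours(image, contours, -1, preset_color, 2, offset=(x, y))
    return image
```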
It should be noted that the position information acquiring method may further include: receiving a recording instruction, a shooting instruction, a distance-measuring instruction, or a temperature-measuring instruction; and controlling the camera, according to the corresponding instruction, to record the captured footage of the target object, to shoot the target object, to measure the distance to the target object, or to measure the temperature of the target object.
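Sketched as a dispatch table, the four control instructions map directly onto camera actions. The instruction strings and the method names on `camera` are hypothetical placeholders; the application names no camera API.

```python
def handle_instruction(camera, instruction):
    # Map each received instruction to the corresponding camera action.
    actions = {
        "record": camera.start_recording,            # recording instruction
        "shoot": camera.take_photo,                  # shooting instruction
        "range": camera.measure_distance,            # distance-measuring instruction
        "measure_temp": camera.measure_temperature,  # temperature-measuring instruction
    }
    actions[instruction]()
```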
In some embodiments, after the offset angles of the camera when acquiring the plurality of images are acquired, the position information acquiring method may further include: acquiring the picture jitter amount between two adjacent images according to the offset angle; and displaying the plurality of images in sequence according to their capture times, correcting each displayed image according to its picture jitter amount.
To keep image display stable, after the plurality of images have been captured and the offset angles of the camera during their capture determined, the offset angle of the camera between two adjacent images can be taken as the picture jitter amount between those images; the jitter amount may be a vector value with a direction. Then, while the images are displayed in sequence according to their capture times, each displayed image is corrected based on its picture jitter amount, that is, moved by the jitter amount. For example, suppose the camera captures a first image, a second image, and a third image in chronological order. A first offset angle for the interval between the first and second images and a second offset angle for the interval between the second and third images can be determined in the manner described above; the first offset angle serves as the picture jitter amount between the first and second images, and the second offset angle as the jitter amount between the second and third images. When the second image is displayed, its display position is corrected according to the jitter amount between the first and second images; when the third image is displayed, its position is corrected according to the jitter amount between the second and third images, for example by moving the third image left, right, up, or down.
To improve the correction accuracy, the frame rate of image capture can be increased: this raises the correction frequency, reduces the per-frame picture jitter amount, and improves the stability of image display.
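A minimal sketch of the correction step, assuming the per-frame offset angle has already been obtained and that a pinhole model converts it to a pixel displacement (shift ≈ focal length in pixels × tan(angle)); the conversion is an assumption, not a formula stated in this passage.

```python
import cv2
import numpy as np

def stabilize_frame(frame, offset_angle, focal_px):
    # offset_angle: (yaw, pitch) of the camera between the previous frame
    # and this one, in radians (the picture jitter amount); focal_px:
    # focal length expressed in pixels.
    dx = focal_px * np.tan(offset_angle[0])   # horizontal jitter, pixels
    dy = focal_px * np.tan(offset_angle[1])   # vertical jitter, pixels
    h, w = frame.shape[:2]
    # Shift the frame opposite to the jitter before display.
    M = np.float32([[1, 0, -dx],
                    [0, 1, -dy]])
    return cv2.warpAffine(frame, M, (w, h))
```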
The embodiments of the present application can thus capture a plurality of images with a camera, acquire the offset angles of the camera during capture, acquire the coordinate positions of the target object in those images, determine the distance between the target object and the camera from the coordinate positions and the offset angles, and finally determine the position information of the target object from that distance. Because the position information is acquired with a single camera, cost is reduced, and because it is derived from the offset angles and the distance between the target object and the camera, it is acquired accurately, improving the accuracy of position information acquisition.
Referring to fig. 12, fig. 12 is a schematic block diagram of a location information acquiring system according to an embodiment of the present application. The position information acquiring system 11 may include a processor 111 and a memory 112, and the processor 111 and the memory 112 are connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
Specifically, the Processor 111 may be a Micro-controller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the Memory 112 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB flash drive, or a removable hard disk, and may be used to store a computer program.
The processor 111 is configured to call a computer program stored in the memory 112, and when executing the computer program, implement the location information obtaining method provided in the embodiment of the present application, for example, the following steps may be executed:
acquiring a plurality of images through a camera, and acquiring offset angles of the camera when the camera acquires the plurality of images; acquiring coordinate positions of the target object in the plurality of images, and determining the distance between the target object and the camera according to the coordinate positions and the offset angle; and determining the position information of the target object according to the distance between the target object and the camera.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the position information obtaining method, and are not described herein again.
Referring to fig. 13, fig. 13 is a schematic block diagram of a remote control terminal according to an embodiment of the present application. The remote control terminal 12 may include a processor 121 and a memory 122, and the processor 121 and the memory 122 are connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
Specifically, the Processor 121 may be a Micro-controller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the Memory 122 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB flash drive, or a removable hard disk, and may be used to store a computer program.
The remote control terminal 12 may further include a display 123 for displaying images, the position information of the target object, and the like; the display 123 may also present other information, such as the moving speed and moving direction of the target object, and the specific content is not limited here.
The processor 121 is configured to call a computer program stored in the memory 122, and when the computer program is executed, implement the location information obtaining method provided in the embodiment of the present application, for example, the following steps may be executed:
acquiring a plurality of images through a camera, and acquiring offset angles of the camera when the camera acquires the plurality of images; acquiring coordinate positions of the target object in the plurality of images, and determining the distance between the target object and the camera according to the coordinate positions and the offset angle; and determining the position information of the target object according to the distance between the target object and the camera. The position information of the target object may also be displayed through the display 123.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the position information obtaining method, and are not described herein again.
Referring to fig. 14, fig. 14 is a schematic block diagram of a pan-tilt camera according to an embodiment of the present application. The pan/tilt/zoom camera 13 may include a processor 131 and a memory 132, and the processor 131 and the memory 132 are connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
Specifically, the Processor 131 may be a Micro-controller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the Memory 132 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB flash drive, or a removable hard disk, and may be used to store a computer program.
The pan-tilt camera 13 may further include a camera 133, a pan-tilt 134, and the like, where the camera 133 captures images containing the target object, and the pan-tilt 134 carries the camera 133 and drives it to a suitable position so that the required images are captured accurately.
The processor 131 is configured to call a computer program stored in the memory 132, and when the computer program is executed, implement the location information obtaining method provided in the embodiment of the present application, for example, the following steps may be executed:
acquiring a plurality of images through a camera, and acquiring offset angles of the camera when the camera acquires the plurality of images; acquiring coordinate positions of the target object in the plurality of images, and determining the distance between the target object and the camera according to the coordinate positions and the offset angle; and determining the position information of the target object according to the distance between the target object and the camera.
In some embodiments, in acquiring the offset angles of the camera when acquiring the plurality of images, the processor 131 further performs: acquiring the times at which the plurality of images were acquired to obtain a plurality of moments; acquiring the angular speeds of the camera corresponding to the plurality of moments; and determining the offset angle of the camera according to the angular speeds corresponding to those moments.
In some embodiments, in acquiring the angular speeds of the camera corresponding to the plurality of moments, the processor 131 further performs: acquiring, through a preset angular speed sensor, the angular speed of the camera in the three axis directions at each moment.
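As a sketch, the offset angle between two capture instants can be approximated by integrating the gyro's angular-speed samples over the interval; the trapezoidal integration below is one plausible realization, not a method prescribed by the application.

```python
import numpy as np

def offset_angle(times, omegas, t0, t1):
    # times: (N,) gyro sample timestamps in seconds, ascending;
    # omegas: (N, 3) angular speeds about the three axes in rad/s;
    # t0, t1: the two image-capture instants.
    mask = (times >= t0) & (times <= t1)
    t, w = times[mask], omegas[mask]
    dt = np.diff(t)[:, None]                 # sample spacing, shape (M-1, 1)
    # Trapezoidal integration of angular speed -> rotation about each axis.
    return np.sum(0.5 * (w[1:] + w[:-1]) * dt, axis=0)   # (3,) radians
```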
In some embodiments, in acquiring the coordinate position of the target object in the plurality of images, the processor 131 further performs: receiving a selection instruction input by a user; and selecting the target object from the plurality of images according to the selection instruction, and determining the coordinate position of the target object.
In some embodiments, when receiving a selection instruction input by a user, selecting an object from the plurality of images according to the selection instruction, and determining a coordinate position of the object, the processor 131 further performs: carrying out object recognition on the plurality of images to obtain an object identifier corresponding to at least one object; displaying an object list containing the at least one object identifier; receiving a selection instruction input by the user based on the object list; selecting an object identifier from the object list according to the selection instruction; and determining the target object from the plurality of images according to the object identifier, and determining the coordinate position of the target object.
In some embodiments, when receiving a selection instruction input by a user, selecting an object from the plurality of images according to the selection instruction, and determining a coordinate position of the object, the processor 131 further performs: selecting one image from the plurality of images for display; receiving a selection instruction generated by a touch operation input by the user on the displayed image; setting the touch center point of the touch operation as the target object according to the selection instruction, or setting the object area where the touch center point is located as the target object; and determining the coordinate position of the target object.
In some embodiments, when receiving a selection instruction input by a user, selecting an object from the plurality of images according to the selection instruction, and determining a coordinate position of the object, the processor 131 further performs: receiving a selection instruction generated by a voice signal or a gesture input by the user; and selecting, from the plurality of images, the target object corresponding to the voice signal or the gesture according to the selection instruction, and determining the coordinate position of the target object.
In some embodiments, in acquiring the coordinate position of the target object in the plurality of images, the processor 131 further performs: extracting the features of the multiple images to obtain target feature information; and identifying the target objects in the multiple images according to the target characteristic information, and determining the coordinate positions of the target objects.
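One common way to realize "identify the target from extracted features" is normalized cross-correlation template matching, sketched below with OpenCV; the reference template of the target is an assumption, since the application leaves the feature extractor unspecified.

```python
import cv2

def locate_target(image, template):
    # template: a reference crop of the target (assumed given).
    scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)     # location of the best match
    h, w = template.shape[:2]
    # Return the coordinate position of the target's center in the image.
    return best[0] + w // 2, best[1] + h // 2
```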
In some embodiments, in determining the distance between the target object and the camera according to the coordinate position and the offset angle, the processor 131 further performs: acquiring parameters of a camera; and determining the distance between the target object and the camera according to the parameters, the coordinate position and the offset angle.
In some embodiments, in acquiring the parameters of the camera, the processor 131 further performs: acquiring the focal length of the camera, the pixel size of the sensor, and the pixel pitch, to obtain the parameters of the camera.
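The exact distance formula is not reproduced in this passage, so the sketch below shows one standard way the named quantities can combine: the pixel coordinate, pixel pitch, and focal length give the bearing of the sight line in each image; the offset angle relates the two optical-axis directions; and a known displacement of the optical center between the captures (an added assumption) supplies the triangulation baseline.

```python
import numpy as np

def ray_angle(u_px, focal_mm, pixel_pitch_mm):
    # Bearing of the sight line relative to the optical axis, from the
    # target's pixel offset u_px (from the principal point), the focal
    # length, and the sensor's pixel pitch.
    return np.arctan2(u_px * pixel_pitch_mm, focal_mm)

def distance_to_target(u1_px, u2_px, offset_angle, baseline_xy,
                       focal_mm, pixel_pitch_mm):
    # Frame 1's optical axis is taken as the +y axis; the camera then
    # moves by baseline_xy = (bx, by) metres and rotates by offset_angle.
    b1 = ray_angle(u1_px, focal_mm, pixel_pitch_mm)
    b2 = offset_angle + ray_angle(u2_px, focal_mm, pixel_pitch_mm)
    # Target P satisfies d1*dir(b1) = baseline + d2*dir(b2), where
    # dir(b) = (sin b, cos b); solve the 2x2 system for the two ranges.
    A = np.array([[np.sin(b1), -np.sin(b2)],
                  [np.cos(b1), -np.cos(b2)]])
    d1, _d2 = np.linalg.solve(A, np.asarray(baseline_xy, dtype=float))
    return d1   # distance from the first camera position to the target
```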
In some embodiments, when determining the position information of the object according to the distance between the object and the camera, the processor 131 further performs: acquiring a physical distance between a camera and a preset positioning device; and determining the position information of the target object according to the physical distance, the position obtained by positioning by the positioning device and the distance between the target object and the camera.
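A sketch of this final composition step: the positioning device's fix, the known physical offset from the device to the camera, and the camera-to-target vector (whose length is the distance just computed) are added together. The common coordinate frame is an assumption; the application fixes no convention.

```python
def target_position(device_fix, device_to_camera, camera_to_target):
    # device_fix: (x, y, z) fix of the preset positioning device, e.g. GNSS;
    # device_to_camera: physical offset from the device to the camera;
    # camera_to_target: vector from the camera to the target object.
    # All three are assumed to be expressed in the same world frame.
    return tuple(f + dc + ct
                 for f, dc, ct in zip(device_fix, device_to_camera,
                                      camera_to_target))

# For example: target_position((10.0, 4.0, 0.0), (0.1, 0.0, 0.2),
#                              (3.0, 1.5, 0.0)) -> (13.1, 5.5, 0.2)
```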
In some embodiments, the position information of the target object includes a plurality of pieces, and after determining the position information of the target object, the processor 131 further performs: forming a moving track of the target object according to the plurality of pieces of position information; acquiring the start time and end time of the moving track, and determining the moving time from them; and determining the moving speed of the target object according to the moving track and the moving time.
In some embodiments, the processor 131 further performs: acquiring the moving direction of the target object.
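A minimal sketch of the speed and direction computation from a recorded trajectory: the average speed is the path length of the moving track divided by the elapsed time (end time minus start time), and the moving direction is taken here from the most recent displacement; the planar (t, x, y) representation is an assumption.

```python
import numpy as np

def speed_and_heading(track):
    # track: chronologically ordered (t_seconds, x, y) samples of the
    # target's position, i.e. its moving track.
    t = np.array([p[0] for p in track])
    xy = np.array([(p[1], p[2]) for p in track])
    steps = np.diff(xy, axis=0)                     # per-sample displacement
    path_len = np.linalg.norm(steps, axis=1).sum()  # length of the track
    elapsed = t[-1] - t[0]                          # end time minus start time
    speed = path_len / elapsed if elapsed > 0 else 0.0
    heading = np.degrees(np.arctan2(steps[-1, 1], steps[-1, 0]))
    return speed, heading   # e.g. metres/second, degrees from the +x axis
```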
In some embodiments, after determining the moving speed of the target object according to the moving track and the moving time, the processor 131 further performs: outputting the moving speed and the moving direction of the target object.
In some embodiments, when outputting the moving speed and the moving direction of the target object, the processor 131 further performs: displaying the moving speed and the moving direction of the target object in the displayed map interface; broadcasting the moving speed and the moving direction of the target object through voice; or displaying the moving speed and the moving direction of the target object in a pop-up window in the display interface; or displaying the image captured by the camera where the target object is located in the display interface, and displaying the moving speed and the moving direction of the target object in the image.
In some embodiments, after determining the position information of the target object, the processor 131 further performs: outputting the position information of the target object.
In some embodiments, when outputting the position information of the target object, the processor 131 further performs: identifying the target object and displaying the position information of the target object in the displayed map interface.
In some embodiments, when outputting the position information of the target object, the processor 131 further performs: broadcasting the position information of the target object through voice; or displaying the position information of the target object in a pop-up window in the display interface; or displaying the image captured by the camera where the target object is located in the display interface, labeling the area where the target object is located in the image, and displaying the position information of the target object in the display interface.
In some embodiments, when labeling the area where the target object is located within the image, the processor 131 further performs: determining the central position of the target object in the image; and drawing a polygon or a circle circumscribing the target object according to the central position, and setting the color of the polygon or circle to a target color that differs from the background color of the target object in the image, thereby obtaining the area where the target object is located.
In some embodiments, when labeling the region in which the target object is located within the image, the processor 131 further performs: extracting the outline of the target object from the image; and marking the area where the target object is located with a preset color according to the outline.
In some embodiments, after acquiring the offset angles of the camera when acquiring the plurality of images, the processor 131 further performs: acquiring the picture jitter amount between two adjacent images according to the offset angle; and sequentially displaying a plurality of images according to the acquired time sequence, and correcting the displayed images according to the picture jitter amount.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the position information obtaining method, and are not described herein again.
Referring to fig. 15, fig. 15 is a schematic block diagram of a movable platform according to an embodiment of the present application. The movable platform 14 may include a processor 141 and a memory 142, the processor 141 and the memory 142 being connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
Specifically, the Processor 141 may be a Micro-controller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the Memory 142 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB flash drive, or a removable hard disk, and may be used to store a computer program.
The movable platform 14 may further include a camera 143 and the like, where the camera 143 captures images containing the target object; the movable platform 14 may further include a pan-tilt for carrying the camera 143, which can drive the camera 143 to a suitable position to accurately capture the required images. The type of the movable platform 14 can be set flexibly according to actual needs; for example, the movable platform 14 may be a mobile terminal, an unmanned aerial vehicle, a robot, or a vehicle, and the vehicle may be an unmanned vehicle.
The processor 141 is configured to call a computer program stored in the memory 142, and when the computer program is executed, implement the location information obtaining method provided in the embodiment of the present application, for example, the following steps may be executed:
acquiring a plurality of images through a camera, and acquiring offset angles of the camera when the camera acquires the plurality of images; acquiring coordinate positions of the target object in the plurality of images, and determining the distance between the target object and the camera according to the coordinate positions and the offset angle; and determining the position information of the target object according to the distance between the target object and the camera.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the position information obtaining method, and are not described herein again.
The embodiment of the present application further provides a computer program, where the computer program includes program instructions, and a processor executes the program instructions to implement the position information obtaining method provided in the embodiment of the present application.
In an embodiment of the present application, a storage medium is provided, where the storage medium is a computer-readable storage medium, and a computer program is stored in the storage medium, where the computer program includes program instructions, and a processor executes the program instructions, so as to implement the position information obtaining method provided in the embodiment of the present application.
The storage medium may be an internal storage unit of the position information acquiring system or the remote control terminal described in any of the foregoing embodiments, for example a hard disk or memory of the remote control terminal. The storage medium may also be an external storage device of the remote control terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the remote control terminal.
Since the computer program stored in the storage medium can execute any of the position information acquiring methods provided in the embodiments of the present application, it can achieve the beneficial effects achievable by any of those methods; for details, reference may be made to the foregoing embodiments, which are not repeated here.
It is to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (27)

1. A position information acquisition method, comprising:
acquiring a plurality of images through a camera;
acquiring offset angles of the camera when the plurality of images are acquired;
acquiring coordinate positions of a target object in the plurality of images;
determining a distance between the target object and the camera according to the coordinate position and the offset angle;
and determining the position information of the target object according to the distance between the target object and the camera.
2. The position information acquisition method according to claim 1, wherein the acquiring offset angles of the camera when acquiring the plurality of images includes:
acquiring the time of acquiring the plurality of images to obtain a plurality of times;
acquiring angular speeds corresponding to the plurality of moments of the camera;
and determining the offset angle of the camera according to the angular speeds corresponding to the moments.
3. The position information acquisition method according to claim 2, wherein said acquiring angular velocities corresponding to the plurality of time instants of the camera comprises:
acquiring, through a preset angular speed sensor, the angular speed of the camera in the three axis directions at each moment.
4. The position information acquiring method according to claim 1, wherein the acquiring the coordinate position of the object in the plurality of images includes:
receiving a selection instruction input by a user;
and selecting a target object from the plurality of images according to the selection instruction, and determining the coordinate position of the target object.
5. The method according to claim 4, wherein the receiving a selection instruction input by a user, selecting an object from the plurality of images according to the selection instruction, and determining the coordinate position of the object includes:
carrying out object recognition on the multiple images to obtain an object identifier corresponding to at least one object;
displaying an object list containing at least one object identifier;
receiving a selection instruction input by a user based on the object list;
selecting an object identifier from the object list according to the selection instruction;
and determining a target object from the plurality of images according to the object identification, and determining the coordinate position of the target object.
6. The method according to claim 4, wherein the receiving a selection instruction input by a user, selecting an object from the plurality of images according to the selection instruction, and determining the coordinate position of the object includes:
selecting one image from the plurality of images to display;
receiving a selection instruction generated by a touch operation input by the user on the displayed image;
setting the touch center point of the touch operation as a target object according to the selection instruction, or setting the object area where the touch center point is located as the target object;
and determining the coordinate position of the target object.
7. The method according to claim 4, wherein the receiving a selection instruction input by a user, selecting an object from the plurality of images according to the selection instruction, and determining the coordinate position of the object includes:
receiving a selection instruction generated by a voice signal or a gesture input by the user;
and selecting a target object corresponding to the voice signal or the gesture from the plurality of images according to the selection instruction, and determining the coordinate position of the target object.
8. The position information acquiring method according to claim 1, wherein the acquiring the coordinate position of the object in the plurality of images includes:
extracting the features of the multiple images to obtain target feature information;
and identifying the target objects in the plurality of images according to the target characteristic information, and determining the coordinate positions of the target objects.
9. The position information acquisition method according to claim 1, wherein the determining a distance between the target object and the camera from the coordinate position and the offset angle includes:
acquiring parameters of the camera;
determining a distance between the target object and the camera according to the parameter, the coordinate position, and the offset angle.
10. The position information acquisition method according to claim 9, wherein the acquiring the parameter of the camera includes:
acquiring the focal length of the camera, the pixel size of the sensor, and the pixel pitch, to obtain the parameters of the camera.
11. The method according to claim 1, wherein the determining the position information of the object based on the distance between the object and the camera includes:
acquiring a physical distance between the camera and a preset positioning device;
and determining the position information of the target object according to the physical distance, the position obtained by positioning by the positioning device and the distance between the target object and the camera.
12. The method according to any one of claims 1 to 11, wherein the position information of the target object includes a plurality of pieces, and after the position information of the target object is determined, the method further includes:
forming a moving track of the target object according to the plurality of position information of the target object;
acquiring the starting time and the ending time of the moving track, and determining the moving time according to the starting time and the ending time;
and determining the moving speed of the target object according to the moving track and the moving time.
13. The position information acquisition method according to claim 12, characterized by further comprising:
acquiring the moving direction of the target object.
14. The position information acquisition method according to claim 13, wherein after determining the moving speed of the target object based on the moving trajectory and the moving time, the position information acquisition method further comprises:
outputting the moving speed and the moving direction of the target object.
15. The position information acquisition method according to claim 14, wherein the outputting the moving speed and the moving direction of the target object includes:
displaying the moving speed and the moving direction of the target object in a displayed map interface;
broadcasting the moving speed and the moving direction of the target object through voice; or,
displaying the moving speed and the moving direction of the target object in a pop-up window in a display interface; or,
and displaying the image acquired by the camera where the target object is located in a display interface, and displaying the moving speed and the moving direction of the target object in the image.
16. The position information acquisition method according to any one of claims 1 to 11, wherein after the position information of the target object is determined, the position information acquisition method further comprises:
outputting the position information of the target object.
17. The position information acquisition method according to claim 16, wherein the outputting the position information of the target object includes:
identifying the target object and displaying the position information of the target object in the displayed map interface.
18. The position information acquisition method according to claim 16, wherein the outputting the position information of the target object includes:
broadcasting the position information of the target object through voice; or,
displaying the position information of the target object in a pop-up window in a display interface; or,
displaying the image collected by the camera where the target object is located in a display interface, marking the area where the target object is located in the image, and displaying the position information of the target object in the display interface.
19. The position information acquisition method according to claim 18, wherein the labeling of the region in which the target object is located within the image includes:
determining a center position of the target object within the image;
and drawing a polygon or a circle circumscribing the target object according to the central position, and setting a target color different from the background color of the target object in the image as the color of the polygon or circle, thereby obtaining the area where the target object is located.
20. The position information acquisition method according to claim 18, wherein the labeling of the region in which the target object is located within the image includes:
extracting the contour of the target object from the image;
and marking the area where the target object is located with a preset color according to the outline.
21. The position information acquisition method according to any one of claims 1 to 11, wherein after the acquiring the offset angles of the camera at the time of acquiring the plurality of images, the position information acquisition method further comprises:
acquiring the picture jitter amount between two adjacent images according to the offset angle;
and sequentially displaying the plurality of images according to the acquired time sequence, and correcting the displayed images according to the picture jitter amount.
22. A position information acquisition system characterized by comprising:
a memory for storing a computer program;
a processor for invoking a computer program in said memory to perform the location information acquisition method of any one of claims 1 to 21.
23. A remote control terminal, comprising:
a display for displaying an image and position information of the target object;
a memory for storing a computer program;
a processor for invoking a computer program in said memory to perform the location information acquisition method of any one of claims 1 to 21.
24. A pan-tilt camera, comprising:
a camera for capturing an image;
the holder is used for carrying the camera;
a memory for storing a computer program;
a processor for invoking a computer program in said memory to perform the location information acquisition method of any one of claims 1 to 21.
25. A movable platform, comprising:
a camera for acquiring a plurality of images;
a memory for storing a computer program;
a processor for invoking a computer program in said memory to perform the location information acquisition method of any one of claims 1 to 21.
26. The movable platform of claim 25, wherein the movable platform is a mobile terminal, a drone, a robot, or a vehicle.
27. A computer-readable storage medium for storing a computer program which is loaded by a processor to execute the position information acquisition method according to any one of claims 1 to 21.
CN202080005236.8A 2020-05-06 2020-05-06 Position information acquisition method, device and storage medium Pending CN112771576A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/088843 WO2021223124A1 (en) 2020-05-06 2020-05-06 Position information obtaining method and device, and storage medium

Publications (1)

Publication Number Publication Date
CN112771576A true CN112771576A (en) 2021-05-07

Family

ID=75699519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080005236.8A Pending CN112771576A (en) 2020-05-06 2020-05-06 Position information acquisition method, device and storage medium

Country Status (2)

Country Link
CN (1) CN112771576A (en)
WO (1) WO2021223124A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113192139A (en) * 2021-05-14 2021-07-30 浙江商汤科技开发有限公司 Positioning method and device, electronic equipment and storage medium
WO2023060569A1 (en) * 2021-10-15 2023-04-20 深圳市大疆创新科技有限公司 Photographing control method, photographing control apparatus, and movable platform

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327058B (en) * 2021-12-24 2023-11-10 海信集团控股股份有限公司 Display apparatus
CN114564014A (en) * 2022-02-23 2022-05-31 杭州萤石软件有限公司 Object information determination method, mobile robot system, and electronic device
CN114627186A (en) * 2022-03-16 2022-06-14 杭州浮点智能信息技术有限公司 Distance measuring method and distance measuring device
CN115291624B (en) * 2022-07-11 2023-11-28 广州中科云图智能科技有限公司 Unmanned aerial vehicle positioning landing method, storage medium and computer equipment
CN115359447B (en) * 2022-08-01 2023-06-20 浙江有色地球物理技术应用研究院有限公司 Highway tunnel remote monitoring system
CN115409888B (en) * 2022-08-22 2023-11-17 北京御航智能科技有限公司 Intelligent positioning method and device for pole tower in inspection of distribution network unmanned aerial vehicle
CN117308967B (en) * 2023-11-30 2024-02-02 中船(北京)智能装备科技有限公司 Method, device and equipment for determining target object position information

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463899A (en) * 2014-12-31 2015-03-25 北京格灵深瞳信息技术有限公司 Target object detecting and monitoring method and device
CN105376484A (en) * 2015-11-04 2016-03-02 深圳市金立通信设备有限公司 Image processing method and terminal
CN107025666A (en) * 2017-03-09 2017-08-08 广东欧珀移动通信有限公司 Depth detection method and device and electronic installation based on single camera
WO2018056802A1 (en) * 2016-09-21 2018-03-29 Universiti Putra Malaysia A method for estimating three-dimensional depth value from two-dimensional images
CN109664301A (en) * 2019-01-17 2019-04-23 中国石油大学(北京) Method for inspecting, device, equipment and computer readable storage medium
CN110849285A (en) * 2019-11-20 2020-02-28 上海交通大学 Welding spot depth measuring method, system and medium based on monocular camera

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130201326A1 (en) * 2012-01-23 2013-08-08 Hiroshi Tsujii Single camera image processing apparatus, method, and program
JP6073474B2 (en) * 2013-06-28 2017-02-01 シャープ株式会社 Position detection device
CN105588543B (en) * 2014-10-22 2019-10-18 中兴通讯股份有限公司 A kind of method, apparatus and positioning system for realizing positioning based on camera
CN105959529B (en) * 2016-04-22 2018-12-21 首都师范大学 It is a kind of single as method for self-locating and system based on panorama camera
CN106570903B (en) * 2016-10-13 2019-06-18 华南理工大学 A kind of visual identity and localization method based on RGB-D camera
CN108663043B (en) * 2018-05-16 2020-01-10 北京航空航天大学 Single-camera-assisted distributed POS main node and sub node relative pose measurement method
CN110929567B (en) * 2019-10-17 2022-09-27 北京全路通信信号研究设计院集团有限公司 Monocular camera monitoring scene-based target position and speed measuring method and system


Also Published As

Publication number Publication date
WO2021223124A1 (en) 2021-11-11

Similar Documents

Publication Publication Date Title
WO2021223124A1 (en) Position information obtaining method and device, and storage medium
CN112567201B (en) Distance measuring method and device
EP3407294B1 (en) Information processing method, device, and terminal
CN108702444B (en) Image processing method, unmanned aerial vehicle and system
CN108419446A (en) System and method for the sampling of laser depth map
US20180190014A1 (en) Collaborative multi sensor system for site exploitation
CN110147094A (en) A kind of vehicle positioning method and car-mounted terminal based on vehicle-mounted viewing system
CN112955711A (en) Position information determining method, apparatus and storage medium
CN107665505B (en) Method and device for realizing augmented reality based on plane detection
CN105955308A (en) Aircraft control method and device
CN108475442A (en) Augmented reality method, processor and unmanned plane for unmanned plane
CN112729327A (en) Navigation method, navigation device, computer equipment and storage medium
JP2020534198A (en) Control methods, equipment and systems for mobile objects
CN112348886B (en) Visual positioning method, terminal and server
CN106289180A (en) The computational methods of movement locus and device, terminal
CN113228103A (en) Target tracking method, device, unmanned aerial vehicle, system and readable storage medium
CN112270702A (en) Volume measurement method and device, computer readable medium and electronic equipment
CN113167577A (en) Surveying method for a movable platform, movable platform and storage medium
CN117036989A (en) Miniature unmanned aerial vehicle target recognition and tracking control method based on computer vision
CN111699453A (en) Control method, device and equipment of movable platform and storage medium
WO2020113417A1 (en) Three-dimensional reconstruction method and system for target scene, and unmanned aerial vehicle
CN116203976A (en) Indoor inspection method and device for transformer substation, unmanned aerial vehicle and storage medium
CN113311855B (en) Aircraft monitoring method and device, computer storage medium and computer device
WO2021138856A1 (en) Camera control method, device, and computer readable storage medium
CN113807282A (en) Data processing method and device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination