WO2021223124A1 - Position information acquisition method, device, and storage medium - Google Patents

Position information acquisition method, device, and storage medium

Info

Publication number
WO2021223124A1
WO2021223124A1 (PCT/CN2020/088843; CN2020088843W)
Authority
WO
WIPO (PCT)
Prior art keywords
target
camera
acquiring
position information
location information
Prior art date
Application number
PCT/CN2020/088843
Other languages
English (en)
French (fr)
Inventor
Zhou Qi (周琦)
Original Assignee
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority to PCT/CN2020/088843 (WO2021223124A1)
Priority to CN202080005236.8A (CN112771576A)
Publication of WO2021223124A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]

Definitions

  • This application relates to the field of data processing technology, and in particular to a method, device and storage medium for obtaining location information.
  • Existing target positioning methods generally collect images through at least two cameras pre-installed on a drone, and determine the target position according to the pixel position relationships across the multiple images. Taking dual cameras as an example, a certain baseline distance is required between the two cameras, so the drone cannot be made small; when the drone is small, dual cameras cannot be mounted, making drones that achieve dual-camera target positioning costly. Moreover, the relative position between the dual cameras is fixed: it must be calibrated before leaving the factory and cannot be changed afterwards.
  • If the relative position changes, the dual cameras need to be re-calibrated, and if the binocular position changes too much, it may not be correctable, which brings inconvenience to use and lowers the accuracy of the obtained target position.
  • In addition, the low resolution and short detection distance of binocular cameras reduce the accuracy of acquiring the target position.
  • the embodiments of the present application provide a method, device, and storage medium for acquiring location information, which can improve the accuracy of acquiring location information.
  • An embodiment of the present application provides a method for acquiring location information, including: collecting multiple images through a camera; acquiring the offset angle of the camera when the multiple images are collected, and the coordinate position of the target object in the multiple images; determining the distance between the target object and the camera according to the coordinate position and the offset angle; and determining the location information of the target according to the distance between the target and the camera.
  • an embodiment of the present application also provides a location information acquisition system, including:
  • a memory, used to store a computer program; and
  • the processor is configured to call a computer program in the memory to execute any location information acquisition method provided in the embodiments of the present application.
  • an embodiment of the present application also provides a remote control terminal, including:
  • the processor is configured to call a computer program in the memory to execute any location information acquisition method provided in the embodiments of the present application.
  • an embodiment of the present application also provides a pan-tilt camera, including:
  • a memory, used to store a computer program; and
  • the processor is configured to call a computer program in the memory to execute any location information acquisition method provided in the embodiments of the present application.
  • an embodiment of the present application also provides a movable platform, including:
  • a camera, used to collect multiple images;
  • a memory, used to store a computer program; and
  • the processor is configured to call a computer program in the memory to execute any location information acquisition method provided in the embodiments of the present application.
  • In addition, an embodiment of the present application also provides a storage medium, where the storage medium is used to store a computer program, and the computer program is loaded by a processor to execute any of the location information acquisition methods provided in the embodiments of the present application.
  • an embodiment of the present application also provides a computer program that is loaded by a processor to execute any location information acquisition method provided in the embodiments of the present application.
  • In the embodiments of the present application, multiple images can be collected by a camera; the offset angle of the camera when the multiple images are collected and the coordinate position of the target object in the multiple images can be obtained; and the distance between the target object and the camera can then be determined according to the coordinate position and the offset angle.
  • the position information of the target can be determined according to the distance between the target and the camera.
  • This solution uses a single camera to obtain the location information of the target, which reduces cost, and it accurately obtains the location information of the target based on the offset angle and the distance between the target and the camera, which improves the accuracy of location information acquisition.
  • FIG. 1 is a schematic diagram of an application scenario of the location information acquisition method provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a method for acquiring location information provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of selecting a target object from an object list provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of selecting a target from an image provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of automatic identification of a target provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of determining the distance between the target and the camera according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of determining location information of a target provided by an embodiment of the present application.
  • FIG. 8 is another schematic diagram of determining location information of a target provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of displaying position information of a target in a pop-up window provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of marking a target and displaying location information of the target provided by an embodiment of the present application.
  • FIG. 11 is another schematic diagram of marking a target and displaying position information of the target provided by an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a location information acquisition system provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a remote control terminal provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a pan-tilt camera provided by an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a movable platform provided by an embodiment of the present application.
  • The embodiments of the present application provide a method, device, and storage medium for acquiring position information, which are used to determine the distance between the target and the camera based on the offset angle when multiple images are collected and the coordinate position of the target, and to determine the location information of the target according to the distance between the target and the camera, which improves the accuracy of obtaining the location information of the target.
  • the storage medium is a computer-readable storage medium
  • The device may include a position information acquisition system, a remote control terminal, a pan/tilt camera, a movable platform, and the like.
  • the location information acquisition system may include a camera
  • the location information acquisition system may include a drone equipped with a camera and a remote control terminal for controlling the drone.
  • the remote control terminal may be a remote control device provided with a display and control buttons, etc., used to establish a communication connection with a movable platform and to control the movable platform.
  • The display may be used to display images and to display the location information of the target object, etc.
  • the remote control terminal can also be a third-party mobile phone or tablet computer, etc., which establish a communication connection with the movable platform through a preset protocol, and control the movable platform.
  • the pan/tilt camera may include a camera, a pan/tilt, etc.
  • the pan/tilt may include a pivot arm, etc.
  • The pivot arm can drive the camera to move; for example, the pivot arm controls the camera to move to a suitable position, so that images containing the target can be captured by the camera.
  • The camera may be a monocular camera, and the type of the camera may be an ultra-wide-angle camera, a wide-angle camera, a telephoto (i.e., zoom) camera, an infrared camera, a far-infrared camera, an ultraviolet camera, a time-of-flight (TOF) depth camera, etc.
  • the movable platform may include a pan/tilt, a platform body, a camera, etc.
  • the platform body may be used to mount a pan/tilt, and the pan/tilt may mount a camera, so that the pan/tilt can drive the camera to move.
  • the type of the movable platform can be flexibly set according to actual needs.
  • the movable platform may be a mobile terminal, drone, robot, or vehicle, etc., and the vehicle may be an unmanned vehicle.
  • the drone may include a camera, a distance measuring device, an obstacle sensing device, and so on.
  • the unmanned aerial vehicle may also include a pan/tilt for carrying a camera, and the pan/tilt may drive the camera to a suitable position so that the required image can be collected by the camera.
  • the drone can include a rotary-wing drone (such as a quad-rotor drone, a hexa-rotor drone, or an eight-rotor drone, etc.), a fixed-wing drone, or a rotary-wing and fixed-wing drone The combination of is not limited here.
  • the mobile platform can also be provided with positioning devices such as the Global Positioning System (GPS).
  • The positional relationship between the camera and the positioning device can be such that the two are on the same plane; within this plane, the camera and the positioning device can be on the same straight line, or form a preset angle, etc. Of course, the camera and the positioning device can also be located on different planes, in which case the positional relationship between the two can be converted.
  • FIG. 1 is a schematic diagram of a scene for implementing the location information acquisition method provided by the embodiment of the present application.
  • The remote control terminal 100 can be used to control the UAV 200 to fly or perform corresponding actions, and to obtain corresponding motion information from the UAV 200.
  • the motion information may include flight direction, flight attitude, flight height, flight speed and position information, etc.
  • The UAV 200 sends the obtained motion information to the remote control terminal 100, and the remote control terminal 100 analyzes and displays it.
  • the remote control terminal 100 may also receive control instructions input by the user, and perform corresponding control on the distance measuring device or camera on the drone 200 based on the control instructions.
  • For example, the remote control terminal 100 may receive a shooting instruction or a distance measurement instruction input by a user and send it to the drone 200, and the drone 200 can control the camera to shoot images according to the shooting instruction, or control the distance measuring device to measure the distance to the target according to the distance measurement instruction.
  • The obstacle sensing device of the UAV 200 can obtain sensing signals around the UAV 200; by analyzing the sensing signals, obstacle information can be obtained and displayed, so that the user can learn of the obstacles sensed by the drone 200, making it convenient for the user to control the drone 200 to avoid them.
  • the display may be a liquid crystal display, or a touch screen, etc.
  • the obstacle sensing device may include at least one sensor for acquiring sensing signals from the drone 200 in at least one direction.
  • the obstacle sensing device may include a sensor for detecting obstacles in front of the drone 200.
  • the obstacle sensing device may include two sensors for detecting obstacles in front of and behind the drone 200, respectively.
  • the obstacle sensing device may include four sensors for detecting obstacles in the front, rear, left, and right of the drone 200, respectively.
  • the obstacle sensing device may include five sensors, which are used to detect obstacles in the front, rear, left, right, and above of the drone 200, respectively.
  • the obstacle sensing device may include six sensors for detecting obstacles in front, rear, left, right, above, and below the drone 200, respectively.
  • the sensors in the obstacle sensing device can be implemented separately or integrated.
  • the detection direction of the sensor can be set according to specific needs to detect obstacles in various directions or combinations of directions, and is not limited to the above-mentioned forms disclosed in this application.
  • the drone 200 may have one or more propulsion units to support the drone 200 to fly in the air.
  • The one or more propulsion units can enable the drone 200 to move with one or more, two or more, three or more, four or more, five or more, or six or more degrees of freedom.
  • the drone 200 can rotate around one, two, three, or more rotation axes.
  • the rotation axes may be perpendicular to each other.
  • the rotation axes can be maintained perpendicular to each other during the entire flight of the drone 200.
  • the rotation axis may include a pitch axis, a roll axis, and/or a yaw axis.
  • the drone 200 can move in one or more dimensions.
  • the drone 200 can move upward due to the lifting force generated by one or more rotors.
  • the drone 200 can move along the Z axis (which can be upward relative to the drone 200), the X axis, and/or the Y axis (which can be lateral).
  • the drone 200 can move along one, two, or three axes that are perpendicular to each other.
  • the drone 200 may be a rotary wing aircraft.
  • the drone 200 may be a multi-rotor drone that may include multiple rotors.
  • the multiple rotors can rotate to generate lifting force for the drone 200.
  • the rotor may be a propulsion unit, which allows the drone 200 to move freely in the air.
  • The rotors can rotate at the same rate and/or generate the same amount of lift or thrust.
  • The rotors can also rotate at different rates, generating different amounts of lift or thrust and/or allowing the drone 200 to rotate.
  • one, two, three, four, five, six, seven, eight, nine, ten or more rotors may be provided on the drone 200.
  • These rotors can be arranged such that their rotation axes are parallel to each other.
  • the rotation axis of the rotors can be at any angle with respect to each other, which can affect the movement of the drone 200.
  • the drone 200 may have multiple rotors.
  • the rotor may be connected to the main body of the drone 200, and the main body may include a control unit, an inertial measurement unit (IMU), a processor, a battery, a power supply, and/or other sensors.
  • the rotor may be connected to the body by one or more arms or extensions branching from the central part of the body.
  • one or more arms may extend radially from the central body of the drone 200, and may have a rotor at or near the end of the arm.
  • The devices in FIG. 1 do not constitute a limitation on the application scenarios of the location information acquisition method.
  • FIG. 2 is a schematic flowchart of a method for acquiring location information according to an embodiment of the present application.
  • the location information acquisition method can be applied to devices such as a location information acquisition system, a remote control terminal, a pan/tilt camera, or a movable platform, to accurately acquire the location information of a target object.
  • the detailed description will be given below.
  • the method for acquiring location information may include step S101 to step S105, etc., which may be specifically as follows:
  • S101: Collect multiple images through a camera. Multiple images can be continuously collected by the camera, or images can be collected at preset intervals to obtain multiple images; in that case, the time corresponding to each collected image (that is, the collection time point) is different.
  • For example, an image acquisition instruction may be generated and sent to the camera; based on the image acquisition instruction, the camera is controlled to acquire multiple images, and the multiple images returned by the camera are received.
  • S102: Acquire the offset angle of the camera when the multiple images are collected. The offset angle may be caused by the camera shaking during the process of capturing the images, or may be generated by the camera moving during the process of capturing the images.
  • In some embodiments, acquiring the offset angle of the camera when acquiring multiple images may include: acquiring the times at which the multiple images are acquired to obtain multiple moments; acquiring the angular velocity corresponding to the camera at the multiple moments; and determining the offset angle of the camera according to the angular velocities.
  • In the process of capturing images by the camera, the time corresponding to each image acquisition can be acquired. For example, the times at which the camera acquired the multiple images can be received to obtain multiple moments, and then the angular velocity of the camera at each moment can be acquired.
  • the angular velocity detected by the angular velocity sensor can be used as the angular velocity of the camera.
  • acquiring the angular velocity corresponding to the camera at multiple moments may include: acquiring the angular velocity corresponding to the camera in the three-axis direction at each moment through a preset angular velocity sensor.
  • the angular velocity corresponding to each moment at this time includes the angular velocities corresponding to the three-axis directions of the X-axis, Y-axis, and Z-axis at that moment.
  • a set of angular velocity 1 is obtained at time a
  • a set of angular velocity 2 is obtained at time b
  • a set of angular velocity 3 is obtained at time c.
  • the angular velocity sensor may be an inertial measurement unit (IMU).
  • The offset angle of the camera can be determined according to the angular velocities corresponding to the multiple moments. For example, the set of angular velocities 1 obtained at time a, the set of angular velocities 2 obtained at time b, and the set of angular velocities 3 obtained at time c can be integrated to obtain an angle value, which is used as the offset angle of the camera.
  • the offset angle may include the offset angle in the pitch direction (that is, the offset angle in the X-axis direction), the offset angle in the yaw direction (that is, the offset angle in the Y-axis direction), and The offset angle in the roll direction (that is, the offset angle in the Z-axis direction), etc.
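  • As an illustrative sketch only (not part of the application), the integration described above can be expressed in code as follows, assuming timestamped three-axis angular velocities from the IMU; the function name and the trapezoidal integration scheme are assumptions.

```python
import numpy as np

def camera_offset_angle(timestamps, angular_velocities):
    """Integrate three-axis angular velocities (e.g., from an IMU) over the
    image-capture moments to estimate the camera's offset angle per axis."""
    t = np.asarray(timestamps, dtype=float)            # capture moments (s)
    w = np.asarray(angular_velocities, dtype=float)    # shape (N, 3), rad/s
    # Trapezoidal integration of angular velocity yields the angle swept
    # about the X (pitch), Y (yaw), and Z (roll) axes between the first
    # and last capture moments.
    return np.trapz(w, t, axis=0)

# Example: three captures at times a, b, c with their gyro readings.
times = [0.00, 0.05, 0.10]
omegas = [[0.02, 0.10, 0.00],
          [0.03, 0.12, 0.01],
          [0.02, 0.11, 0.00]]
pitch_off, yaw_off, roll_off = camera_offset_angle(times, omegas)
```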
  • S103: Acquire the coordinate position of the target object in the multiple images. The multiple images collected may include a target object, other objects, and so on.
  • the target can be flexibly set according to actual needs, and the specific content is not limited here.
  • The target can be a complete object or a point; for example, the target can be any object such as a license plate, a person, an animal, a plant, a vehicle, a building, or a star, or even a non-solid or non-three-dimensional target such as a flame, a cloud, or a body of water, as long as the target can be imaged through one or more pixels of the camera's photosensitive element.
  • each image collected may include the target object.
  • The method for determining the target object can be flexibly set according to actual needs. For example, a selection instruction input by the user can be received to determine the target object, or the target object can be automatically identified and selected, and so on. For example, the picture collected by the camera can be obtained and the center point of the picture set as the target object, or the object area at the center of the picture set as the target object.
  • acquiring the coordinate position of the target in the multiple images may include: receiving a selection instruction input by the user; selecting the target from the multiple images according to the selection instruction, and determining the coordinate position of the target.
  • The selection instruction can be generated in various ways, which can be flexibly set according to actual needs. For example, it can be generated by a voice signal, whose language can be Chinese or English, etc., and whose content can be "select object A as the target" or "acquire the position of object A", etc. It can also be generated by a gesture: for example, a fist gesture can correspond to selecting the license plate as the target, and a scissors gesture can correspond to selecting the scissors as the target. It can also be generated by a touch operation (such as a click operation) or by fingerprint information: for example, the fingerprint of user A's right thumb can correspond to selecting the person as the target, and the fingerprint of user B's right index finger can correspond to selecting the dog as the target. The target can then be selected from the multiple images according to the selection instruction, and the coordinate position of the target in each image determined.
  • the coordinate position can be the pixel coordinates of the target on the image.
  • the center coordinates of the target object may be used as the coordinate position of the target object, for example, the center coordinates of the vehicle may be used as the coordinate position of the vehicle.
  • In some embodiments, receiving a selection instruction input by a user, selecting a target from the multiple images according to the selection instruction, and determining the coordinate position of the target may include: performing object recognition on the multiple images to obtain an object identifier corresponding to at least one object; displaying an object list containing the at least one object identifier; receiving a selection instruction input by the user based on the object list; selecting an object identifier from the object list according to the selection instruction; and determining the target from the multiple images according to the object identifier, and determining the coordinate position of the target.
  • The image currently collected by the camera can be displayed on the display screen (i.e., the display) for the user to view.
  • The selection instruction input by the user can be received in the interface displaying the image, in order to determine the target according to the selection instruction. For example, as shown in FIG. 3, after the camera collects an image, all objects in the image can be identified one by one through a preset recognition model, a list of the recognized objects can be generated, and the object list can be displayed.
  • The object list includes object identifiers corresponding to one or more objects; the object identifiers may be composed of numbers, letters, and/or Chinese characters, etc., and may be the names or numbers of the objects.
  • Take the case where the object identifier is the name of the object as an example.
  • For instance, vehicles, license plates, windows, wheels, lights, water, trees, roads, etc. can be identified.
  • An operation of the user clicking or pressing the position of the license plate in the object list can be received (that is, a selection instruction is generated); then, based on the position of the click or press, it is determined that the user has selected the license plate, the area where the license plate is located in the image can be extracted as the target, and the position of the license plate in the image can be detected to obtain the coordinate position of the target.
  • the preset recognition model can be flexibly set according to actual needs.
  • For example, the recognition model can be a target detection algorithm such as SSD or YOLO, or a convolutional neural network such as R-CNN or Faster R-CNN.
  • the preset recognition model is a trained recognition model. For example, multiple sample images containing different types of objects can be obtained, and the recognition model can be trained based on the multiple sample images to obtain the trained recognition model.
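  • For illustration, the sketch below shows how a trained detector's output could be turned into the object list described above and how the user's choice maps back to an image region. The detector itself is abstracted away: the (label, box) pairs stand in for the output of a model such as SSD or YOLO, and all names here are hypothetical.

```python
from typing import List, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) in pixels

def build_object_list(detections: List[Tuple[str, Box]]) -> List[str]:
    """Turn raw detections into the object-identifier list shown to the user."""
    seen, identifiers = set(), []
    for label, _box in detections:
        if label not in seen:          # one identifier per distinct label
            seen.add(label)
            identifiers.append(label)
    return identifiers

def select_target(detections: List[Tuple[str, Box]],
                  chosen: str) -> Optional[Box]:
    """Return the image area of the object the user selected from the list."""
    for label, box in detections:
        if label == chosen:
            return box                 # the area extracted as the target
    return None

detections = [("vehicle", (40, 60, 300, 180)), ("license plate", (120, 200, 90, 30))]
print(build_object_list(detections))               # ['vehicle', 'license plate']
print(select_target(detections, "license plate"))  # (120, 200, 90, 30)
```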
  • In some embodiments, receiving a selection instruction input by a user, selecting a target from the multiple images according to the selection instruction, and determining the coordinate position of the target may include: selecting an image from the multiple images for display; receiving a selection instruction generated by a touch operation input by the user on the displayed image; setting the touch center point of the touch operation as the target according to the selection instruction, or setting the object area where the touch center point is located as the target; and determining the coordinate position of the target.
  • the image collected by the camera can be displayed.
  • The selection instruction input by the user can be received on the displayed image to determine the target object. For example, as shown in FIG. 4, if there is a vehicle in the image captured by the camera, the user's finger clicking or pressing the license plate on the vehicle can be received (that is, a selection instruction is generated). At this time, the touch center point of the click or press (that is, the touch operation) can be set as the target object, or the object area where the touch center point is located can be set as the target object.
  • In some embodiments, receiving a selection instruction input by a user, selecting a target from the multiple images according to the selection instruction, and determining the coordinate position of the target may include: receiving a selection instruction generated by a voice signal or a gesture input by the user; and selecting the target corresponding to the voice signal or gesture from the multiple images according to the selection instruction, and determining the coordinate position of the target.
  • the user can input a voice signal related to the coordinate position of the target object
  • the type of the voice signal can be Chinese or English, etc.
  • The content of the voice signal can be "select object A as the target" or "acquire the position information of object A", etc.
  • After the voice signal is received, a selection instruction for the target object can be generated; for example, according to the selection instruction, object A is selected as the target object from the multiple images, and the coordinate position of the target object in the images is detected.
  • The mapping relationships between different gestures and targets can be preset.
  • For example, a fist gesture can correspond to selecting the license plate as the target object, a scissors gesture can correspond to selecting the scissors as the target object, and an OK gesture can correspond to selecting the vehicle as the target object; the user can input the gesture corresponding to the desired target.
  • For example, if the user inputs the gesture corresponding to object B, object B can be selected as the target object from the multiple images according to the selection instruction, and the coordinate position of the target object in the images can be detected.
  • In some embodiments, obtaining the coordinate position of the target in the multiple images may include: extracting features of the multiple images to obtain target feature information; and identifying the target in the multiple images according to the target feature information, and determining the coordinate position of the target.
  • the target can be automatically detected. For example, feature extraction can be performed on each image collected by the camera to obtain target feature information.
  • The target feature information can be flexibly set according to actual needs; for example, it can be the feature information of a license plate, of a certain animal, of a certain plant, or of a person, etc.
  • According to the target feature information, the object in the image collected by the camera and the object area where it is located can be identified. For example, for a drone equipped with a gimbal camera, after the drone takes off, the gimbal camera can automatically perform a picture inspection; when the camera collects an image of an illegally parked vehicle, it can automatically identify the location of the license plate of the illegally parked vehicle in the image.
  • In some embodiments, a text input box can be displayed for the user to enter the target to be detected. For example, the "license plate" entered by the user in the text input box can be received; based on this input, it can be determined that the user has selected the license plate; the license plate in the image can then be recognized through the preset recognition model, the area where the license plate is located extracted as the target object, and the coordinate position of the target object in the image detected.
  • S104: Determine the distance between the target object and the camera according to the coordinate position and the offset angle.
  • The distance between the target and the camera is the depth information between the target and the camera. Since there may be a certain amount of jitter in the process of acquiring the multiple images, depth information can be accurately obtained through a single camera based on the amount of jitter. Every two images can be used as a group to obtain depth information. To improve the accuracy of depth information acquisition, it can be ensured that the focal length of the camera is the same for the images within each group, so as to reduce the error caused by differences in zoom or focus within a group of images, thereby achieving more accurate acquisition of depth information.
  • determining the distance between the target object and the camera according to the coordinate position and the offset angle may include: obtaining the parameters of the camera; and determining the distance between the target object and the camera according to the parameters, the coordinate position, and the offset angle. distance.
  • A trigonometric function operation can be used to determine the distance between the target and the camera. Specifically, the focal length of the camera, and the pixel size and pixel pitch of the sensor, etc., can be obtained as the parameters of the camera.
  • The distance between the target and the camera can then be determined according to the parameters of the camera, the coordinate position of the target, and the offset angle of the camera. The distance can be determined from the coordinate positions of the target on any two images alone, or the coordinate positions of the target on multiple images can be averaged to obtain a target coordinate position, and the distance between the target and the camera determined based on the target coordinate position.
  • For example, for two images the distance can be obtained from the disparity relation z = B*f/(x1 - x2), where B represents the distance difference (baseline) between the camera positions when the two images were collected; z represents the distance between the target and the camera, which can be a pixel distance; f represents the focal length of the camera; and x1 and x2 represent the abscissas of the target in the two images.
  • The acquisition method of B can be: in the process of capturing images, the time corresponding to each image acquisition is acquired, and then the acceleration of the camera at each moment is acquired; for example, the acceleration detected by the IMU can be used as the camera's acceleration.
  • The corresponding speed can be obtained by integrating the acceleration, and the distance can be calculated from the speed and the times at which the camera collected the two images, thereby obtaining the distance difference B corresponding to the camera when the two images were collected.
  • After that, the actual distance (i.e., depth information) between the target and the camera in three-dimensional space can be determined according to the pixel size u and the pixel pitch d of the sensor; for example, the distance between the target and the camera can be z*d + u.
  • In some embodiments, the focal length f1 corresponding to the camera when capturing the first image may be inconsistent with the focal length f2 corresponding to the camera when capturing the second image; in that case, the abscissa of the target in the image captured at focal length f1 is x2', the abscissa of the target in the image captured at focal length f2 is x2", and the coordinates corresponding to the different focal lengths can be converted to a common focal length before the calculation.
  • In this way, the distance z between the target and the camera (which can be a pixel distance) can be calculated, and the actual distance (that is, the depth information) between the target and the camera in three-dimensional space is then determined according to the pixel size u and the pixel pitch d of the sensor.
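  • The following is a rough sketch of this computation, not a definitive implementation: the baseline B is estimated by integrating IMU acceleration twice, and the depth follows the disparity relation z = B*f/(x1 - x2) implied by the definitions above; the conversion from pixel quantities to metric depth via the sensor's pixel size and pitch is left as in the text, and all names are assumptions.

```python
import numpy as np

def baseline_from_accel(timestamps, accelerations):
    """Estimate the camera's translation B between two captures by
    integrating acceleration (e.g., from the IMU) twice: first to
    velocity, then to displacement."""
    t = np.asarray(timestamps, dtype=float)        # sample times (s)
    a = np.asarray(accelerations, dtype=float)     # m/s^2 along the baseline axis
    # velocity at each sample (trapezoidal cumulative integral, v(0) = 0)
    v = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))))
    # displacement between the first and last sample
    return np.trapz(v, t)

def depth_from_disparity(baseline, focal_length, x1, x2):
    """Triangulated distance z = B * f / (x1 - x2), where x1 and x2 are
    the target's abscissas in the two images."""
    disparity = x1 - x2
    if abs(disparity) < 1e-9:
        raise ValueError("no parallax between the two images")
    return baseline * focal_length / disparity
```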
  • S105: Determine the location information of the target object according to the distance between the target object and the camera.
  • the position information of the target object may be determined according to the distance between the target object and the camera, and the position information may be longitude and latitude information.
  • In some embodiments, determining the position information of the target according to the distance between the target and the camera may include: obtaining the physical distance between the camera and a preset positioning device; and determining the location information of the target according to the physical distance, the position obtained by the positioning device, and the distance between the target and the camera.
  • a positioning device may be preset.
  • the positioning device may be a GPS, and the positioning device may perform positioning through its own positioning function to obtain the position.
  • As shown in FIG. 7, which is a two-dimensional plan view in the X-Y direction, the physical distance L1 between the camera and the preset positioning device (that is, the distance between the installation position of the camera and that of the positioning device) can be obtained.
  • The position information (X2, Y2, Z2) of the camera can then be determined according to the physical distance L1 and the position (X1, Y1, Z1) obtained by the positioning device.
  • the relative positions of the camera and the positioning device in the position information acquisition system are fixed, which can reduce the complexity of the position information acquisition system and reduce the computational resource consumption of the position information acquisition system.
  • the relative positions of the camera and the positioning device in the position information acquisition system are variable. For example, the relative positions of the camera and the positioning device can be adjusted according to actual needs, which can ensure a better user experience.
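  • As a sketch under stated assumptions: working in a local metric frame (latitude/longitude conversion omitted), the camera position is the positioning fix plus the fixed offset whose norm is L1, and the target lies along the camera's viewing direction at the distance found in S104. The gimbal-derived viewing direction and all names are illustrative assumptions.

```python
import numpy as np

def target_position(gps_fix, camera_offset, view_direction, target_distance):
    """Locate the target from the positioning device's fix.

    gps_fix:        (X1, Y1, Z1) reported by the positioning device
    camera_offset:  vector from the positioning device to the camera
                    (its norm is the physical distance L1)
    view_direction: vector from the camera toward the target, assumed
                    to come from the gimbal/camera attitude
    target_distance: camera-to-target distance from S104
    """
    fix = np.asarray(gps_fix, dtype=float)
    camera_pos = fix + np.asarray(camera_offset, dtype=float)   # (X2, Y2, Z2)
    d = np.asarray(view_direction, dtype=float)
    d = d / np.linalg.norm(d)                                   # unit direction
    return camera_pos + target_distance * d

# e.g. a fix at (100, 200, 50) m, a 0.1 m camera offset, looking down-forward
print(target_position((100.0, 200.0, 50.0), (0.1, 0.0, 0.0), (0.6, 0.0, -0.8), 25.0))
```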
  • the position information of the target includes multiple pieces of information.
  • In some embodiments, the method for acquiring position information may further include: forming the movement trajectory of the target according to the multiple pieces of position information of the target; acquiring the start time and end time of the movement trajectory, and determining the movement time according to the start time and the end time; and determining the moving speed of the target according to the movement trajectory and the movement time.
  • During the movement of the target, the trajectory of the target can be formed in order to determine the moving speed of the target.
  • the camera can collect multiple images, each two images can be divided into one group to obtain multiple groups of images, and then according to each group of images, a position information of the target object can be calculated in the above-mentioned manner.
  • multiple sets of images can be calculated correspondingly to obtain multiple location information of the target.
  • Connecting the multiple pieces of location information of the target can form the moving track of the target, as illustrated in the sketch below.
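  • A minimal sketch of the speed computation just described, assuming the trajectory is a sequence of position fixes in a local metric frame with known start and end times:

```python
import numpy as np

def moving_speed(positions, start_time, end_time):
    """Average moving speed: trajectory length divided by movement time."""
    pts = np.asarray(positions, dtype=float)
    # sum of straight-line segments connecting consecutive position fixes
    path_length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    return path_length / (end_time - start_time)

track = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (6.0, 8.0, 0.0)]  # metres
print(moving_speed(track, start_time=0.0, end_time=2.0))      # 5.0 m/s
```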
  • In some embodiments, the location information acquisition method may further include: acquiring the moving direction of the target. For example, the positioning device can be used to determine its own moving direction, which is taken as the moving direction of the camera; the moving direction of the target can then be determined according to the moving direction of the camera and the relative position changes of the target in the multiple images collected by the camera.
  • In some embodiments, the first distance between the camera and the ground and the second distance between the camera and the target can also be detected, and the height of the target can be determined according to the difference between the first distance and the second distance.
  • the method for acquiring position information may further include: outputting the moving speed and the moving direction of the target object.
  • the moving speed and moving direction of the target can be output through voice or display screen.
  • For example, the image of the target can be collected through the camera, and the image together with the moving speed and moving direction of the target can be displayed on the display screen; or the moving speed and moving direction of the target can be sent to a mobile terminal, and the mobile terminal controlled to output the moving speed and moving direction of the target on the image captured by the camera; and so on.
  • outputting the moving speed and moving direction of the target object may include: displaying the moving speed and moving direction of the target object in the displayed map interface.
  • the moving speed and moving direction of the target can be displayed in the map interface.
  • For example, the camera carried by the drone can collect ground information within its field of view, generate a map interface based on the ground information, and display the map interface in the display interface; in the displayed map interface, the target can be marked based on its location information, for example, the target can be displayed flashing or marked with a preset color, and the moving speed and moving direction of the target can be displayed.
  • outputting the moving speed and moving direction of the target object may include: broadcasting the moving speed and moving direction of the target object through voice.
  • For example, the moving speed and moving direction of the target can be broadcast by voice.
  • The decibel level of the voice broadcast and the language of the voice broadcast (such as Chinese or English) can be flexibly set according to actual needs. The voice broadcast can be automatically closed after the number of loops reaches a preset number, or the user can click a close button to close it, etc.
  • the preset number of times can be flexibly set according to actual needs.
  • outputting the moving speed and moving direction of the target may include: displaying the moving speed and moving direction of the target in a pop-up window on the display interface.
  • the moving speed and moving direction of the target can be displayed in the pop-up window in the display interface.
  • The size, background color, and display position of the pop-up window can be flexibly set according to actual needs.
  • the dialog box displayed by the pop-up window can be automatically closed after the display time reaches the preset time, or the user can click the close button in the upper right corner to close, etc.
  • the preset time can be flexibly set according to actual needs.
  • outputting the moving speed and moving direction of the target object may include: displaying the image collected by the camera where the target object is located in the display interface, and displaying the moving speed and moving direction of the target object in the image.
  • For example, the image collected by the camera in which the target is located can be displayed in the display interface, the area where the target is located in the image can be marked, the location information of the target can be displayed in the display interface, and the moving speed and moving direction of the target can be displayed in the image.
  • the location information acquisition method may further include: outputting the location information of the target.
  • the location information of the target object can be output through voice or a display screen.
  • For example, the image of the target can be collected through the camera, and the image and the position information of the target displayed on the display screen; or the position information of the target can be sent to the mobile terminal, and the mobile terminal controlled to output the position information of the target on the image collected by the camera; and so on.
  • mobile terminals may include terminals such as mobile phones or computers.
  • the location information of the target object can be actively sent to the mobile terminal.
  • For example, a control instruction carrying information such as the location information and the image of the target can be sent to the mobile terminal; based on the control instruction, the mobile terminal is controlled to display the image on the display screen and to output the location information of the target by voice, or to display the position information of the target on the image captured by the camera.
  • In some embodiments, the location information of the target can be stored, and whether an acquisition request sent by the mobile terminal has been received can be detected. When the acquisition request sent by the mobile terminal is received, the location information of the target can be sent to the mobile terminal based on the acquisition request. For example, based on the acquisition request, a control command carrying the location information and the image of the target can be sent to the mobile terminal, and based on the control command, the mobile terminal can be controlled to display on the display screen, or broadcast by voice, the position information of the target on the image collected by the camera.
  • In some embodiments, the location information of the target can be sent to a preset mailbox or instant messaging window (such as a mini program, an official account, a designated QQ window, or a designated WeChat window), where the type of mailbox or instant messaging can be flexibly set according to actual needs.
  • In some embodiments, when the camera is a specific type of camera, in addition to collecting images, it can also calculate or display other information of the target, such as the distance of the target relative to the camera, the temperature of the target, or the height information of the target; in this case, in addition to the position information of the target, information such as the temperature or height of the target can also be output.
  • the setting instruction input by the user can be received, and the output mode of the position information of the target object can be set according to the setting instruction.
  • outputting the location information of the target object may include: identifying the target object in the displayed map interface, and displaying the location information of the target object.
  • the location information of the target can be displayed in the map interface.
  • For example, the camera carried by the drone can collect ground information within its field of view, generate a map interface based on the ground information, and display the map interface in the display interface; in the displayed map interface, the target can be marked based on its location information, for example, displayed flashing or marked with a preset color, and the position information of the target can be displayed.
  • outputting the location information of the target object may include: broadcasting the location information of the target object through voice.
  • For example, the location information of the target can be broadcast by voice, where the decibel level of the voice broadcast and the language of the voice broadcast (such as Chinese or English) can be flexibly set according to actual needs.
  • the voice broadcast can be automatically closed after the number of loops reaches a preset number, or the user can click the close button to close, etc.
  • the preset number of times can be flexibly set according to actual needs.
  • outputting the location information of the target may include: displaying the location information of the target in a pop-up window on the display interface.
  • the position information of the target can be displayed in a pop-up window in the display interface.
  • the size, background color, and display position of the pop-up window can be flexibly set according to actual needs.
  • the position information of the vehicle can be displayed as (X, Y) in a pop-up window.
  • the dialog box displayed by the pop-up window can be automatically closed after the display time reaches the preset time, or the user can click the close button in the upper right corner to close, etc.
  • the preset time can be flexibly set according to actual needs.
  • In some embodiments, outputting the location information of the target may include: displaying, in the display interface, the image captured by the camera in which the target is located; marking the area where the target is located in the image; and displaying the location information of the target in the display interface.
  • For example, the image collected by the camera in which the target is located can be displayed in the display interface, the area where the target is located in the image can be marked, and the location information of the target can be displayed in the lower or upper area of the display interface.
  • Alternatively, the location information of the target can be sent to an external display device (such as a mobile phone or a computer), and the display device controlled to display the image collected by the camera on its display screen, mark the area where the target is located in the image, and display the location information of the target in the lower or upper area of the display interface.
  • In some embodiments, marking the area where the target is located in the image may include: determining the center position of the target in the image; and drawing a polygon or circle circumscribing the target according to the center position, with the color of the polygon or circle set to a target color distinguished from the background color of the target in the image, so as to obtain the area where the target is located.
  • For example, the center position of the target, such as the center position of a ball, can be determined in the image; in FIG. 10, a quadrilateral circumscribing the ball is drawn, and its color is set to a target color distinguished from the background color of the target in the image, obtaining the area where the target is located. The target color can be flexibly set according to actual needs, for example, according to the user's usage habits (such as a habit of marking the target in red) or characteristics (such as whether the user is red-green color blind).
  • marking the area where the target object is located in the image may include: extracting the contour of the target object from the image; and marking the area where the target object is located in a preset color according to the contour.
  • For example, the contour of the target can be extracted from the image; in FIG. 11, the contour of the ball is extracted.
  • The area where the target is located is then marked with a preset color according to the contour, where the preset color can be flexibly set according to actual needs. It is also possible to receive a color setting instruction input by the user in a setting interface, and set the target color or the preset color according to the color setting instruction.
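  • The two marking styles above (a circumscribed shape as in FIG. 10, and a contour traced in a preset color as in FIG. 11) could be sketched with OpenCV as follows, assuming a binary mask (uint8) of the target's pixels is available; colors are BGR and all names are illustrative.

```python
import cv2
import numpy as np

def mark_with_box(image, target_mask, target_color=(0, 0, 255)):
    """Draw a rectangle circumscribing the target (FIG. 10 style)."""
    contours, _ = cv2.findContours(target_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cv2.rectangle(image, (x, y), (x + w, y + h), target_color, 2)
    return image

def mark_with_contour(image, target_mask, preset_color=(0, 255, 0)):
    """Trace the target's outline in a preset color (FIG. 11 style)."""
    contours, _ = cv2.findContours(target_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(image, contours, -1, preset_color, 2)
    return image
```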
  • In some embodiments, the location information acquisition method may also include: receiving a recording instruction, shooting instruction, distance measurement instruction, or temperature measurement instruction; and controlling the camera to record the captured images of the target according to the recording instruction, or controlling the camera to take a picture of the target according to the shooting instruction, or controlling the camera to measure the distance to the target according to the distance measurement instruction, or controlling the camera to measure the temperature of the target according to the temperature measurement instruction.
  • In some embodiments, the method for acquiring position information may further include: acquiring the amount of picture shake between two adjacent images according to the offset angle; and displaying the multiple images in sequence while correcting the displayed image according to the amount of picture shake.
  • For example, the offset angle of the camera when acquiring two adjacent images can be regarded as the amount of picture shake between the two adjacent images.
  • the amount of picture shake can be a vector value with a direction.
  • The displayed image can be corrected based on the amount of picture shake; that is, the image is moved according to the amount of picture shake.
  • For example, if the camera sequentially collects a first image, a second image, and a third image in chronological order, the first offset angle corresponding to the camera between collecting the first image and the second image can be determined in the above-mentioned manner, and the displayed image corrected accordingly; the same applies to subsequent pairs of adjacent images.
  • the frame rate of image collection can be increased to increase the frequency of image correction, reduce the amount of screen jitter, and improve the stability of image display.
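  • A rough sketch of the correction step: the picture-shake vector is approximated from the yaw/pitch offset angles through a pinhole model (f*tan theta) and the displayed frame is shifted by the opposite amount. The angle-to-pixel conversion is an assumption for illustration, not taken from the application.

```python
import cv2
import numpy as np

def correct_frame(frame, yaw_angle, pitch_angle, focal_length_px):
    """Shift the displayed frame opposite to the camera's offset angle
    between two adjacent captures, approximating the shake vector."""
    dx = -focal_length_px * np.tan(yaw_angle)    # horizontal shake (pixels)
    dy = -focal_length_px * np.tan(pitch_angle)  # vertical shake (pixels)
    h, w = frame.shape[:2]
    m = np.float32([[1, 0, dx], [0, 1, dy]])     # translation matrix
    return cv2.warpAffine(frame, m, (w, h))
```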
  • In summary, multiple images can be collected by a camera; the offset angle of the camera when the multiple images are collected and the coordinate position of the target object in the multiple images can be obtained; and the distance between the target object and the camera can then be determined according to the coordinate position and the offset angle.
  • the position information of the target can be determined according to the distance between the target and the camera.
  • This solution uses a single camera to obtain the location information of the target, which reduces cost, and it accurately obtains the location information of the target based on the offset angle and the distance between the target and the camera, which improves the accuracy of location information acquisition.
  • The processor 111 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
  • The memory 112 may be a flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a U disk (USB flash drive), or a mobile hard disk, etc., and may be used to store a computer program.
  • The processor 111 is configured to call the computer program stored in the memory 112 and, when executing the computer program, to implement the location information acquisition method provided in the embodiments of the present application, for example, performing the following steps: collecting multiple images through a camera; acquiring the offset angle of the camera when the multiple images are collected, and the coordinate position of the target object in the multiple images; determining the distance between the target object and the camera according to the coordinate position and the offset angle; and determining the location information of the target according to the distance between the target and the camera.
  • FIG. 13 is a schematic block diagram of a remote control terminal according to an embodiment of the present application.
  • the remote control terminal 12 may include a processor 121 and a memory 122, and the processor 121 and the memory 122 are connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
  • The processor 121 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
  • The memory 122 may be a flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a U disk (USB flash drive), or a mobile hard disk, etc., and may be used to store a computer program.
  • In some embodiments, the remote control terminal 12 may also include a display 123, etc., for displaying images and the location information of the target, etc. The display 123 may also display other information, such as the moving speed and the moving direction of the target; the specific content is not limited here.
  • The processor 121 is configured to call the computer program stored in the memory 122 and, when executing the computer program, to implement the location information acquisition method provided in the embodiments of the present application, for example, performing the following steps: collecting multiple images through a camera; acquiring the offset angle of the camera when the multiple images are collected, and the coordinate position of the target object in the multiple images; determining the distance between the target object and the camera according to the coordinate position and the offset angle; and determining the location information of the target according to the distance between the target and the camera.
  • the position information of the target can also be displayed on the display 123.
  • FIG. 14 is a schematic block diagram of a pan-tilt camera according to an embodiment of the present application.
  • the pan/tilt camera 13 may include a processor 131 and a memory 132, and the processor 131 and the memory 132 are connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
  • The processor 131 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
  • The memory 132 may be a flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a U disk (USB flash drive), or a mobile hard disk, etc., and may be used to store a computer program.
  • the pan/tilt camera 13 may also include a camera 133, a pan/tilt 134, etc.
  • the camera 133 acquires images containing the target, and the pan/tilt 134 is used to carry the camera 133 and drive it to a suitable position so that the required images are acquired accurately.
  • the processor 131 is configured to call the computer program stored in the memory 132 and, when executing the computer program, implement the location information acquisition method provided in the embodiments of the present application. For example, the following steps may be performed: acquiring multiple images through the camera; acquiring the offset angle of the camera when acquiring the multiple images; acquiring the coordinate position of the target in the multiple images; determining the distance between the target and the camera according to the coordinate position and the offset angle; and determining the location information of the target according to the distance between the target and the camera.
  • In some embodiments, when acquiring the offset angle of the camera when acquiring the multiple images, the processor 131 further executes: acquiring the moments at which the multiple images are acquired, to obtain multiple moments; acquiring the angular velocities of the camera corresponding to the multiple moments; and determining the offset angle of the camera according to the angular velocities corresponding to the multiple moments.
  • In some embodiments, when acquiring the angular velocities of the camera corresponding to the multiple moments, the processor 131 further executes: acquiring, through a preset angular velocity sensor, the angular velocity of the camera in the three-axis directions at each moment.
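A minimal sketch of that integration step, assuming the angular velocity sensor delivers timestamped three-axis samples in rad/s and that a simple rectangle rule is accurate enough between the two capture instants; both the sample format and the integration scheme are illustrative choices, not prescribed by this application.

```python
import numpy as np

def offset_angle_between_frames(gyro_samples, t0, t1):
    """Integrate three-axis angular velocity between capture times t0 and t1.

    gyro_samples: iterable of (timestamp_s, wx, wy, wz) tuples in rad/s,
    assumed sorted by timestamp. Returns the accumulated offset angle
    (rad) about the X, Y and Z axes.
    """
    angle = np.zeros(3)
    prev_t = t0
    for t, wx, wy, wz in gyro_samples:
        if t <= t0:
            continue
        if t > t1:
            break
        angle += np.array([wx, wy, wz]) * (t - prev_t)  # rectangle rule
        prev_t = t
    return angle
```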
  • In some embodiments, when acquiring the coordinate position of the target in the multiple images, the processor 131 further executes: receiving a selection instruction input by the user; and selecting the target from the multiple images according to the selection instruction, and determining the coordinate position of the target.
  • In some embodiments, when receiving a selection instruction input by the user, selecting the target from the multiple images according to the selection instruction, and determining the coordinate position of the target, the processor 131 further executes: performing object recognition on the multiple images to obtain an object identifier corresponding to at least one object; displaying an object list containing the at least one object identifier; receiving a selection instruction input by the user based on the object list; selecting an object identifier from the object list according to the selection instruction; and determining the target from the multiple images according to the object identifier, and determining the coordinate position of the target.
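A small sketch of the last two of those steps, assuming the recogniser exposes its detections as a mapping from object identifier to bounding box; both the mapping and the (x, y, w, h) box format are assumptions of this sketch.

```python
def coordinates_from_selection(detections, selected_id):
    """Map the identifier picked from the object list to a coordinate position.

    detections: dict mapping an object identifier to its (x, y, w, h)
    bounding box in the image; the box centre is returned as the
    target's coordinate position.
    """
    x, y, w, h = detections[selected_id]
    return (x + w / 2.0, y + h / 2.0)
```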
  • In some embodiments, when receiving a selection instruction input by the user, selecting the target from the multiple images according to the selection instruction, and determining the coordinate position of the target, the processor 131 further executes: selecting one image from the multiple images for display; receiving a selection instruction generated by a touch operation input by the user on the displayed image; setting the touch center point of the touch operation as the target according to the selection instruction, or setting the object area where the touch center point is located as the target; and determining the coordinate position of the target.
  • In some embodiments, when receiving a selection instruction input by the user, selecting the target from the multiple images according to the selection instruction, and determining the coordinate position of the target, the processor 131 further executes: receiving a selection instruction generated by a voice signal or a gesture input by the user; and selecting, from the multiple images, the target corresponding to the voice signal or gesture according to the selection instruction, and determining the coordinate position of the target.
  • In some embodiments, when acquiring the coordinate position of the target in the multiple images, the processor 131 further executes: performing feature extraction on the multiple images to obtain target feature information; and identifying the target in the multiple images according to the target feature information, and determining the coordinate position of the target.
  • In some embodiments, when determining the distance between the target and the camera according to the coordinate position and the offset angle, the processor 131 further executes: acquiring the parameters of the camera; and determining the distance between the target and the camera according to the parameters, the coordinate position, and the offset angle.
  • In some embodiments, when acquiring the parameters of the camera, the processor 131 further executes: acquiring the focal length of the camera, the pixel size of the sensor, and the pixel pitch, to obtain the parameters of the camera.
  • In some embodiments, when determining the location information of the target according to the distance between the target and the camera, the processor 131 further executes: acquiring the physical distance between the camera and a preset positioning device; and determining the location information of the target according to the physical distance, the position obtained by the positioning device, and the distance between the target and the camera.
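To make the geometry of the distance determination above concrete, here is a minimal similar-triangles sketch, assuming a pinhole model in which the second observation is projected through the offset angle before forming a stereo-like disparity; the baseline expressed in pixel units and the metric conversion via pixel pitch are assumptions of this sketch, not the application's prescribed formula.

```python
import math

def target_camera_distance(x1, x2, offset_angle, focal_px, baseline_px, pixel_pitch_m):
    """Depth from two views of one moving camera via similar triangles.

    x1, x2:        the target's horizontal pixel coordinate in the two images
    offset_angle:  camera offset angle a between the captures (rad); the
                   second observation is projected as x2' = x2 * cos(a)
    focal_px:      focal length in pixels
    baseline_px:   camera displacement between the captures, in pixel units
                   (an assumption of this sketch)
    pixel_pitch_m: sensor pixel pitch used to scale the result to metres
    """
    disparity = x1 - x2 * math.cos(offset_angle)
    if abs(disparity) < 1e-9:
        raise ValueError("no measurable parallax between the two views")
    depth_px = baseline_px * focal_px / disparity
    return depth_px * pixel_pitch_m
```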
  • In some embodiments, the location information of the target includes multiple pieces of information. After determining the location information of the target, the processor 131 further executes: forming the movement track of the target from the multiple pieces of location information of the target; acquiring the starting time and the ending time of the movement track, and determining the moving time according to the starting time and the ending time; and determining the moving speed of the target according to the movement track and the moving time.
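That computation amounts to dividing the path length of the movement track by the elapsed moving time; a compact sketch, assuming the track is a list of successive three-dimensional positions and that timestamps are in seconds.

```python
import math

def moving_speed(track, start_time, end_time):
    """Speed as movement-track length divided by the moving time.

    track: list of successive (x, y, z) positions of the target
    start_time, end_time: timestamps of the first and last position
    """
    path_length = sum(math.dist(track[i], track[i + 1])
                      for i in range(len(track) - 1))
    return path_length / (end_time - start_time)
```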
  • In some embodiments, the processor 131 further executes: acquiring the moving direction of the target.
  • In some embodiments, after determining the moving speed of the target according to the movement track and the moving time, the processor 131 further executes: outputting the moving speed and the moving direction of the target.
  • In some embodiments, when outputting the moving speed and the moving direction of the target, the processor 131 further executes: displaying the moving speed and the moving direction of the target in the displayed map interface; or broadcasting the moving speed and the moving direction of the target by voice; or displaying the moving speed and the moving direction of the target in a pop-up window on the display interface; or displaying, on the display interface, the image acquired by the camera in which the target is located, and displaying the moving speed and the moving direction of the target in the image.
  • In some embodiments, after determining the location information of the target, the processor 131 further executes: outputting the location information of the target.
  • In some embodiments, when outputting the location information of the target, the processor 131 further executes: identifying the target in the displayed map interface, and displaying the location information of the target.
  • In some embodiments, when outputting the location information of the target, the processor 131 further executes: broadcasting the location information of the target by voice; or displaying the location information of the target in a pop-up window on the display interface; or displaying, on the display interface, the image acquired by the camera in which the target is located, marking the area where the target is located in the image, and displaying the location information of the target on the display interface.
  • In some embodiments, when marking the area where the target is located in the image, the processor 131 further executes: determining the center position of the target in the image; drawing a polygon or circle circumscribing the target according to the center position; and setting the polygon or circle to a target color that is distinguishable from the background color of the target in the image, to obtain the area where the target is located.
  • In some embodiments, when marking the area where the target is located in the image, the processor 131 further executes: extracting the contour of the target from the image; and marking the area where the target is located in a preset color according to the contour.
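A minimal OpenCV-style sketch of the circumscribed-circle variant; choosing the complement of the mean local colour as the "distinguishable" target colour is a heuristic assumed here, since the application does not prescribe how that colour is picked.

```python
import cv2
import numpy as np

def mark_target(image, center, radius):
    """Draw a circle circumscribing the target in a contrasting colour.

    Assumes a uint8 BGR image and a target lying fully inside the frame;
    the colour is the complement of the mean colour around the target.
    """
    x, y = int(center[0]), int(center[1])
    patch = image[max(0, y - radius):y + radius, max(0, x - radius):x + radius]
    mean_colour = patch.reshape(-1, 3).mean(axis=0)
    contrast = tuple(int(255 - c) for c in mean_colour)
    cv2.circle(image, (x, y), radius, contrast, thickness=2)
    return image
```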
  • In some embodiments, after acquiring the offset angle of the camera when acquiring the multiple images, the processor 131 further executes: acquiring the amount of picture shake between two adjacent images according to the offset angle; and displaying the multiple images in sequence in order of acquisition time, and correcting the displayed images according to the amount of picture shake.
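A minimal sketch of that correction, assuming the shake amount has already been converted from the offset angle into a pixel displacement (dx, dy); the wrap-around shift used here is a simplification that a real display path would replace with cropping or padding.

```python
import numpy as np

def correct_frame(frame, shake_px):
    """Shift the displayed frame opposite to the inter-frame shake vector.

    frame:    H x W (x C) image array
    shake_px: (dx, dy) picture-shake amount in pixels between this frame
              and the previous one
    """
    dx, dy = int(round(shake_px[0])), int(round(shake_px[1]))
    return np.roll(frame, shift=(-dy, -dx), axis=(0, 1))
```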
  • FIG. 15 is a schematic block diagram of a movable platform provided by an embodiment of the present application.
  • the movable platform 14 may include a processor 141 and a memory 142, and the processor 141 and the memory 142 are connected by a bus, such as an I2C (Inter-Integrated Circuit) bus.
  • the processor 141 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
  • the memory 142 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, or a removable hard disk, and may be used to store computer programs.
  • the movable platform 14 may also include a camera 143, where the camera 143 acquires images containing the target. The movable platform 14 may also include a pan/tilt for carrying the camera, which can drive the camera 143 to move to a suitable position so that the required images are acquired accurately.
  • the type of the movable platform 14 can be flexibly set according to actual needs.
  • the movable platform 14 can be a mobile terminal, a drone, a robot, or a vehicle, etc., and the vehicle can be an unmanned vehicle.
  • the processor 141 is configured to call the computer program stored in the memory 142 and, when executing the computer program, implement the location information acquisition method provided in the embodiments of the present application. For example, the following steps may be performed: acquiring multiple images through the camera; acquiring the offset angle of the camera when acquiring the multiple images; acquiring the coordinate position of the target in the multiple images; determining the distance between the target and the camera according to the coordinate position and the offset angle; and determining the location information of the target according to the distance between the target and the camera.
  • An embodiment of the present application also provides a computer program.
  • the computer program includes program instructions, and the processor executes the program instructions to implement the location information acquisition method provided in the embodiments of the present application.
  • An embodiment of the present application also provides a storage medium. The storage medium is a computer-readable storage medium that stores a computer program; the computer program includes program instructions, and a processor executes the program instructions to implement the location information acquisition method provided in the embodiments of the present application.
  • the storage medium may be the internal storage unit of the drone state management system or the remote control terminal described in any of the foregoing embodiments, such as the hard disk or memory of the remote control terminal.
  • the storage medium may also be an external storage device of the remote control terminal, such as a plug-in hard disk equipped on the remote control terminal, a smart media card (SMC), a Secure Digital (SD) card, a flash card, and the like.
  • Since the computer program stored in the storage medium can execute any of the location information acquisition methods provided in the embodiments of the present application, it can achieve the beneficial effects achievable by any of those methods; for details, refer to the foregoing embodiments, which are not repeated here.

Abstract

A location information acquisition method, a device, and a storage medium, comprising: acquiring multiple images through a camera (133) (S101); acquiring the offset angle of the camera (133) when acquiring the multiple images (S102); acquiring the coordinate position of a target in the multiple images (S103); determining the distance between the target and the camera (133) according to the coordinate position and the offset angle (S104); and determining the location information of the target according to the distance between the target and the camera (133) (S105), thereby improving the accuracy of location information acquisition.

Claims (27)

  1. A location information acquisition method, characterized by comprising:
    acquiring multiple images through a camera;
    acquiring an offset angle of the camera when acquiring the multiple images;
    acquiring a coordinate position of a target in the multiple images;
    determining a distance between the target and the camera according to the coordinate position and the offset angle; and
    determining location information of the target according to the distance between the target and the camera.
  2. The location information acquisition method according to claim 1, wherein acquiring the offset angle of the camera when acquiring the multiple images comprises:
    acquiring the moments at which the multiple images are acquired, to obtain multiple moments;
    acquiring angular velocities of the camera corresponding to the multiple moments; and
    determining the offset angle of the camera according to the angular velocities corresponding to the multiple moments.
  3. The location information acquisition method according to claim 2, wherein acquiring the angular velocities of the camera corresponding to the multiple moments comprises:
    acquiring, through a preset angular velocity sensor, the angular velocity of the camera in three-axis directions at each moment.
  4. The location information acquisition method according to claim 1, wherein acquiring the coordinate position of the target in the multiple images comprises:
    receiving a selection instruction input by a user; and
    selecting the target from the multiple images according to the selection instruction, and determining the coordinate position of the target.
  5. The location information acquisition method according to claim 4, wherein receiving the selection instruction input by the user, selecting the target from the multiple images according to the selection instruction, and determining the coordinate position of the target comprises:
    performing object recognition on the multiple images to obtain an object identifier corresponding to at least one object;
    displaying an object list containing the at least one object identifier;
    receiving a selection instruction input by the user based on the object list;
    selecting an object identifier from the object list according to the selection instruction; and
    determining the target from the multiple images according to the object identifier, and determining the coordinate position of the target.
  6. The location information acquisition method according to claim 4, wherein receiving the selection instruction input by the user, selecting the target from the multiple images according to the selection instruction, and determining the coordinate position of the target comprises:
    selecting one image from the multiple images for display;
    receiving a selection instruction generated by a touch operation input by the user on the displayed image;
    setting a touch center point of the touch operation as the target according to the selection instruction, or setting an object area where the touch center point is located as the target; and
    determining the coordinate position of the target.
  7. The location information acquisition method according to claim 4, wherein receiving the selection instruction input by the user, selecting the target from the multiple images according to the selection instruction, and determining the coordinate position of the target comprises:
    receiving a selection instruction generated by a voice signal or a gesture input by the user; and
    selecting, from the multiple images, the target corresponding to the voice signal or gesture according to the selection instruction, and determining the coordinate position of the target.
  8. The location information acquisition method according to claim 1, wherein acquiring the coordinate position of the target in the multiple images comprises:
    performing feature extraction on the multiple images to obtain target feature information; and
    identifying the target in the multiple images according to the target feature information, and determining the coordinate position of the target.
  9. The location information acquisition method according to claim 1, wherein determining the distance between the target and the camera according to the coordinate position and the offset angle comprises:
    acquiring parameters of the camera; and
    determining the distance between the target and the camera according to the parameters, the coordinate position, and the offset angle.
  10. The location information acquisition method according to claim 9, wherein acquiring the parameters of the camera comprises:
    acquiring a focal length of the camera, a pixel size of a sensor, and a pixel pitch, to obtain the parameters of the camera.
  11. The location information acquisition method according to claim 1, wherein determining the location information of the target according to the distance between the target and the camera comprises:
    acquiring a physical distance between the camera and a preset positioning device; and
    determining the location information of the target according to the physical distance, a position obtained by the positioning device, and the distance between the target and the camera.
  12. The location information acquisition method according to any one of claims 1 to 11, wherein the location information of the target includes multiple pieces of information, and after determining the location information of the target, the location information acquisition method further comprises:
    forming a movement track of the target from the multiple pieces of location information of the target;
    acquiring a starting time and an ending time of the movement track, and determining a moving time according to the starting time and the ending time; and
    determining a moving speed of the target according to the movement track and the moving time.
  13. The location information acquisition method according to claim 12, further comprising:
    acquiring a moving direction of the target.
  14. The location information acquisition method according to claim 13, wherein after determining the moving speed of the target according to the movement track and the moving time, the location information acquisition method further comprises:
    outputting the moving speed and the moving direction of the target.
  15. The location information acquisition method according to claim 14, wherein outputting the moving speed and the moving direction of the target comprises:
    displaying the moving speed and the moving direction of the target in a displayed map interface;
    broadcasting the moving speed and the moving direction of the target by voice; or
    displaying the moving speed and the moving direction of the target in a pop-up window on a display interface; or
    displaying, on a display interface, an image acquired by the camera in which the target is located, and displaying the moving speed and the moving direction of the target in the image.
  16. The location information acquisition method according to any one of claims 1 to 11, wherein after determining the location information of the target, the location information acquisition method further comprises:
    outputting the location information of the target.
  17. The location information acquisition method according to claim 16, wherein outputting the location information of the target comprises:
    identifying the target in a displayed map interface, and displaying the location information of the target.
  18. The location information acquisition method according to claim 16, wherein outputting the location information of the target comprises:
    broadcasting the location information of the target by voice; or
    displaying the location information of the target in a pop-up window on a display interface; or
    displaying, on a display interface, an image acquired by the camera in which the target is located, marking an area where the target is located in the image, and displaying the location information of the target on the display interface.
  19. The location information acquisition method according to claim 18, wherein marking the area where the target is located in the image comprises:
    determining a center position of the target in the image; and
    drawing a polygon or circle circumscribing the target according to the center position, and setting the polygon or circle to a target color distinguishable from a background color of the target in the image, to obtain the area where the target is located.
  20. The location information acquisition method according to claim 18, wherein marking the area where the target is located in the image comprises:
    extracting a contour of the target from the image; and
    marking the area where the target is located in a preset color according to the contour.
  21. The location information acquisition method according to any one of claims 1 to 11, wherein after acquiring the offset angle of the camera when acquiring the multiple images, the location information acquisition method further comprises:
    acquiring an amount of picture shake between two adjacent images according to the offset angle; and
    displaying the multiple images in sequence in order of acquisition time, and correcting the displayed images according to the amount of picture shake.
  22. A location information acquisition system, characterized by comprising:
    a memory for storing a computer program; and
    a processor for calling the computer program in the memory to execute the location information acquisition method according to any one of claims 1 to 21.
  23. A remote control terminal, characterized by comprising:
    a display for displaying images and displaying the location information of the target;
    a memory for storing a computer program; and
    a processor for calling the computer program in the memory to execute the location information acquisition method according to any one of claims 1 to 21.
  24. A pan-tilt camera, characterized by comprising:
    a camera for acquiring images;
    a pan-tilt for carrying the camera;
    a memory for storing a computer program; and
    a processor for calling the computer program in the memory to execute the location information acquisition method according to any one of claims 1 to 21.
  25. A movable platform, characterized by comprising:
    a camera for acquiring multiple images;
    a memory for storing a computer program; and
    a processor for calling the computer program in the memory to execute the location information acquisition method according to any one of claims 1 to 21.
  26. The movable platform according to claim 25, characterized in that the movable platform is a mobile terminal, a drone, a robot, or a vehicle.
  27. A computer-readable storage medium, characterized in that the computer-readable storage medium is used to store a computer program, the computer program being loaded by a processor to execute the location information acquisition method according to any one of claims 1 to 21.
PCT/CN2020/088843 2020-05-06 2020-05-06 Location information acquisition method, device and storage medium WO2021223124A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/088843 WO2021223124A1 (zh) 2020-05-06 2020-05-06 Location information acquisition method, device and storage medium
CN202080005236.8A CN112771576A (zh) 2020-05-06 2020-05-06 Location information acquisition method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/088843 WO2021223124A1 (zh) 2020-05-06 2020-05-06 Location information acquisition method, device and storage medium

Publications (1)

Publication Number Publication Date
WO2021223124A1 true WO2021223124A1 (zh) 2021-11-11

Family

ID=75699519

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/088843 WO2021223124A1 (zh) 2020-05-06 2020-05-06 Location information acquisition method, device and storage medium

Country Status (2)

Country Link
CN (1) CN112771576A (zh)
WO (1) WO2021223124A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113192139A (zh) * 2021-05-14 2021-07-30 浙江商汤科技开发有限公司 Positioning method and apparatus, electronic device, and storage medium
CN117837153A (zh) * 2021-10-15 2024-04-05 深圳市大疆创新科技有限公司 Shooting control method, shooting control apparatus, and movable platform

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463899B (zh) * 2014-12-31 2017-09-22 北京格灵深瞳信息技术有限公司 Target object detection and monitoring method and apparatus
CN105376484A (zh) * 2015-11-04 2016-03-02 深圳市金立通信设备有限公司 Image processing method and terminal
WO2018056802A1 (en) * 2016-09-21 2018-03-29 Universiti Putra Malaysia A method for estimating three-dimensional depth value from two-dimensional images
CN107025666A (zh) * 2017-03-09 2017-08-08 广东欧珀移动通信有限公司 Single-camera-based depth detection method and apparatus, and electronic apparatus
CN109664301B (zh) * 2019-01-17 2022-02-01 中国石油大学(北京) Inspection method, apparatus and device, and computer-readable storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130201326A1 (en) * 2012-01-23 2013-08-08 Hiroshi Tsujii Single camera image processing apparatus, method, and program
CN105340258A (zh) * 2013-06-28 2016-02-17 夏普株式会社 Position detection device
CN105588543A (zh) * 2014-10-22 2016-05-18 中兴通讯股份有限公司 Camera-based positioning method and apparatus, and positioning system
CN105959529A (zh) * 2016-04-22 2016-09-21 首都师范大学 Single-image self-positioning method and system based on a panoramic camera
CN106570903A (zh) * 2016-10-13 2017-04-19 华南理工大学 Visual recognition and positioning method based on an RGB-D camera
CN108663043A (zh) * 2018-05-16 2018-10-16 北京航空航天大学 Method for measuring the relative pose of distributed POS master and slave nodes assisted by a single camera
CN110929567A (zh) * 2019-10-17 2020-03-27 北京全路通信信号研究设计院集团有限公司 Method and system for measuring the position and speed of a target in a monocular-camera surveillance scenario
CN110849285A (zh) * 2019-11-20 2020-02-28 上海交通大学 Solder-joint depth measurement method, system and medium based on a monocular camera

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327058A (zh) * 2021-12-24 2022-04-12 海信集团控股股份有限公司 Display device
CN114327058B (zh) * 2021-12-24 2023-11-10 海信集团控股股份有限公司 Display device
WO2023160301A1 (zh) * 2022-02-23 2023-08-31 杭州萤石软件有限公司 Object information determination method, mobile robot system, and electronic device
CN114627186A (zh) * 2022-03-16 2022-06-14 杭州浮点智能信息技术有限公司 Distance measurement method and distance measurement apparatus
CN115291624A (zh) * 2022-07-11 2022-11-04 广州中科云图智能科技有限公司 Unmanned aerial vehicle positioning and landing method, storage medium and computer device
CN115291624B (zh) * 2022-07-11 2023-11-28 广州中科云图智能科技有限公司 Unmanned aerial vehicle positioning and landing method, storage medium and computer device
CN115359447A (zh) * 2022-08-01 2022-11-18 浙江有色地球物理技术应用研究院有限公司 Highway tunnel remote monitoring system
CN115409888A (zh) * 2022-08-22 2022-11-29 北京御航智能科技有限公司 Method and apparatus for intelligent tower positioning in distribution-network UAV inspection
CN115409888B (zh) * 2022-08-22 2023-11-17 北京御航智能科技有限公司 Method and apparatus for intelligent tower positioning in distribution-network UAV inspection
CN117308967A (zh) * 2023-11-30 2023-12-29 中船(北京)智能装备科技有限公司 Method, apparatus and device for determining location information of a target object
CN117308967B (zh) * 2023-11-30 2024-02-02 中船(北京)智能装备科技有限公司 Method, apparatus and device for determining location information of a target object

Also Published As

Publication number Publication date
CN112771576A (zh) 2021-05-07

Similar Documents

Publication Publication Date Title
WO2021223124A1 (zh) Location information acquisition method, device and storage medium
US11914370B2 (en) System and method for providing easy-to-use release and auto-positioning for drone applications
CN112567201B (zh) Distance measurement method and device
US11635775B2 (en) Systems and methods for UAV interactive instructions and control
WO2021168838A1 (zh) Position information determination method, device and storage medium
CN108702444B (zh) Image processing method, unmanned aerial vehicle, and system
WO2018214078A1 (zh) Photographing control method and apparatus
WO2018209702A1 (zh) Control method for unmanned aerial vehicle, unmanned aerial vehicle, and machine-readable storage medium
WO2020103108A1 (zh) Semantic generation method, device, aircraft, and storage medium
WO2020014987A1 (zh) Control method, apparatus and device for mobile robot, and storage medium
CN108475442A (zh) Augmented reality method for UAV aerial photography, processor, and UAV
KR102122755B1 (ko) Gimbal control method using screen touch
WO2021027676A1 (zh) Visual positioning method, terminal, and server
WO2020048365A1 (zh) Flight control method and apparatus for aircraft, terminal device, and flight control system
US11238665B2 (en) Multi-modality localization of users
WO2022082440A1 (zh) Method, apparatus, system and device for determining a target-following strategy, and storage medium
CN108416044B (zh) Scene thumbnail generation method and apparatus, electronic device, and storage medium
WO2021138856A1 (zh) Camera control method, device, and computer-readable storage medium
WO2022188151A1 (zh) Image capture method, control apparatus, movable platform, and computer storage medium
WO2022021028A1 (zh) Target detection method and apparatus, unmanned aerial vehicle, and computer-readable storage medium
US20240037759A1 (en) Target tracking method, device, movable platform and computer-readable storage medium
WO2023082066A1 (zh) Operation planning method, control apparatus, control terminal, and storage medium
WO2024087024A1 (zh) Information processing method, information processing device, aircraft system, and storage medium
WO2022094808A1 (zh) Photographing control method and apparatus, unmanned aerial vehicle, device, and readable storage medium
WO2022061615A1 (zh) Method, apparatus, system and device for determining a target to be followed, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20934268

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20934268

Country of ref document: EP

Kind code of ref document: A1