CN115147738B - Positioning method, device, equipment and storage medium - Google Patents

Positioning method, device, equipment and storage medium

Info

Publication number
CN115147738B
CN115147738B (application CN202210724255.7A)
Authority
CN
China
Prior art keywords
obstacle
central point
image
positioning
preset
Prior art date
Legal status
Active
Application number
CN202210724255.7A
Other languages
Chinese (zh)
Other versions
CN115147738A (en)
Inventor
Feng Qi (冯琦)
Current Assignee
PEOPLE'S PUBLIC SECURITY UNIVERSITY OF CHINA
Original Assignee
PEOPLE'S PUBLIC SECURITY UNIVERSITY OF CHINA
Priority date
Filing date
Publication date
Application filed by PEOPLE'S PUBLIC SECURITY UNIVERSITY OF CHINA
Priority to CN202210724255.7A
Publication of CN115147738A
Application granted
Publication of CN115147738B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/225 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Remote Sensing (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to the technical field of remote sensing positioning, and in particular to a positioning method, device, equipment and storage medium. The method comprises: acquiring an obstacle image when obstacle information is detected on the flight path of a target unmanned aerial vehicle; determining a central point included angle according to the obstacle image and a preset central point coordinate; positioning the obstacle in the obstacle image through a preset positioning model based on the central point included angle to obtain an obstacle labeling frame; and determining the position information of the target obstacle based on the obstacle labeling frame. By acquiring an obstacle image on the flight route, determining the central point included angle between the obstacle image and the preset central point coordinate, and then positioning the obstacle through the preset positioning model, an accurate obstacle labeling frame is obtained. This realizes accurate obstacle positioning and avoids the technical problems in the prior art that obstacle positioning in the vertical direction has large errors and the positioning result is inaccurate.

Description

Positioning method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of remote sensing positioning, in particular to a positioning method, a positioning device, positioning equipment and a storage medium.
Background
With the development of science and technology, unmanned aerial vehicle technology is applied in more and more fields, for example logistics, cruising, and equipment inspection. Among unmanned aerial vehicle applications, the positioning and obstacle avoidance function is especially important. Traditional unmanned aerial vehicle positioning generally combines satellite positioning with image positioning to determine the position of an obstacle and then avoid it. However, when facing occlusion by an obstacle in the vertical direction, traditional positioning methods cannot achieve accurate vertical detection and have large errors.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a positioning method, a positioning device, positioning equipment and a storage medium, and aims to solve the technical problems that in the prior art, the positioning of obstacles in the vertical direction has large errors, and the positioning result is not accurate.
In order to achieve the above object, the present invention provides a positioning method, comprising the steps of:
when the obstacle information is detected on the flight path of the target unmanned aerial vehicle, acquiring an obstacle image;
determining a central point included angle according to the obstacle image and a preset central point coordinate;
positioning the obstacle image through a preset positioning model based on the included angle of the central point to obtain an obstacle marking frame;
and determining the position information of the target obstacle based on the obstacle marking frame.
Optionally, determining a central point included angle according to the obstacle image and a preset central point coordinate includes:
preprocessing the obstacle image;
extracting an initial obstacle image and a next frame obstacle image in the preprocessed obstacle image;
acquiring a first central point coordinate in the initial obstacle image and a second central point coordinate in the next obstacle image;
and determining a central point included angle according to the first central point coordinate, the second central point coordinate and a preset central point coordinate.
Optionally, the obstacle labeling box includes: a transverse barrier marking frame and a longitudinal barrier marking frame;
the positioning the obstacle image through a preset positioning model based on the central point included angle to obtain an obstacle labeling frame includes:
acquiring a transverse obstacle marking frame corresponding to the obstacle image through a preset transverse positioning model based on the central point included angle;
and acquiring a longitudinal obstacle marking frame corresponding to the obstacle image through a preset longitudinal positioning model based on the central point included angle.
Optionally, obtaining a longitudinal obstacle labeling frame corresponding to the obstacle image through a preset longitudinal positioning model based on the central point included angle includes:
acquiring an initial longitudinal obstacle marking frame corresponding to the initial obstacle image and the next frame of obstacle image through a preset longitudinal positioning model based on the central point included angle;
and performing linear optimization on the initial longitudinal obstacle marking frame to obtain a longitudinal obstacle marking frame.
Optionally, the performing linear optimization on the initial longitudinal obstacle labeling box includes:
translating and/or rotating the initial longitudinal obstacle marking frame;
acquiring initial coordinate information of an initial longitudinal barrier marking frame;
acquiring a translation distance and a rotation angle, wherein the rotation angle comprises: a first rotation angle, a second rotation angle, and a third rotation angle, the first rotation angle, the second rotation angle, and the third rotation angle respectively corresponding to a three-dimensional angle;
determining longitudinal obstacle marking frame coordinate information based on the initial coordinate information, the translation distance, the first rotation angle, the second rotation angle and the third rotation angle;
and determining a longitudinal obstacle marking frame according to the coordinate information of the longitudinal obstacle marking frame.
Optionally, the preprocessing the obstacle image includes:
carrying out gray level processing on the obstacle image to obtain a gray level processed obstacle image;
and carrying out binarization processing on the barrier image after the gray processing according to a preset binarization threshold value.
Optionally, after determining the position information of the target obstacle based on the obstacle labeling box, the method further includes:
extracting height information and horizontal position information in the obstacle position information;
and updating the flight route according to the height information, the horizontal position information and a preset weight.
In addition, in order to achieve the above object, the present invention further provides a positioning device, including:
the image acquisition module is used for acquiring an obstacle image when the obstacle information is detected on the flight path of the target unmanned aerial vehicle;
the included angle determining module is used for determining a central point included angle according to the obstacle image and a preset central point coordinate;
the obstacle marking module is used for positioning the obstacle image through a preset positioning model based on the central point included angle to obtain an obstacle marking frame;
and the position determining module is used for determining the position information of the target obstacle based on the obstacle marking frame.
In addition, to achieve the above object, the present invention further provides a positioning apparatus, including: a memory, a processor and a positioning program stored on the memory and executable on the processor, the positioning program being configured to implement the steps of the positioning method as described above.
In addition, to achieve the above object, the present invention further provides a storage medium, which stores a positioning program, and the positioning program implements the steps of the positioning method as described above when executed by a processor.
The invention provides a positioning method comprising the following steps: acquiring an obstacle image when obstacle information is detected on the flight path of a target unmanned aerial vehicle; determining a central point included angle according to the obstacle image and a preset central point coordinate; positioning the obstacle image through a preset positioning model based on the central point included angle to obtain an obstacle labeling frame; and determining the position information of the target obstacle based on the obstacle labeling frame. By acquiring an obstacle image on the flight route, determining the central point included angle between the obstacle image and the preset central point coordinate, and then positioning the obstacle through the preset positioning model, an accurate obstacle labeling frame is obtained, realizing accurate obstacle positioning, solving the prior-art technical problems of large vertical positioning errors and inaccurate positioning results, and improving positioning accuracy.
Drawings
FIG. 1 is a schematic diagram of a positioning apparatus in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a positioning method according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a positioning method according to a second embodiment of the present invention;
FIG. 4 is a diagram illustrating a linear optimization result of an obstacle labeling box according to an embodiment of the positioning method of the present invention;
FIG. 5 is a flowchart illustrating a positioning method according to a third embodiment of the present invention;
FIG. 6 is a block diagram of a positioning device according to a first embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a positioning apparatus in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the positioning apparatus may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a Wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The Memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as a disk Memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in FIG. 1 does not constitute a limitation of the positioning apparatus and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a positioning program.
In the positioning apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 in the positioning apparatus of the present invention may be disposed in the positioning apparatus, and the positioning apparatus invokes the positioning program stored in the memory 1005 through the processor 1001 and executes the positioning method provided by the embodiment of the present invention.
An embodiment of the present invention provides a positioning method, and referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of a positioning method according to the present invention.
In this embodiment, the positioning method includes the following steps:
step S10: and when the obstacle information is detected on the flight path of the target unmanned aerial vehicle, acquiring an obstacle image.
It should be noted that the main body of the method of this embodiment may be a device having data processing, data acquisition, and data transmission functions, for example: a device controller, a server, or the like, which is not particularly limited in this embodiment, and in this embodiment and the following embodiments, an unmanned aerial vehicle controller will be taken as an example for description.
In addition, the method of the embodiment may also be applied to an unmanned device, for example: a logistics robot or a line patrol robot, etc., which is not limited in this embodiment.
It should be noted that the flight route may be a route between path anchor points prestored by the unmanned aerial vehicle, and may also be a route path prestored, which is not specifically limited in this embodiment.
It should be understood that detecting obstacle information means that an obstacle blocking the flight of the target unmanned aerial vehicle is detected on its flight route by an image acquisition device, radar, sonar, or similar equipment; this embodiment does not specifically limit this.
The obtained image of the obstacle may be an image collected by an image collecting device, where the image collecting device may be a video camera, a scanner, or other image collecting devices with the same or similar functions, and this embodiment does not specifically limit this.
Step S20: and determining a central point included angle according to the obstacle image and a preset central point coordinate.
It can be understood that the preset central point coordinate refers to the central point of the obstacle; to obtain it, the coordinate system of the unmanned aerial vehicle needs to be converted into the world coordinate system.
In a specific implementation, triangulation is established by collecting two obstacle images and combining them with the preset central point coordinate: the central point of the obstacle is connected to the central point of the first obstacle image and to the central point of the second obstacle image, and the angle between these two connecting lines is recorded as the central point included angle.
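Under the triangulation just described, the central point included angle can be computed as the angle between the two rays running from the obstacle's preset central point to the two image central points. A minimal sketch, assuming all three points are expressed as world-frame coordinates (the function name and point representation are illustrative, not from the patent):

```python
import numpy as np

def center_point_angle(obstacle_center, image_center_1, image_center_2):
    # Rays from the obstacle's preset central point to the two image centers.
    v1 = np.asarray(image_center_1, dtype=float) - np.asarray(obstacle_center, dtype=float)
    v2 = np.asarray(image_center_2, dtype=float) - np.asarray(obstacle_center, dtype=float)
    # Angle between the rays; clip guards against rounding just outside [-1, 1].
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Two image centers seen at right angles from the obstacle center.
theta = center_point_angle((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```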
Step S30: and carrying out obstacle positioning on the obstacle image through a preset positioning model based on the central point included angle to obtain an obstacle marking frame.
It should be noted that the preset positioning model is used for positioning the obstacle in the acquired obstacle image, and performing feature labeling to determine the position information of the actual obstacle.
In a specific implementation, because the height of the target unmanned aerial vehicle is inconsistent with the height of the obstacle, traditional obstacle positioning exhibits a certain offset when positioning an obstacle in the vertical direction. The measured spatial height of the obstacle then differs from its actual spatial height, so the unmanned aerial vehicle risks colliding and crashing during flight.
The preset positioning model in this embodiment may be a positioning model based on a nonlinear optimization algorithm, for example a least squares algorithm or a conjugate gradient algorithm; this embodiment does not specifically limit the choice.
Step S40: and determining the position information of the target obstacle based on the obstacle marking frame.
It should be noted that after the obstacle labeling frame is determined, the position information of the target obstacle can be calculated based on the acquisition time interval of the adjacent images, the amount of change in the movement angle of the three-dimensional space coordinate system, the size of the obstacle labeling frame, and the like.
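As one hedged illustration of how a position estimate can follow from the central point included angle: if the two acquisition positions are a known baseline apart and sit symmetrically about the obstacle, the angle fixes the obstacle's range by elementary trigonometry. The symmetric geometry and the function below are assumptions for illustration; the patent does not give this formula.

```python
import math

def obstacle_range(baseline, center_angle):
    # Isosceles triangle: apex angle at the obstacle, base between the two
    # acquisition positions; range is the distance from apex to base midpoint.
    return (baseline / 2.0) / math.tan(center_angle / 2.0)

# A 2 m baseline subtending a right angle places the obstacle 1 m from the midpoint.
r = obstacle_range(2.0, math.pi / 2)
```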
The present embodiment provides a positioning method comprising: acquiring an obstacle image when obstacle information is detected on the flight path of the target unmanned aerial vehicle; determining a central point included angle according to the obstacle image and a preset central point coordinate; positioning the obstacle image through a preset positioning model based on the central point included angle to obtain an obstacle labeling frame; and determining the target obstacle position information based on the obstacle labeling frame. By collecting an obstacle image on the flight route, determining the central point included angle between the obstacle image and the preset central point coordinate, and then positioning the obstacle through the preset positioning model, this embodiment obtains an accurate obstacle labeling frame, realizes accurate obstacle positioning, avoids the prior-art technical problems of large vertical positioning errors and inaccurate positioning results, and improves positioning accuracy.
Referring to fig. 3, fig. 3 is a flowchart illustrating a positioning method according to a second embodiment of the present invention.
Based on the first embodiment, in this embodiment, the step S20 includes:
step S201: and preprocessing the obstacle image.
It should be noted that preprocessing the obstacle image may consist of gray scale processing, binarization processing, and the like to improve the resolution and clarity of the obstacle image, or of another image processing method with the same or similar functions; this embodiment does not limit this.
Further, the step S201 includes:
carrying out gray processing on the obstacle image to obtain a gray-processed obstacle image;
and carrying out binarization processing on the barrier image after the gray processing according to a preset binarization threshold value.
It should be noted that, before performing gray processing on the obstacle image, the RGB information of the obstacle image may be collected. The formula for obtaining the RGB values of the current display interface is:
T = T(R1(x, y), G1(x, y), B1(x, y)), x ∈ [1, m], y ∈ [1, n]
where T is the set of RGB values of the obstacle image collected by the image acquisition device of the target unmanned aerial vehicle, x and y are pixel coordinate values, R1(x, y), G1(x, y), and B1(x, y) are the RGB pixel values at coordinates (x, y), and m and n are the resolution of the obstacle image.
It is to be understood that the gray processing of the obstacle image may adopt a weighting method, that is, each RGB channel is assigned a grayscale weight; the weights may be, for example, 0.3, 0.5, and 0.2, which this embodiment does not specifically limit.
The formula for obtaining the RGB pixel values after gray processing is:
R2(x, y) = G2(x, y) = B2(x, y) = 0.3·R1(x, y) + 0.6·G1(x, y) + 0.1·B1(x, y)
where R1(x, y), G1(x, y), and B1(x, y) are the pixel values at coordinates (x, y) before processing, and R2(x, y), G2(x, y), and B2(x, y) are the pixel values at (x, y) after processing.
The preset binarization threshold may be the value that maximizes the between-class variance of the two partial regions obtained by segmenting the obstacle image, so as to obtain the best image clarity.
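The grayscale weighting and the between-class-variance threshold described above amount to a weighted grayscale conversion followed by Otsu-style binarization. A sketch in NumPy, using the 0.3/0.6/0.1 weights from the formula (the function name is illustrative):

```python
import numpy as np

def preprocess(rgb):
    # Weighted grayscale conversion with the 0.3/0.6/0.1 channel weights.
    gray = (0.3 * rgb[..., 0] + 0.6 * rgb[..., 1] + 0.1 * rgb[..., 2]).astype(np.uint8)
    # Otsu-style threshold: pick t maximizing the between-class variance.
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    cum = np.cumsum(prob)                      # class-0 weight up to level t
    mean = np.cumsum(prob * np.arange(256))    # cumulative intensity mean
    mu_total = mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0, w1 = cum[t], 1.0 - cum[t]
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0, mu1 = mean[t] / w0, (mu_total - mean[t]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    # Pixels above the threshold become foreground (255), the rest 0.
    return (gray > best_t).astype(np.uint8) * 255

# Dark left half, bright right half: the threshold separates them cleanly.
img = np.zeros((4, 8, 3), dtype=np.uint8)
img[:, 4:] = 200
binary = preprocess(img)
```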
Step S202: and extracting an initial obstacle image and a next frame obstacle image in the preprocessed obstacle image.
It is worth noting that the initial obstacle image and the next-frame obstacle image are any two adjacent frames, so that the obstacle position acquired by the target unmanned aerial vehicle cannot deviate significantly between them, reducing the positioning error.
Step S203: and acquiring a first central point coordinate in the initial obstacle image and a second central point coordinate in the next frame of obstacle image.
It is understood that the first center point coordinates refer to image center point coordinates of the initial obstacle image; the second center point coordinate is the image center point coordinate of the next frame of obstacle image.
Step S204: and determining a central point included angle according to the first central point coordinate, the second central point coordinate and a preset central point coordinate.
It is easy to understand that the preset central point coordinate refers to the central point of the obstacle; to obtain it, the coordinate system of the unmanned aerial vehicle likewise needs to be converted into the world coordinate system.
In a specific implementation, triangulation is established by collecting two obstacle images and combining them with the preset central point coordinate: the central point of the obstacle is connected to the central points of the first and second obstacle images, and the angle between the two connecting lines is recorded as the central point included angle.
In this embodiment, the step S30 includes:
step S301: and acquiring a transverse obstacle marking frame corresponding to the obstacle image through a preset transverse positioning model based on the central point included angle.
Step S302: and acquiring a longitudinal obstacle marking frame corresponding to the obstacle image through a preset longitudinal positioning model based on the central point included angle.
It should be noted that, because the height of the target unmanned aerial vehicle is inconsistent with the height of the obstacle, conventional positioning exhibits a certain offset when positioning an obstacle in the vertical direction; the measured spatial height of the obstacle then differs from its actual spatial height, and the unmanned aerial vehicle risks colliding and crashing during flight. This embodiment therefore optimizes the labeling frame for longitudinal positioning, where the error is larger.
Further, the step S302 includes:
acquiring an initial longitudinal obstacle marking frame corresponding to the initial obstacle image and the next frame of obstacle image through a preset longitudinal positioning model based on the central point included angle;
and performing linear optimization on the initial longitudinal obstacle marking frame to obtain a longitudinal obstacle marking frame.
It should be noted that, after the central point included angle is obtained, the longitudinal obstacle labeling frames that the preset longitudinal positioning model produces for the initial obstacle image and the next-frame obstacle image may be mapped to the image plane by this process. The preset longitudinal positioning model may be a positioning model based on the Hungarian algorithm, or another positioning model with the same or similar functions; this embodiment does not specifically limit this.
The mapping formula is as follows:
Z·(u, v, 1)^T = P·(X, Y, Z, 1)^T
where X is the abscissa of the initial obstacle image, Y is the ordinate of the initial obstacle image, Z is the vertical coordinate of the initial obstacle image, P is a preset 3 × 4 matrix, u is the lateral offset between the initial obstacle image and the next obstacle image frame in the horizontal direction, and v is the longitudinal offset between the initial obstacle image and the next obstacle image frame.
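A minimal sketch of that mapping, treating P as the standard 3 × 4 projection from homogeneous 3-D coordinates onto the image plane (the particular P below is hypothetical, chosen only to make the example concrete):

```python
import numpy as np

def project(P, point_3d):
    # Homogeneous 3-D point of the labeling frame.
    X = np.append(np.asarray(point_3d, dtype=float), 1.0)
    u, v, w = P @ X
    # Divide out the depth term to land on the image plane.
    return u / w, v / w

# Hypothetical P: unit focal lengths, principal point at (2, 3).
P = np.array([[1.0, 0.0, 2.0, 0.0],
              [0.0, 1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
u, v = project(P, (4.0, 2.0, 2.0))
```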
In this regard, the performing linear optimization on the initial longitudinal obstacle labeling box includes:
translating and/or rotating the initial longitudinal obstacle marking frame;
acquiring initial coordinate information of an initial longitudinal barrier marking frame;
acquiring a translation distance and a rotation angle, wherein the rotation angle comprises: a first rotation angle, a second rotation angle, and a third rotation angle, the first rotation angle, the second rotation angle, and the third rotation angle respectively corresponding to a three-dimensional angle;
determining longitudinal obstacle marking frame coordinate information based on the initial coordinate information, the translation distance, the first rotation angle, the second rotation angle and the third rotation angle;
and determining a longitudinal obstacle marking frame according to the coordinate information of the longitudinal obstacle marking frame.
In a specific implementation, the formulas for obtaining the lateral offset and the longitudinal offset between the initial obstacle image and the next obstacle image frame are respectively:
u = fx·(X′/Z′) + cx
v = fy·(Y′/Z′) + cy
where fx is the focal length of the image acquisition device when the initial obstacle image is shot, fy is the focal length when the next-frame obstacle image is shot, X′, Y′, and Z′ are the three-dimensional coordinates of the labeling frame after linear optimization, and cx and cy are the coordinates of the image principal point.
The formula for obtaining the three-dimensional coordinate information of the labeling frame after linear optimization is:
X′ = R·X + T
where X′ is the coordinate matrix of the labeling frame after translation and/or rotation, R is the rotation matrix for the preset rotation angles, T is the translation vector, and X is the initial coordinate matrix of the labeling frame.
In a specific implementation, referring to fig. 4, linear optimization of the obstacle labeling frame yields an accurate labeling frame, enabling subsequent course adjustment and preventing damage to the unmanned aerial vehicle.
In this embodiment, a transverse obstacle labeling frame corresponding to the obstacle image is obtained through a preset transverse positioning model based on the central point included angle, and a longitudinal obstacle labeling frame corresponding to the obstacle image is obtained through a preset longitudinal positioning model based on the central point included angle. By linearly optimizing the longitudinal obstacle labeling frame, this embodiment reduces the labeling-frame positioning error, obtaining an accurate obstacle labeling frame and accurate obstacle position information for the subsequent obstacle avoidance of the target unmanned aerial vehicle.
Referring to fig. 5, fig. 5 is a flowchart illustrating a positioning method according to a third embodiment of the present invention.
Based on the second embodiment, in this embodiment, after the step S40, the method further includes:
step S50: and extracting height information and horizontal position information in the obstacle position information.
The height information refers to the height information of the obstacle on the flight path relative to the ground; the horizontal position information refers to horizontal position information of the obstacle on a world coordinate system.
Step S60: and updating the flight route according to the height information, the horizontal position information and a preset weight.
It can be understood that the preset weight refers to an influence weight of the height information and the horizontal position information of the target unmanned aerial vehicle on the flight path, and in this embodiment, the preset weight may be 0.3.
In a specific implementation, if the flight path of the target unmanned aerial vehicle needs to be adjusted, the relative height is determined from the current height of the target unmanned aerial vehicle and the height information of the obstacle, and the relative horizontal deviation is determined from the current horizontal position of the target unmanned aerial vehicle and the horizontal position of the obstacle. The flight path is then modified in the horizontal and vertical directions by combining these deviations with the preset weight.
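The weighted route adjustment just described can be sketched as follows. The patent does not give the exact blending rule, so shifting each axis by a weighted fraction of the relative offset is an assumption; only the 0.3 weight comes from the description:

```python
def update_route(uav_alt, uav_xy, obs_alt, obs_xy, weight=0.3):
    """Shift the planned route away from the obstacle by a weighted fraction
    of the relative offset on each axis (blending rule assumed)."""
    dz = uav_alt - obs_alt                 # relative height
    dx = uav_xy[0] - obs_xy[0]             # relative horizontal deviation (x)
    dy = uav_xy[1] - obs_xy[1]             # relative horizontal deviation (y)
    new_alt = uav_alt + weight * dz
    new_xy = (uav_xy[0] + weight * dx, uav_xy[1] + weight * dy)
    return new_alt, new_xy

# UAV at 10 m over (0, 0); obstacle at 8 m over (1, 1)
new_alt, new_xy = update_route(10.0, (0.0, 0.0), 8.0, (1.0, 1.0))
```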
This embodiment discloses extracting the height information and the horizontal position information from the obstacle position information, and updating the flight route according to the height information, the horizontal position information and a preset weight. The flight route of the target unmanned aerial vehicle is thus adjusted using the horizontal position information, the height information and the preset weight of the obstacle, preventing the obstacle from damaging the target unmanned aerial vehicle.
In addition, an embodiment of the present invention further provides a storage medium, where the storage medium stores a positioning program, and the positioning program, when executed by a processor, implements the steps of the positioning method described above.
Since the storage medium adopts all technical solutions of all the above embodiments, at least all the beneficial effects brought by the technical solutions of the above embodiments are achieved, and details are not repeated herein.
Referring to fig. 6, fig. 6 is a block diagram of a positioning device according to a first embodiment of the present invention.
As shown in fig. 6, the positioning apparatus according to the embodiment of the present invention includes:
the image acquisition module 10 is configured to acquire an image of an obstacle when the obstacle information is detected on a flight path of the target unmanned aerial vehicle.
And an included angle determining module 20, configured to determine a central point included angle according to the obstacle image and a preset central point coordinate.
And the obstacle marking module 30 is used for positioning the obstacle through a preset positioning model based on the central point included angle to obtain an obstacle marking frame.
And the position determining module 40 is used for determining the position information of the target obstacle based on the obstacle marking frame.
The present embodiment provides a positioning method, including: acquiring an obstacle image when obstacle information is detected on a flight path of a target unmanned aerial vehicle; determining a central point included angle according to the obstacle image and a preset central point coordinate; positioning the obstacle image through a preset positioning model based on the central point included angle to obtain an obstacle labeling frame; and determining target obstacle position information based on the obstacle labeling frame. By collecting the obstacle image on the flight route, determining the central point included angle between the obstacle image and the preset central point coordinate, and then positioning the obstacle through the preset positioning model, an accurate obstacle labeling frame is obtained and accurate obstacle positioning is realized. This solves the technical problem in the prior art that obstacle positioning in the vertical direction has a large error and an inaccurate result, and improves the positioning precision.
In an embodiment, the included angle determining module 20 is further configured to pre-process the obstacle image; extracting an initial obstacle image and a next frame obstacle image in the preprocessed obstacle image; acquiring a first central point coordinate in the initial obstacle image and a second central point coordinate in the next frame of obstacle image; and determining a central point included angle according to the first central point coordinate, the second central point coordinate and a preset central point coordinate.
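One way to read the central point included angle is as the angle, at the preset central point, between the first and second frame centers. The patent does not define the angle explicitly, so this geometry (and all names below) is an assumption:

```python
import math

def center_point_angle(p1, p2, preset):
    """Angle at the preset central point between the first central point (p1,
    initial obstacle image) and the second central point (p2, next frame)."""
    v1 = (p1[0] - preset[0], p1[1] - preset[1])
    v2 = (p2[0] - preset[0], p2[1] - preset[1])
    a1 = math.atan2(v1[1], v1[0])
    a2 = math.atan2(v2[1], v2[0])
    return a2 - a1

# Centers at (1, 0) and (0, 1) relative to a preset center at the origin
ang = center_point_angle((1.0, 0.0), (0.0, 1.0), (0.0, 0.0))
```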
In an embodiment, the obstacle labeling module 30 is further configured to obtain a transverse obstacle labeling frame corresponding to the obstacle image through a preset transverse positioning model based on the central point included angle; and acquiring a longitudinal obstacle marking frame corresponding to the obstacle image through a preset longitudinal positioning model based on the central point included angle.
In an embodiment, the obstacle labeling module 30 is further configured to obtain an initial longitudinal obstacle labeling frame corresponding to the initial obstacle image and the next frame of obstacle image through a preset longitudinal positioning model based on the central point included angle; and perform linear optimization on the initial longitudinal obstacle labeling frame to obtain a longitudinal obstacle labeling frame.
In an embodiment, the obstacle labeling module 30 is further configured to translate and/or rotate the initial longitudinal obstacle labeling frame; acquire initial coordinate information of the initial longitudinal obstacle labeling frame; acquire a translation distance and a rotation angle, wherein the rotation angle comprises: a first rotation angle, a second rotation angle, and a third rotation angle, the first rotation angle, the second rotation angle, and the third rotation angle respectively corresponding to a three-dimensional angle; determine longitudinal obstacle labeling frame coordinate information based on the initial coordinate information, the translation distance, the first rotation angle, the second rotation angle and the third rotation angle; and determine a longitudinal obstacle labeling frame according to the coordinate information of the longitudinal obstacle labeling frame.
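The three rotation angles above can be composed into a single rotation matrix R. A minimal sketch, assuming an x-y-z Euler convention (the axis order is not stated in the patent):

```python
import numpy as np

def rotation_from_angles(rx, ry, rz):
    """Compose R from the first, second and third rotation angles, taken here
    as rotations about the x, y and z axes respectively (order assumed)."""
    cx_, sx = np.cos(rx), np.sin(rx)
    cy_, sy = np.cos(ry), np.sin(ry)
    cz_, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx_, -sx], [0, sx, cx_]])
    Ry = np.array([[cy_, 0, sy], [0, 1, 0], [-sy, 0, cy_]])
    Rz = np.array([[cz_, -sz, 0], [sz, cz_, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

R_id = rotation_from_angles(0.0, 0.0, 0.0)          # zero angles -> identity
R_z90 = rotation_from_angles(0.0, 0.0, np.pi / 2)   # quarter turn about z
```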
In an embodiment, the included angle determining module 20 is further configured to perform gray processing on the obstacle image to obtain a gray-processed obstacle image; and carrying out binarization processing on the obstacle image subjected to the gray processing according to a preset binarization threshold value.
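The preprocessing in this module (grayscale conversion followed by binarization against a preset threshold) can be sketched as below; the channel-average grayscale and the threshold value 128 are assumptions, since the patent only says "a preset binarization threshold":

```python
import numpy as np

def preprocess(img_rgb, threshold=128):
    """Grayscale an H x W x 3 image, then binarize with a preset threshold."""
    gray = img_rgb.astype(float).mean(axis=2)            # simple channel average
    binary = np.where(gray >= threshold, 255, 0).astype(np.uint8)
    return binary

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = 255                      # one bright pixel, rest black
binary = preprocess(img)
```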
In an embodiment, the position determining module 40 is further configured to extract height information and horizontal position information in the obstacle position information; and updating the flight route according to the height information, the horizontal position information and a preset weight.
It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and in a specific application, a person skilled in the art may set the technical solution as needed, and the present invention is not limited thereto.
It should be noted that the above-mentioned work flows are only illustrative and do not limit the scope of the present invention, and in practical applications, those skilled in the art may select some or all of them according to actual needs to implement the purpose of the solution of the present embodiment, and the present invention is not limited herein.
In addition, the technical details that are not described in detail in this embodiment may refer to the positioning method provided in any embodiment of the present invention, and are not described herein again.
Further, it is to be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g. Read Only Memory (ROM)/RAM, magnetic disk, optical disk), and includes several instructions for enabling a terminal device (e.g. a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (7)

1. A positioning method, characterized in that the positioning method comprises:
when obstacle information is detected on a flight path of a target unmanned aerial vehicle, acquiring an obstacle image;
determining a central point included angle according to the obstacle image and a preset central point coordinate;
positioning the obstacle image through a preset positioning model based on the included angle of the central point to obtain an obstacle marking frame;
determining target obstacle position information based on the obstacle marking frame;
wherein, according to obstacle image and default central point coordinate confirm the central point contained angle, include:
preprocessing the obstacle image;
extracting an initial obstacle image and a next frame obstacle image in the preprocessed obstacle image;
acquiring a first central point coordinate in the initial obstacle image and a second central point coordinate in the next frame of obstacle image;
determining a central point included angle according to the first central point coordinate, the second central point coordinate and a preset central point coordinate;
the obstacle labeling box includes: a transverse barrier marking frame and a longitudinal barrier marking frame;
wherein positioning the obstacle image through the preset positioning model based on the central point included angle to obtain the obstacle labeling frame comprises:
acquiring a transverse obstacle marking frame corresponding to the obstacle image through a preset transverse positioning model based on the central point included angle;
acquiring a longitudinal obstacle marking frame corresponding to the obstacle image through a preset longitudinal positioning model based on the central point included angle;
wherein acquiring the longitudinal obstacle labeling frame corresponding to the obstacle image through the preset longitudinal positioning model based on the central point included angle comprises:
acquiring an initial longitudinal obstacle marking frame corresponding to the initial obstacle image and the next frame of obstacle image through a preset longitudinal positioning model based on the central point included angle;
and performing linear optimization on the initial longitudinal obstacle marking frame to obtain a longitudinal obstacle marking frame.
2. The method of claim 1, wherein said linearly optimizing said initial longitudinal obstruction marking box comprises:
translating and/or rotating the initial longitudinal obstacle marking frame;
acquiring initial coordinate information of the initial longitudinal obstacle marking frame;
acquiring a translation distance and a rotation angle, wherein the rotation angle comprises: a first rotation angle, a second rotation angle, and a third rotation angle, the first rotation angle, the second rotation angle, and the third rotation angle respectively corresponding to a three-dimensional angle;
determining longitudinal obstacle marking frame coordinate information based on the initial coordinate information, the translation distance, the first rotation angle, the second rotation angle and the third rotation angle;
and determining a longitudinal obstacle marking frame according to the coordinate information of the longitudinal obstacle marking frame.
3. The method of locating according to claim 1, wherein said pre-processing the obstacle image comprises:
carrying out gray processing on the obstacle image to obtain a gray-processed obstacle image;
and carrying out binarization processing on the obstacle image subjected to the gray processing according to a preset binarization threshold value.
4. The positioning method according to any one of claims 1 to 3, wherein after determining the target obstacle position information based on the obstacle labeling box, further comprising:
extracting height information and horizontal position information in the obstacle position information;
and updating the flight route according to the height information, the horizontal position information and a preset weight.
5. A positioning device, comprising:
the image acquisition module is used for acquiring an obstacle image when the obstacle information is detected on the flight path of the target unmanned aerial vehicle;
the included angle determining module is used for determining a central point included angle according to the obstacle image and a preset central point coordinate;
the obstacle marking module is used for positioning the obstacle image through a preset positioning model based on the central point included angle to obtain an obstacle marking frame;
the position determining module is used for determining the position information of the target obstacle based on the obstacle marking frame;
wherein, according to obstacle image and predetermined central point coordinate confirms central point contained angle, include:
preprocessing the obstacle image;
extracting an initial obstacle image and a next frame obstacle image in the preprocessed obstacle image;
acquiring a first central point coordinate in the initial obstacle image and a second central point coordinate in the next frame of obstacle image;
determining a central point included angle according to the first central point coordinate, the second central point coordinate and a preset central point coordinate;
the obstacle labeling box includes: a transverse barrier marking frame and a longitudinal barrier marking frame;
wherein positioning the obstacle image through the preset positioning model based on the central point included angle to obtain the obstacle labeling frame comprises:
acquiring a transverse obstacle marking frame corresponding to the obstacle image through a preset transverse positioning model based on the central point included angle;
acquiring a longitudinal obstacle marking frame corresponding to the obstacle image through a preset longitudinal positioning model based on the central point included angle;
wherein acquiring the longitudinal obstacle labeling frame corresponding to the obstacle image through the preset longitudinal positioning model based on the central point included angle comprises:
acquiring an initial longitudinal obstacle marking frame corresponding to the initial obstacle image and the next frame of obstacle image through a preset longitudinal positioning model based on the central point included angle;
and performing linear optimization on the initial longitudinal obstacle marking frame to obtain a longitudinal obstacle marking frame.
6. A positioning apparatus, characterized in that the positioning apparatus comprises: a memory, a processor and a positioning program stored on the memory and executable on the processor, the positioning program being configured to implement the positioning method of any one of claims 1 to 4.
7. A storage medium, characterized in that the storage medium has a positioning program stored thereon, which when executed by a processor implements the positioning method according to any one of claims 1 to 4.
CN202210724255.7A 2022-06-24 2022-06-24 Positioning method, device, equipment and storage medium Active CN115147738B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210724255.7A CN115147738B (en) 2022-06-24 2022-06-24 Positioning method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210724255.7A CN115147738B (en) 2022-06-24 2022-06-24 Positioning method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115147738A CN115147738A (en) 2022-10-04
CN115147738B true CN115147738B (en) 2023-01-13

Family

ID=83407826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210724255.7A Active CN115147738B (en) 2022-06-24 2022-06-24 Positioning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115147738B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063594A (en) * 2018-07-13 2018-12-21 吉林大学 Remote sensing images fast target detection method based on YOLOv2
CN113870347A (en) * 2020-06-30 2021-12-31 北京市商汤科技开发有限公司 Target vehicle control method and device, electronic equipment and storage medium
CN112068553A (en) * 2020-08-20 2020-12-11 上海姜歌机器人有限公司 Robot obstacle avoidance processing method and device and robot
CN114240992A (en) * 2021-12-20 2022-03-25 北京安捷智合科技有限公司 Method and system for labeling target object in frame sequence
CN114419520B (en) * 2022-03-28 2022-07-05 南京智谱科技有限公司 Training method, device, equipment and storage medium of video-level target detection model

Also Published As

Publication number Publication date
CN115147738A (en) 2022-10-04

Similar Documents

Publication Publication Date Title
CN110893617B (en) Obstacle detection method and device and storage device
CN113657224B (en) Method, device and equipment for determining object state in vehicle-road coordination
US9270891B2 (en) Estimation of panoramic camera orientation relative to a vehicle coordinate frame
CN110927708B (en) Calibration method, device and equipment of intelligent road side unit
KR102249769B1 (en) Estimation method of 3D coordinate value for each pixel of 2D image and autonomous driving information estimation method using the same
CN110869974A (en) Point cloud processing method, point cloud processing device and storage medium
CN111121754A (en) Mobile robot positioning navigation method and device, mobile robot and storage medium
EP3968266B1 (en) Obstacle three-dimensional position acquisition method and apparatus for roadside computing device
JP2018124787A (en) Information processing device, data managing device, data managing system, method, and program
KR101995223B1 (en) System, module and method for detecting pedestrian, computer program
US11783507B2 (en) Camera calibration apparatus and operating method
CN112766008B (en) Object space pose acquisition method based on two-dimensional code
CN113587934B (en) Robot, indoor positioning method and device and readable storage medium
KR102490521B1 (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
JP2017120551A (en) Autonomous traveling device
CN112232275A (en) Obstacle detection method, system, equipment and storage medium based on binocular recognition
CN114494466A (en) External parameter calibration method, device and equipment and storage medium
CN111179413B (en) Three-dimensional reconstruction method, device, terminal equipment and readable storage medium
CN115147738B (en) Positioning method, device, equipment and storage medium
CN117152265A (en) Traffic image calibration method and device based on region extraction
CN114488178A (en) Positioning method and device
KR102195040B1 (en) Method for collecting road signs information using MMS and mono camera
CN114092771A (en) Multi-sensing data fusion method, target detection device and computer equipment
CN114080626A (en) Method for determining the position of a first image region in a corresponding image, SoC and control device and system for carrying out the method, and computer program
CN113112551B (en) Camera parameter determining method and device, road side equipment and cloud control platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant