CN117671525A - Target detection method, device and equipment for unmanned aerial vehicle identification cable - Google Patents

Target detection method, device and equipment for unmanned aerial vehicle identification cable

Info

Publication number
CN117671525A
CN117671525A (application CN202311512653.3A)
Authority
CN
China
Prior art keywords
point cloud
cable
dimensional
cloud data
data
Prior art date
Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number
CN202311512653.3A
Other languages
Chinese (zh)
Inventor
梁学修
巩潇
王磊
安晖
余娴
蒋杰
黄欣
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis)
China Software Evaluation Center
Original Assignee
China Software Evaluation Center
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by China Software Evaluation Center filed Critical China Software Evaluation Center
Priority to CN202311512653.3A priority Critical patent/CN117671525A/en
Publication of CN117671525A publication Critical patent/CN117671525A/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/17: Terrestrial scenes taken from planes or by drones
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/176: Urban or other man-made structures
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a target detection method, device and equipment for unmanned aerial vehicle cable identification. The target detection method comprises the following steps: acquiring point cloud data of the surrounding environment, and obtaining three-dimensional straight-line characteristic point cloud data representing a cable to be identified based on the point cloud data; mapping the three-dimensional straight-line characteristic point cloud data into two-dimensional point cloud image data; and identifying, based on the two-dimensional point cloud image data, whether the three-dimensional straight-line characteristic point cloud represents a real physical cable. Compared with traditional two-dimensional image cable identification methods, performing cable identification on acquired three-dimensional point cloud data effectively avoids the following problems: image acquisition is disturbed by factors such as airframe shake and air flow during unmanned aerial vehicle flight, and traditional vision sensors are easily affected by illumination and severe weather in field environments, so that the accuracy of the acquired two-dimensional image data is seriously reduced and the error of subsequent cable detection and identification results is large.

Description

Target detection method, device and equipment for unmanned aerial vehicle identification cable
Technical Field
The application relates to the technical field of cable identification, and in particular to a target detection method, device and equipment for unmanned aerial vehicle cable identification.
Background
In recent years, as unmanned aerial vehicle technology has matured, unmanned aerial vehicles have gradually been applied in fields such as power inspection, agricultural management, aerial surveying and mapping, and disaster relief. When working in the air, an unmanned aerial vehicle inevitably encounters various obstacles, among which low-altitude cables such as power cables and wires are the most common. As thin, fine obstacles, low-altitude cables pose a great threat to the safety of unmanned aerial vehicles: when a low-altitude cable cannot be accurately identified, the unmanned aerial vehicle cannot avoid the obstacle in time, which frequently causes crashes.
At present, cable target detection methods based on vision sensors are commonly adopted: two-dimensional image data of a cable is collected by a vision camera, and a cable target detection result is obtained through a two-dimensional image target detection algorithm, so that the unmanned aerial vehicle can identify the cable. This method has the following defects: although current two-dimensional image recognition algorithms are relatively mature, image acquisition is disturbed by factors such as airframe shake and air flow during unmanned aerial vehicle flight, and traditional vision sensors are easily affected by illumination and severe weather in field environments, so that the accuracy of the acquired two-dimensional image data is seriously reduced and the error of subsequent cable detection and identification results is large.
Therefore, how to accurately identify low-altitude cables, and thereby improve the safety and obstacle avoidance capability of unmanned aerial vehicles, has become a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the application provides a target detection method, device and equipment for unmanned aerial vehicle cable identification.
According to an aspect of the present application, there is provided a target detection method for an unmanned aerial vehicle identification cable, including:
acquiring point cloud data of surrounding environment, and acquiring three-dimensional linear characteristic point cloud data for representing a cable to be identified based on the point cloud data;
mapping the three-dimensional linear characteristic point cloud data into two-dimensional point cloud image data;
and identifying, based on the two-dimensional point cloud image data, whether the three-dimensional straight-line characteristic point cloud represents a real physical cable.
In one possible implementation manner, after acquiring the point cloud data of the surrounding environment, the method further includes:
dividing the point cloud data according to areas;
clustering and dividing the divided point cloud data to obtain the point cloud data with the cable area to be identified;
filtering the point cloud data with the cable area to be identified to obtain point cloud data to be identified;
and obtaining the three-dimensional linear characteristic point cloud data based on the point cloud data to be identified.
In one possible implementation manner, when the three-dimensional straight-line characteristic point cloud data representing the cable to be identified is obtained based on the point cloud data, straight-line fitting processing is performed on the point cloud data to obtain the three-dimensional straight-line characteristic point cloud data.
In one possible implementation manner, when the point cloud data is subjected to straight-line fitting processing, a RANSAC algorithm is adopted to iterate on the point cloud data a preset number of times to obtain the three-dimensional straight-line characteristic point cloud data.
In one possible implementation manner, identifying whether the three-dimensional straight-line characteristic point cloud represents a real physical cable based on the two-dimensional point cloud image data includes:
carrying out graying treatment on the two-dimensional point cloud image data to obtain a gray level image;
acquiring a point cloud straight line in the gray scale map;
and judging, based on each point cloud straight line, whether the corresponding three-dimensional straight-line characteristic point cloud represents a real physical cable.
In one possible implementation manner, when each point cloud straight line in the gray scale map is acquired, each point cloud straight line in the gray scale map is extracted by using a hough transform algorithm.
In one possible implementation manner, determining whether the corresponding three-dimensional straight-line characteristic point cloud represents a real physical cable based on each point cloud straight line includes:
classifying each point cloud straight line as either a real cable point cloud or a pseudo cable point cloud;
taking the three-dimensional straight-line characteristic point clouds corresponding to the point cloud straight lines classified as real cable point clouds to represent real cables.
According to another aspect of the present application, there is provided an object detection device for an unmanned aerial vehicle identification cable, comprising: the device comprises an input module, a processing module and an identification module;
the input module is configured to acquire point cloud data of surrounding environment, and obtain three-dimensional linear characteristic point cloud data used for representing the cable to be identified based on the point cloud data;
the processing module is configured to map the three-dimensional linear characteristic point cloud data into two-dimensional point cloud image data;
the identification module is configured to identify, based on the two-dimensional point cloud image data, whether the three-dimensional straight-line characteristic point cloud represents a real physical cable.
According to another aspect of the present application, there is provided an object detection apparatus for an unmanned aerial vehicle identification cable, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any one of the above when executing the executable instructions.
The method and the device are suitable for identifying, during unmanned aerial vehicle flight, whether cables exist nearby from point cloud data of the surrounding environment acquired by an airborne laser radar. Three-dimensional straight-line characteristic point cloud data representing a cable to be identified is obtained from the point cloud data acquired by the airborne laser radar; the three-dimensional straight-line characteristic point cloud data is then mapped into two-dimensional point cloud image data, and whether the three-dimensional straight-line characteristic data represents a real physical cable is judged based on the two-dimensional point cloud image data. Compared with the traditional method of identifying cables in two-dimensional images acquired by an airborne camera, this approach effectively avoids the following problems: image acquisition is disturbed by factors such as airframe shake and air flow during unmanned aerial vehicle flight, and traditional vision sensors are easily affected by illumination and severe weather in field environments, so that the accuracy of the acquired two-dimensional image data is seriously reduced and the error of subsequent cable detection and identification results is large.
Other features and aspects of the present application will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present application and together with the description, serve to explain the principles of the present application.
Fig. 1 shows a flowchart of a target detection method for a drone identification cable according to an embodiment of the present application;
fig. 2 shows a main body structure diagram of an object detection device for a unmanned aerial vehicle identification cable according to an embodiment of the present application;
fig. 3 shows a main body structure diagram of an object detection apparatus for a unmanned aerial vehicle identification cable according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits have not been described in detail as not to unnecessarily obscure the present application.
For ease of understanding the technical solutions of the present application, the terms used herein are first explained.
Fig. 1 shows a flowchart of a target detection method for a drone identification cable according to an embodiment of the present application. As shown in fig. 1, the target detection method for the unmanned aerial vehicle identification cable includes: step S100: acquiring point cloud data of the surrounding environment, and obtaining three-dimensional straight-line characteristic point cloud data representing a cable to be identified based on the point cloud data; step S200: mapping the three-dimensional straight-line characteristic point cloud data into two-dimensional point cloud image data; step S300: identifying, based on the two-dimensional point cloud image data, whether the three-dimensional straight-line characteristic point cloud represents a real physical cable.
The method and the device are suitable for identifying, during unmanned aerial vehicle flight, whether cables exist nearby from point cloud data of the surrounding environment acquired by an airborne laser radar. Three-dimensional straight-line characteristic point cloud data representing a cable to be identified is obtained from the point cloud data acquired by the airborne laser radar; the three-dimensional straight-line characteristic point cloud data is then mapped into two-dimensional point cloud image data, and whether the three-dimensional straight-line characteristic data represents a real physical cable is judged based on the two-dimensional point cloud image data. Compared with the traditional method of identifying cables in two-dimensional images acquired by an airborne camera, this approach effectively avoids the following problems: image acquisition is disturbed by factors such as airframe shake and air flow during unmanned aerial vehicle flight, and traditional vision sensors are easily affected by illumination and severe weather in field environments, so that the accuracy of the acquired two-dimensional image data is seriously reduced and the error of subsequent cable detection and identification results is large.
Further, compared with methods that identify a cable by straight-line fitting directly on three-dimensional point cloud data, converting the three-dimensional point cloud data into two-dimensional point cloud image data before judgment effectively avoids the interference caused by noise point clouds that the laser radar acquires when scanning a target object, owing to characteristics of point cloud data such as disorder, high density and irregularity.
After acquiring the point cloud data of the surrounding environment, the method further comprises the following steps: dividing the point cloud data according to the areas; clustering and dividing the divided point cloud data to obtain point cloud data with a cable area to be identified; filtering the point cloud data with the cable area to be identified to obtain the point cloud data to be identified; and obtaining three-dimensional linear characteristic point cloud data based on the point cloud data to be identified.
Further, when the point cloud data is divided into areas, the division is performed according to the rotation angle of the laser radar. Surrounding point cloud environment data is continuously collected and output by a multi-beam laser radar, with the horizontal rotation angle of the laser radar set to −90° and the vertical rotation angle set to −25° to 15°. All point cloud data acquired by the laser radar within this rotation range is divided into two or more blocks of point cloud data, each block being acquired as the laser radar rotates through a preset angle within the rotation range, the preset angle being −45° in the horizontal direction and −25° to 15° in the vertical direction. The blocks of point cloud data are then processed sequentially.
Clustering segmentation is then performed on the acquired blocks of point cloud data. Specifically, based on the uneven distribution of regional point cloud density, the point cloud clusters in each block of point cloud data are partitioned by the DBSCAN clustering algorithm, separating dense point cloud clusters from sparse point cloud clusters: the dense point cloud clusters are regarded as point cloud clusters containing a cable area, and the sparse point cloud clusters are regarded as noise points.
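The patent names the DBSCAN clustering algorithm but gives no pseudocode. As an illustrative sketch of the density-based split of dense versus sparse clusters (the function name `dbscan` and the parameters `eps`/`min_pts` are hypothetical choices, not the patent's; numpy is assumed, and a brute-force distance matrix is used, which is only practical for a single block of point cloud data):

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each point with a cluster id, or -1 for noise."""
    n = len(points)
    labels = np.full(n, -1)
    # Brute-force pairwise distances; fine for a small block of point cloud data.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbors = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    cluster = 0
    for i in range(n):
        # Skip points already assigned, and non-core (sparse) points.
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue
        # Grow a new dense cluster outward from core point i.
        labels[i] = cluster
        queue = list(neighbors[i])
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:  # j is itself a core point
                    queue.extend(neighbors[j])
        cluster += 1
    return labels
```

Dense clusters (cable-area candidates) are the points whose label is not −1; points left at −1 correspond to the sparse clusters the text discards as noise.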
The dense point cloud clusters, i.e. the point cloud data with the cable area to be identified, are then filtered to obtain the point cloud data to be identified.
When the filtering process is performed, the method comprises the following steps: removing outliers from the point cloud data with the cable area to be identified by using a radius filtering algorithm; and then reducing the point cloud density by using a uniform sampling algorithm.
Specifically, the point cloud data with the cable area to be identified is selected as the point cloud area R to be filtered. Point clouds in the area to be filtered are randomly selected as circle centres; for each centre, the number m of point clouds within a circle of radius r is counted, and when m < n the centre point is considered an outlier and removed from the area to be filtered (n being a user-defined number of point clouds).
According to the actual application scenario and requirements, the skilled person can adjust the radius r, the point cloud count m and the user-defined threshold n, experimenting with different data sets and tasks to find the optimal parameter configuration.
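A sketch of the radius-filtering rule above (count the m points within radius r of each candidate centre; drop the centre when m < n). The name `radius_filter` is hypothetical and numpy is assumed; for simplicity every point is tested as a centre rather than random ones:

```python
import numpy as np

def radius_filter(points, r, n_min):
    """Radius outlier removal: keep a point only when at least n_min
    points (counting the point itself) fall within a sphere of radius r
    centred on it; otherwise it is treated as an outlier and dropped."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    m = (d <= r).sum(axis=1)   # number of neighbours within radius r
    return points[m >= n_min]
```

The parameters r and n_min play the roles of the radius r and the user-defined threshold n in the text, and can be tuned per data set as described.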
Further, the point cloud data with the cable area to be identified after outlier removal is selected as the down-sampling data sample, and the down-sampling operation is performed as follows: (a) convert the point cloud data in the down-sampling data sample into two-dimensional point cloud data, and discretize the converted two-dimensional point cloud data according to a preset grid, i.e. place the converted two-dimensional point cloud data into a preset grid in which each cell has size w; (b) for each cell, calculate its centre point coordinates by the following formulas:
x_center=i*w+(w/2)
y_center=j*w+(w/2)
where i and j denote the i-th row and j-th column of the grid, and x_center and y_center are the coordinate values of the cell's centre point.
(c) Set sampling points according to a preset sampling rate s: for the centre point (x_center, y_center) of a given cell, the point is added to the sampling set if it satisfies the condition: if (random(0, 1) < s), add the point (x_center, y_center) to the sampling set.
(d) Reconstruct the points in all sampling sets according to their original three-dimensional coordinates to obtain the down-sampled point cloud, i.e. the point cloud data to be identified. The cell size w of the preset grid and the preset sampling rate s can be set by a person skilled in the art according to the actual application scenario and requirements.
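The grid down-sampling steps above can be sketched as follows. `grid_downsample`, its `seed` parameter, and the choice to carry every point of a kept cell back to 3-D (rather than a single centre representative) are illustrative assumptions, not the patent's exact procedure:

```python
import random
import numpy as np

def grid_downsample(points, w, s, seed=0):
    """Bin 3-D points by their (x, y) projection into square cells of
    size w, keep each occupied cell when random(0, 1) < s, and rebuild
    the kept points from their original 3-D coordinates."""
    rng = random.Random(seed)
    cells = {}
    for p in points:
        i, j = int(p[0] // w), int(p[1] // w)  # grid row/column indices
        # (cell centre as in the text: x_center = i*w + w/2, y_center = j*w + w/2)
        cells.setdefault((i, j), []).append(p)
    kept = []
    for key in sorted(cells):                  # deterministic cell order
        if rng.random() < s:                   # sampling-rate condition
            kept.extend(cells[key])
    return np.array(kept) if kept else np.empty((0, 3))
```

With s = 1 every occupied cell survives; with s = 0 nothing does, so s directly controls the density reduction.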
By using any one of the methods, the point cloud data to be identified is obtained, and the three-dimensional linear characteristic point cloud data is obtained based on the point cloud data to be identified.
In one possible implementation manner, when the three-dimensional straight-line characteristic point cloud data representing the cable to be identified is obtained based on the point cloud data, straight-line fitting processing is performed on the point cloud data to obtain the three-dimensional straight-line characteristic point cloud data.
Further, when the point cloud data is subjected to straight-line fitting processing, the RANSAC algorithm is adopted to iterate on the point cloud data a preset number of times to obtain the three-dimensional straight-line characteristic point cloud data.
Still further, the method comprises: Step S010: randomly sample U points (U = 2) in the point cloud data to be identified as the initial sampling point set, and mark the sampling points. Using the coordinates of the U points, determine the spatial straight-line equation between the U points by calculating the intersecting straight line between point cloud planes in the point cloud space.
Step S020: the plane equation set of the point cloud space is:
wherein A is 1 、A 2 、B 1 、B 2 、C 1 、C 2 、D 1 、D 2 Are constants, and these coefficients have the following meanings in the plane equation: a and B, C represent three on a planeA non-zero vector. Which together define a plane. A. B, C are referred to as the x, y, z components of the normal vector of the plane, respectively. The normal vector is a vector perpendicular to a plane, and is generally used to represent the direction of the plane. D represents the intercept of the plane. When D is>0, the plane is positioned above the point cloud space; when D is<When 0, the plane is positioned below the point cloud space; when d=0, the plane is tangential to the z-axis of the point cloud space. In point cloud processing, a set of plane equations is used to describe the planar structure in point cloud data. By solving the equation set, a plane tangential to the given point cloud data can be found, so that the point cloud can be segmented, identified and the like.
Step S030: and (3) obtaining an intersecting linear equation about the parameter t by simultaneous calculation of the plane equation set:
(A 1 A 2 -B 1 B 2 )x+(A 1 B 2 -A 2 B 1 )y+(A 1 C 2 -C 1 B 2 )z+(D 2 C 1 -D 1 C 2 )t=0
wherein the parameter t is obtained by cross-product of coefficient vectors of the original plane linear equation set, and the parameter t changes in value according to the change of the coefficient vectors. Is a parameter, and can take any real number. It represents the position of a point on a straight line, which can be moved in a straight line by varying the value. When t=0, one point (x_0, y_0, z_0) on the straight line is obtained, and when t takes other values, other points on the straight line are obtained.
Step S040: by utilizing the linear equation, the space vector of the intersecting straight line is obtained, and then the relation between the space vector and the coordinates of the U points is calculated, and the space linear equation of the U points is determined:
wherein the space vectorDirection vector of straight line L 0 (x 0 ,y 0 ,z 0 )、L 1 (x 1 ,y 1 ,z 1 ) Respectively, the sampling points.
Step S050: and calculating the distances from the rest points except the U sampling points in the point cloud data to be identified to the linear fitting model, namely the distances from the rest points except the U sampling points to the space linear equation of the U points. The distance from the point to the straight line is equal to the distance from the point to the vertical point on the straight line, and the coordinate values of the other points are substituted into the space straight line equation of the U points, so that the vertical point coordinate of the U points on the straight line fitting model can be obtained. For example, the original point coordinate is L (x, y, z), the perpendicular point coordinate is L ' (x ', y ', z ') after substituting the linear equation, and then the position distance between L and L ' is calculated according to the euclidean distance formula, and the calculation formula is as follows:
step S060: comparing the position distance with a preset distance threshold Q, if the position distance |LL '| is less than or equal to Q, indicating that the point supports the straight line fitting model, and if the position distance |LL' | is>Q, then the point is not supported by the straight line fitting model, when traversing is finished to divide L 0 And L 1 After all points outside, the number of all supported model points is recorded.
Step S070: repeating the steps S010 to S060 in the point cloud data to be identified until the iteration times are preset, and selecting U sampling points corresponding to the straight line fitting model with the highest supporting point number as three-dimensional straight line characteristic point cloud data according to the number of the point clouds of the specific iteration times.
By the above method, three-dimensional straight-line characteristic point cloud data representing the cable to be identified is obtained; the three-dimensional straight-line characteristic point cloud data is mapped into two-dimensional point cloud image data, and whether the three-dimensional straight-line characteristic point cloud represents a real physical cable is identified based on the two-dimensional point cloud image data.
In other words, the three-dimensional linear characteristic point cloud data are mapped into the two-dimensional coordinate system under the front view angle of the current unmanned aerial vehicle, and corresponding two-dimensional point cloud image data are generated, so that the subsequent linear re-detection operation of the point cloud data under the two-dimensional coordinate system is facilitated.
Further, when mapping the three-dimensional straight-line characteristic point cloud data into two-dimensional point cloud image data, a homogeneous coordinate system is introduced to replace the original Euclidean coordinate system of the 3D space, and the mapping formula is:

x = fx · X / Z + cx
y = fy · Y / Z + cy

from which the two-dimensional point cloud image data is obtained. Here x and y are the projection coordinates of the two-dimensional point cloud image data on the camera image; X, Y and Z are the three-dimensional coordinates of the three-dimensional straight-line characteristic point cloud data in the camera coordinate system; fx and fy are the focal lengths of the camera, representing how many pixels on the image plane correspond to one unit of length inside the camera; cx and cy are the optical centre of the camera, the coordinates of the centre pixel of the image plane.
Specifically: (1) fx: the horizontal focal length of the camera, representing how many pixels on the image plane correspond to one unit of length inside the camera; the larger the focal length, the smaller the camera's field of view, and the closer distant objects appear in the image. (2) fy: similar to fx, for the pixel mapping in the vertical direction. (3) cx: the lateral coordinate of the camera's optical centre at the centre pixel of the image plane; the optical centre is the point where the camera's line of sight intersects the imaging plane, the line of sight being perpendicular to the imaging plane through this point. (4) cy: similar to cx, the longitudinal coordinate of the camera's optical centre at the centre pixel of the image plane. These parameters are intrinsic parameters of the camera and can be obtained by camera calibration; with them, the laser radar point cloud can be mapped onto the camera's image plane by projection.
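The mapping formula is not reproduced legibly in the original text, but the parameter descriptions (fx, fy, cx, cy and camera-frame X, Y, Z) match the standard pinhole projection; the sketch below assumes that model, and `project_to_image` is a hypothetical name:

```python
import numpy as np

def project_to_image(points_cam, fx, fy, cx, cy):
    """Pinhole projection of camera-frame points (X, Y, Z) with Z > 0:
    x = fx * X / Z + cx,  y = fy * Y / Z + cy."""
    X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    return np.stack([fx * X / Z + cx, fy * Y / Z + cy], axis=1)
```

A point on the optical axis, e.g. (0, 0, 1), lands exactly on the optical centre (cx, cy), which is the behaviour the intrinsic-parameter descriptions imply.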
In one possible implementation, identifying whether the three-dimensional straight-line characteristic point cloud represents a real physical cable based on the two-dimensional point cloud image data includes: performing graying processing on the two-dimensional point cloud image data to obtain a grayscale image; acquiring the point cloud straight lines in the grayscale image; and judging, based on each point cloud straight line, whether the corresponding three-dimensional straight-line characteristic point cloud represents a real physical cable.
When converting the two-dimensional point cloud image data to grayscale, the point cloud image is converted into a grayscale image with the cvtColor function of the OpenCV vision library. Grayscale conversion effectively reduces the computation required for image detection and thus improves detection efficiency.
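OpenCV's cvtColor performs this conversion internally; purely to illustrate the arithmetic, the following numpy sketch applies the same standard luminance weights (0.299·R + 0.587·G + 0.114·B, with channels in BGR order). The helper name to_gray is hypothetical, not part of the embodiment:

```python
import numpy as np

def to_gray(bgr):
    """Convert a BGR image to grayscale with the standard luminance
    weights also used by OpenCV: 0.114*B + 0.587*G + 0.299*R."""
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return (0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 2] = 255            # a pure-red image (red is channel 2 in BGR order)
gray = to_gray(img)          # one channel instead of three
```

The output has one channel instead of three, which is where the reduction in detection workload mentioned above comes from.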
Further, when acquiring the point cloud straight lines in the grayscale image, they are extracted with the Hough transform algorithm.
Specifically, the edge pixel information obtained after preprocessing is mapped into the Hough space; each edge pixel mapped into the Hough space casts votes; the votes are accumulated over the parameter plane, and any accumulator cell whose count exceeds a preset threshold represents a straight line, so that every point cloud straight line in the grayscale image is detected.
The edge pixel information of the grayscale image is detected with the Canny edge detection algorithm. The specific steps are as follows: Step S010: Gaussian filtering: smooth the image with a Gaussian filter to reduce the influence of noise. Step S020: gradient computation: compute the image gradient with the Sobel operator to determine the strength and direction of edges. Step S030: non-maximum suppression: compare each pixel with its neighbors along the gradient direction and keep only local maxima to obtain thinner edges. Step S040: double-threshold detection: detect edges with an upper and a lower threshold, setting gray values to 0 or 255. If a pixel's gradient value is greater than the upper threshold, it is considered a strong edge; if it is smaller than the lower threshold, it is considered a non-edge; if it lies between the two thresholds, it is considered a weak edge. Step S050: connect weak edges to strong edges to form complete edges.
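Two of the Canny steps above can be sketched in isolation: the Sobel gradient of step S020 and the double-threshold classification of step S040. This is an illustrative simplification that omits Gaussian filtering, non-maximum suppression and edge linking, and both function names are hypothetical:

```python
import numpy as np

def sobel_gradient(gray):
    """Step S020 sketch: gradient magnitude via the Sobel operator
    (valid region only, so the output is 2 pixels smaller per axis)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3].astype(float)
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def double_threshold(mag, low, high):
    """Step S040 sketch: strong edge -> 255, weak edge -> 128, non-edge -> 0."""
    out = np.zeros_like(mag, dtype=np.uint8)
    out[mag >= high] = 255
    out[(mag >= low) & (mag < high)] = 128
    return out

# A vertical step edge: left half dark, right half bright.
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 200
edges = double_threshold(sobel_gradient(img), low=100, high=400)
```

On this synthetic image the gradient magnitude peaks at the brightness step, so only the columns straddling the step are classified as strong edges.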
Voting for each edge pixel mapped into the Hough space includes the following steps: Step S001: create the Hough space: create a two-dimensional array as the Hough space according to the image size and a preset range of line parameters. The rows of the array span the range of one line parameter and the columns span the range of the other. Step S002: vote for each edge pixel: for each edge pixel, compute the possible line parameters it corresponds to in the Hough space and vote at the corresponding positions; a vote increments the count at that position by one. Step S003: traverse the Hough space: traverse the entire Hough space and find the positions whose counts exceed the preset threshold. These positions correspond to the detected line parameters. Step S004: extract the straight lines: extract the corresponding straight lines from the original image according to the detected line parameters. The principle of the voting mechanism is to accumulate, in the Hough space, the line parameters of the edge pixels; a higher count therefore represents a line passing through more edge pixels in the image. By setting a threshold, it can be determined which lines are considered valid.
Detecting each point cloud straight line in the grayscale image according to the preset threshold includes: if H(θ, r) > the preset threshold, the position (θ, r) is considered to represent a detected straight line. H(θ, r) is the Hough space accumulator, where θ and r are the line parameters. θ (theta) denotes the angle of the line, typically in the range [-90°, 90°) or [0°, 180°). r denotes the distance from the line to the origin; its range depends on the image size, and r may be negative, indicating a line on the opposite side of the origin. The value of H(θ, r) is the number of votes for a line with angle θ and distance r passing through edge pixels of the image. In the Hough transform, every edge pixel in the image is traversed and every line it could lie on is voted for, i.e. the corresponding H(θ, r) is incremented. Finally, each (θ, r) whose H(θ, r) exceeds the threshold corresponds to a detected point cloud straight line.
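The voting procedure of steps S001 to S003 together with the threshold test H(θ, r) > threshold can be sketched as follows; the discretization (one-degree angle steps, integer r offset by the image diagonal) and the function name hough_lines are illustrative assumptions:

```python
import numpy as np

def hough_lines(edge_img, theta_res=180, threshold=10):
    """Vote each edge pixel into the (theta, r) accumulator H and return
    the (theta_deg, r) pairs whose count exceeds the threshold."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(0, theta_res))       # angles 0..179 degrees
    H = np.zeros((theta_res, 2 * diag), dtype=int)     # r is offset by +diag
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        for t_idx, theta in enumerate(thetas):
            # Normal form of a line: r = x*cos(theta) + y*sin(theta)
            r = int(round(x * np.cos(theta) + y * np.sin(theta))) + diag
            H[t_idx, r] += 1                           # one vote per (theta, r)
    peaks = np.argwhere(H > threshold)                 # H(theta, r) > threshold
    return [(np.rad2deg(thetas[t]), r - diag) for t, r in peaks]

# A vertical line x = 5 drawn into a 20x20 edge map.
edges = np.zeros((20, 20), dtype=np.uint8)
edges[:, 5] = 1
lines = hough_lines(edges, threshold=15)
```

All 20 pixels of the vertical line vote for the same cell at θ = 0°, r = 5, so that cell clears the threshold while cells fed by scattered pixels do not.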
After obtaining the point cloud straight lines with any of the above methods, each point cloud straight line is judged: the point cloud straight lines are classified with the K-means algorithm and divided into real cable point clouds and pseudo cable point clouds; the three-dimensional straight-line feature point clouds corresponding to the lines classified as real cable point clouds are then taken to characterize real cables.
Specifically, classifying the point cloud straight lines with the K-means algorithm includes the following steps: Step S0001: introduce the K-means algorithm and set K to 2; K is both the number of randomly selected initial points and the number of point cloud categories, i.e. the extracted point cloud straight lines are divided into two categories: real cable point clouds and pseudo cable point clouds.
Step S0002: cluster the points of the extracted point cloud straight lines; points that are close together and evenly spaced form the real cable cluster, while clusters whose points are far apart and unevenly spaced are pseudo cable clusters.
Step S0003: randomly initialize K points as cluster centroids, compute the distance from each point to every centroid, and assign each point to the cluster of its nearest centroid.
Step S0004: update the cluster centroids; the centroid of each cluster is updated to the mean of the coordinates of all points in that cluster.
Step S0005: repeat steps S0003 to S0004 until the cluster centroids no longer change, or until a preset maximum number of iterations is reached (the maximum is set according to circumstances such as the number of point clouds); the iteration then ends and the real cable cluster is extracted.
In step S0004, the centroid of each cluster must be updated in every iteration. The centroid is the mean of the coordinates of all points in the cluster: for each cluster, compute the average of the coordinates of all its points and take the result as the new centroid. This can be expressed by the formula: centroid = sum(points_in_cluster) / len(points_in_cluster). Here centroid is the centroid of a cluster; points_in_cluster is the set of all points in the cluster, each point typically being a vector in a multidimensional space (in two dimensions, a vector of x and y coordinates); sum(points_in_cluster) adds the coordinates of all points in the cluster component by component; len(points_in_cluster) is the number of points in the cluster. For example, for the three two-dimensional points (1, 2), (3, 4) and (5, 6), the sum is (1+3+5, 2+4+6) = (9, 12) and the centroid is (9/3, 12/3) = (3, 4). The centroid represents the center of the cluster and is used in the next iteration to decide whether other points should be assigned to this cluster.
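The assignment and update loop of steps S0003 to S0005, including the centroid formula centroid = sum(points_in_cluster) / len(points_in_cluster), can be sketched as follows; the sample data, function name, and initialization details (a fixed random seed) are illustrative assumptions, not specified by the embodiment:

```python
import numpy as np

def kmeans(points, k=2, max_iter=100, seed=0):
    """Minimal K-means sketch (steps S0003-S0005): assign each point to its
    nearest centroid, then recompute each centroid as the cluster mean."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    # Step S0003: randomly pick K data points as the initial centroids.
    centroids = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(max_iter):
        # Assignment: index of the nearest centroid for every point.
        d = np.linalg.norm(pts[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step S0004: centroid = sum(points_in_cluster) / len(points_in_cluster)
        new_centroids = np.array([pts[labels == j].mean(axis=0) for j in range(k)])
        # Step S0005: stop once the centroids no longer move.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Two well-separated groups; the tight one stands in for the "real cable" cluster.
data = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
labels, centroids = kmeans(data, k=2)
```

With K = 2 the two returned labels partition the lines into the two categories described above; which numeric label means "real cable" depends on the initialization, so in practice the tighter, more evenly spaced cluster is taken as the real cable cluster.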
Still further, referring to fig. 2, according to another aspect of the present application, there is also provided an object detection apparatus 100 for an unmanned aerial vehicle identifying cables, including: an input module 110, a processing module 120, and an identification module 130. The input module 110 is configured to acquire point cloud data of the surrounding environment and obtain, based on the point cloud data, three-dimensional straight-line feature point cloud data characterizing the cable to be identified; the processing module 120 is configured to map the three-dimensional straight-line feature point cloud data into two-dimensional point cloud image data; the identification module 130 is configured to identify, based on the two-dimensional point cloud image data, whether the three-dimensional straight-line feature point cloud characterizes a real physical cable.
Still further, according to another aspect of the present application, there is also provided a cable identification device 200. Referring to fig. 3, the cable identification device 200 of the present embodiment includes a processor 210 and a memory 220 for storing instructions executable by the processor 210. Wherein the processor 210 is configured to implement any of the cable identification methods described above when executing the executable instructions.
It should be noted here that the number of processors 210 may be one or more. The cable identification device 200 of the embodiment of the present application may further include an input device 230 and an output device 240. The processor 210, the memory 220, the input device 230, and the output device 240 may be connected by a bus or in other ways, which is not specifically limited herein.
The memory 220 is a computer-readable storage medium that can be used to store software programs, computer-executable programs, and various modules, such as: program or module corresponding to the cable identification method in the embodiment of the application. The processor 210 performs various functional applications and data processing of the cable identification device 200 by running software programs or modules stored in the memory 220.
The input device 230 may be used to receive input digits or signals, where a signal may be a key signal related to user settings and function control of the device/terminal/server. The output device 240 may include a display device such as a display screen.
The embodiments of the present application have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (9)

1. A target detection method for an unmanned aerial vehicle identification cable, comprising:
acquiring point cloud data of surrounding environment, and acquiring three-dimensional linear characteristic point cloud data for representing a cable to be identified based on the point cloud data;
mapping the three-dimensional linear characteristic point cloud data into two-dimensional point cloud image data;
and identifying whether the three-dimensional linear characteristic point cloud characterizes a real physical cable based on the two-dimensional point cloud image data.
2. The method of claim 1, further comprising, after acquiring the point cloud data of the surrounding environment:
dividing the point cloud data according to areas;
clustering and dividing the divided point cloud data to obtain the point cloud data with the cable area to be identified;
filtering the point cloud data with the cable area to be identified to obtain point cloud data to be identified;
and obtaining the three-dimensional linear characteristic point cloud data based on the point cloud data to be identified.
3. The method according to claim 1, wherein, when the three-dimensional linear characteristic point cloud data for representing the cable to be identified is obtained based on the point cloud data, linear fitting processing is performed on the point cloud data to obtain the three-dimensional linear characteristic point cloud data.
4. The method of claim 3, wherein when the point cloud data is subjected to linear fitting processing, a RANSAC algorithm is adopted to iterate the point cloud data for a preset number of times, so as to obtain the three-dimensional linear characteristic point cloud data.
5. The method of any of claims 1 to 4, wherein identifying whether the three-dimensional straight line feature point cloud characterizes a real physical cable based on the two-dimensional point cloud image data comprises:
performing grayscale conversion on the two-dimensional point cloud image data to obtain a grayscale image;
acquiring point cloud straight lines in the grayscale image;
and judging whether the corresponding three-dimensional straight line characteristic point cloud characterizes a real physical cable based on each point cloud straight line.
6. The method of claim 5, wherein each of the point cloud lines in the gray scale map is extracted using a hough transform algorithm when each of the point cloud lines in the gray scale map is acquired.
7. The method of claim 5, wherein determining whether the corresponding three-dimensional straight-line feature point cloud characterizes a real physical cable based on each of the point cloud straight lines comprises:
classifying each point cloud straight line, and dividing the point cloud straight lines into real cable point clouds and pseudo cable point clouds;
and taking the three-dimensional straight line characteristic point clouds corresponding to the point cloud straight lines classified as real cable point clouds to characterize the real cable.
8. An object detection device for an unmanned aerial vehicle identification cable, comprising: the device comprises an input module, a processing module and an identification module;
the input module is configured to acquire point cloud data of surrounding environment, and obtain three-dimensional linear characteristic point cloud data used for representing the cable to be identified based on the point cloud data;
the processing module is configured to map the three-dimensional linear characteristic point cloud data into two-dimensional point cloud image data;
the identification module is configured to identify whether the three-dimensional straight line characteristic point cloud characterizes a real physical cable based on the two-dimensional point cloud image data.
9. An object detection device for an unmanned aerial vehicle identification cable, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any one of claims 1 to 7 when executing the executable instructions.
CN202311512653.3A 2023-11-14 2023-11-14 Target detection method, device and equipment for unmanned aerial vehicle identification cable Pending CN117671525A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311512653.3A CN117671525A (en) 2023-11-14 2023-11-14 Target detection method, device and equipment for unmanned aerial vehicle identification cable


Publications (1)

Publication Number Publication Date
CN117671525A true CN117671525A (en) 2024-03-08

Family

ID=90081676



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination