CN116229119A - Substation inspection robot and repositioning method thereof - Google Patents

Substation inspection robot and repositioning method thereof

Info

Publication number
CN116229119A
Authority
CN
China
Prior art keywords
point cloud
scan context
matched
inspection robot
point
Prior art date
Legal status
Pending
Application number
CN202211049224.2A
Other languages
Chinese (zh)
Inventor
赵彬
李唯萌
Current Assignee
Zhikan Shenjian Beijing Technology Co ltd
Original Assignee
Zhikan Shenjian Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhikan Shenjian Beijing Technology Co ltd
Priority to CN202211049224.2A
Publication of CN116229119A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to the technical field of robots, and in particular to a substation inspection robot and a repositioning method thereof. In the repositioning method of the substation inspection robot, the key frames collected when the substation map is built are processed with the Scan Context algorithm to obtain a database; the current point cloud is then processed with the Scan Context algorithm and coarsely matched against the database to obtain an initial pose; finally, the initial pose is input into an NDT algorithm for precise matching to obtain the target pose. By adding a repositioning (Scan Context) algorithm to the NDT positioning and matching algorithm, the robot can quickly initialize its position anywhere in the substation, and when positioning information is lost due to noise or disturbance signals during positioning, automatic, accurate, global repositioning can be achieved rapidly.

Description

Substation inspection robot and repositioning method thereof
Technical Field
The invention relates to the technical field of robots, in particular to a substation inspection robot and a repositioning method thereof.
Background
Robot positioning means that, within a high-precision map, the robot matches radar data or odometer information against the current map to obtain its accurate position in the map, so that staff can promptly learn where the robot is running and the robot can work safely and stably. In practice, however, most robots require the initial position in the map to be set manually. The consequence is that when matching fails or the robot restarts after a power loss, global positioning fails and normal operation is affected.
Robot repositioning technology overcomes this problem: after global positioning is lost, the robot can quickly re-match itself through repositioning and recover its accurate position in the map. This solves the problem of being unable to recover from a loss of global positioning, and is of great significance for the stable and safe operation of the robot.
Methods for quickly repositioning on a point cloud map in dynamic environments have developed in stages. Two-dimensional grid maps and single point cloud maps were used earliest, but they cannot cope with environments with complex road conditions because planar data are sparse and highly similar; after three-dimensional point clouds were adopted, the richness of information improved, making positioning in more complex environments possible. Bag-of-words repositioning methods store point cloud images of objects in the scene, compare the point cloud of the acquired surroundings with similar point cloud images in a database, and select the matching point cloud with the highest matching degree to reposition the initial pose. Liu Guannan et al. combined Q-learning with a neural network, taking the robot's time data and site data as input and returning action values with an epsilon-greedy strategy; compared with static scheduling, this lets the vehicle in its current state be scheduled more reasonably and better satisfy the expected value, but the drawback is that the greedy strategy reasons about the whole from local considerations, so global control may still be deficient. Monte Carlo algorithms based on particle filtering have also been applied, completing positioning by gradually iterating the sampled particles together with the lidar information; their drawback is that under natural conditions the global particles are random, so the result may fail to converge within a short time or may converge incorrectly. Coarse positioning has further been strengthened by placing two-dimensional codes in the environment for particle sampling (enhanced positioning), arranging multi-camera monitoring with ORB feature matching (improved efficiency), and analyzing the sensing area and movement distance of the sensor (to determine the grid map area), but the improvement in precise positioning is limited.
Combining an object detection method based on convolutional neural networks with a particle filtering algorithm therefore improves the effectiveness of semantic information and can speed up accurate repositioning and its convergence. However, traditional point cloud matching algorithms perform 3D-3D point cloud matching directly; the 3D point cloud carries a huge amount of data, so when the map is relatively large it occupies considerable memory and disk space, and the processing speed is not fast enough, which makes such methods unsuitable for scenes with high-speed motion.
Therefore, there is an urgent need for a repositioning algorithm based on point cloud matching, so that when the robot loses global positioning, accurate global repositioning can be achieved by matching the point cloud top view of the current frame against all frames.
Disclosure of Invention
The invention aims to solve at least one of the technical problems in the prior art, and provides a substation inspection robot and a repositioning method thereof.
In order to achieve the above purpose, the technical solution adopted by the invention is as follows: a repositioning method for a substation inspection robot comprises the following steps:
S1, applying Scan Context algorithm processing to the key frames acquired when the substation map is built, to obtain a database;
S2, applying Scan Context algorithm processing to the current point cloud and coarsely matching it against the database to obtain an initial pose;
S3, inputting the obtained initial pose into an NDT algorithm for precise matching to obtain the target pose.
Further, the initial pose includes one or more of a rotation amount θ and the position of the history point cloud.
Further, the Scan Context algorithm processing comprises partitioning the point cloud, generating the Scan Context, matching based on the Scan Context, and calculating the relative pose.
Further, partitioning the point cloud comprises:
dividing the point cloud space into $N_r$ rings along the direction of increasing radius, the width of each ring being:

$$\frac{L_{\max}}{N_r}$$

where $L_{\max}$ is the farthest distance covered by the entire area;

cutting each ring into $N_s$ sectors, after which the partitioned laser point set can be re-expressed as:

$$\mathcal{P} = \bigcup_{i \in [N_r],\, j \in [N_s]} \rho_{ij}$$

where $\mathcal{P}$ is the full laser point set and $\rho_{ij}$ denotes the set of points in the partition unit of the j-th sector of the i-th ring.
Further, $L_{\max}$ is in the range of 5 to 15.
Further, generating the Scan Context includes mapping the partitioned point cloud to an $N_r \times N_s$ matrix to obtain the Scan Context; each row of the matrix represents a ring, each column represents a sector, and the value of each element in the matrix is the maximum height of all three-dimensional points in the partition unit $\rho_{ij}$.
Further, the Scan Context based matching includes:
the distance function between two frames' Scan Contexts is defined as follows:

$$d(I^{cur}, I^{his}) = \frac{1}{N_s} \sum_{j=1}^{N_s} \left( 1 - \frac{c_j^{cur} \cdot c_j^{his}}{\lVert c_j^{cur} \rVert \, \lVert c_j^{his} \rVert} \right)$$

where $I^{cur}$ is the Scan Context of the current point cloud, $I^{his}$ is the Scan Context of the history point cloud, $c_j^{cur}$ is the j-th column of the current point cloud's Scan Context, and $c_j^{his}$ is the j-th column of the history point cloud's Scan Context;

the history point cloud Scan Context $I^{his}$ to be matched is translated column by column to obtain $N_s$ Scan Contexts, each of which is in turn compared with the current point cloud Scan Context $I^{cur}$ by computing the distance, and the smallest distance is selected as the result, expressed as:

$$D(I^{cur}, I^{his}) = \min_{n \in [N_s]} d(I^{cur}, I_n^{his})$$
further, calculating the relative pose includes
After the Scan Context of the current point cloud is matched with the Scan Context of the history point cloud, the position of the history point cloud is obtained, and D (I) cur ,I his ) Column translation vector n at time *
Figure SMS_7
/>
And according to the column translation vector n * Obtaining a rotation angle θ expressed by:
Figure SMS_8
and taking the rough matching pose formed by the rotation quantity theta and the position of the history point cloud as the initial pose of NDT matching, so as to obtain the accurate matching relative pose.
Further, the NDT algorithm includes:
the point cloud data to be matched and the point cloud data of the current frame are evenly divided into a number of cubic grids, each cubic grid carrying a certain number of points, according to the following formulas:

$$X = \{x_1, x_2, \dots, x_m\}$$

$$Y = \{y_1, y_2, \dots, y_n\}$$

where X and Y are the point sets obtained by dividing the point cloud to be matched and the current point cloud into cubic grids;

according to the point cloud data in each point set of the point cloud to be matched, the mean and covariance of each point set of the point cloud to be matched are calculated as follows:

$$\mu = \frac{1}{m} \sum_{i=1}^{m} x_i$$

$$\Sigma = \frac{1}{m-1} \sum_{i=1}^{m} (x_i - \mu)(x_i - \mu)^{T}$$

where $\mu$ is the mean of each point set in the point cloud to be matched and $\Sigma$ is the covariance of each point set in the point cloud to be matched;

after the mean and covariance of each point set of the point cloud to be matched are obtained, the current point cloud is rotated and translated according to a manually given predicted pose to obtain the predicted point cloud; the joint probability of the predicted point cloud's point sets can then be computed in the same coordinate system as the matched point cloud, according to the following formulas:

$$y'_i = T(p, y_i) = R y_i + t$$

$$f(X, y'_i) = \frac{1}{\sqrt{(2\pi)^3 \lvert \Sigma \rvert}} \exp\!\left( -\frac{(y'_i - \mu)^{T} \Sigma^{-1} (y'_i - \mu)}{2} \right)$$

$$\psi = \prod_{i} f(X, y'_i)$$

where $y'_i$ is a point of the predicted point cloud, obtained by applying the manually given predicted rotation R and predicted displacement t to the current frame point cloud;
$f(X, y'_i)$ is the joint probability of the predicted point with respect to the corresponding point set in X;
$\psi$ is the joint probability over all point sets of the predicted point cloud;
when the predicted pose T(R, t) is found such that the value of $\psi$ is maximal, the NDT point cloud matching succeeds and accurate laser odometry information is obtained;
therefore, the parameters to be solved by the NDT algorithm are T(R, t), and the objective function is:

$$T(R, t) = \underset{R,\, t}{\arg\max}\; \psi$$
the substation inspection robot operates by adopting the substation inspection robot repositioning method.
The beneficial effects of the invention are as follows: as can be seen from the above description, compared with the prior art, the repositioning method of the substation inspection robot obtains a database by applying Scan Context algorithm processing to the key frames collected when the substation map is built; the current point cloud is then processed with the Scan Context algorithm and coarsely matched against the database to obtain an initial pose; finally, the initial pose is input into an NDT algorithm for precise matching to obtain the target pose. By adding a repositioning (Scan Context) algorithm to the NDT positioning and matching algorithm, the robot can quickly initialize its position anywhere in the substation, and when positioning information is lost due to noise or disturbance signals during positioning, automatic, accurate, global repositioning can be achieved rapidly.
Drawings
FIG. 1 is a top view of a point cloud map in accordance with a preferred embodiment of the present invention;
FIG. 2 is a diagram of Scan Context generation in accordance with a preferred embodiment of the present invention;
FIG. 3 is a graph of the rviz observations without a repositioning algorithm in a preferred embodiment of the present invention;
FIG. 4 is a graph of the rviz observations with a repositioning algorithm in accordance with a preferred embodiment of the present invention;
fig. 5 is a graph comparing a registration positioning laser odometer with an actual path in a preferred embodiment of the invention.
Detailed Description
The following clearly and completely describes the embodiments of the present invention with reference to the accompanying drawings; obviously, the described embodiments are only some, rather than all, of the embodiments of the invention.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the term "connected" is to be construed broadly: a connection may, for example, be fixed, detachable, or integral; mechanical or electrical; and direct, or indirect through an intermediate medium. The specific meaning of the above terms in the present invention can be understood by a person of ordinary skill in the art according to the specific circumstances.
The preferred embodiment of the invention provides a substation inspection robot which is operated by adopting the substation inspection robot repositioning method.
The robot is equipped with various sensors such as a lidar, a camera, and a gyroscope, and the collected data are transmitted back to an industrial personal computer for unified processing. The industrial personal computer is connected to a GNSS receiver and the radar, mainly to realize mapping and positioning, and is connected to a camera to recognize environmental information. The data processing module mainly acquires IMU and encoder data to measure the robot's accumulated travel distance and accumulated rotation angle, and then uploads the data to the industrial personal computer for processing.
The repositioning method of the substation inspection robot comprises the following steps:
S1, applying Scan Context algorithm processing to the key frames acquired when the substation map is built, to obtain a database;
S2, applying Scan Context algorithm processing to the current point cloud and coarsely matching it against the database to obtain an initial pose;
S3, inputting the obtained initial pose into an NDT algorithm for precise matching to obtain the target pose (a minimal sketch of this overall flow is given below).
As a preferred embodiment of the invention, the following additional technical features may also be provided: the initial pose includes one or more of a rotation amount θ and the position of the history point cloud.
In this embodiment, the Scan Context algorithm processing includes partitioning the point cloud, generating the Scan Context, matching based on the Scan Context, and calculating the relative pose, as follows.
1) Partitioning the point cloud
The three-dimensional point cloud is converted into a top-down (bird's-eye) view, as shown in fig. 1. In this view the visible points are the highest points above the ground, and the vertical shape of the surrounding structures can be summarized without heavy computation, which makes it convenient to analyse the point cloud features.
The ground is divided into a plurality of areas by taking the center position of the point cloud as a starting point, and the steps are as follows:
dividing the point cloud space into N along the radius increasing direction r The width of each ring is as follows:
Figure SMS_16
wherein L is max Is the furthest distance from the entire area; the L is max In the range of 5-15.
Cutting each ring into N s The laser spot set after division can be re-expressed as:
Figure SMS_17
wherein ρ is ij Representing the set of points in the partition unit of the jth sector of the ith ring.
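As an illustration only, a minimal NumPy sketch of this polar partition follows. The function name `polar_partition` and the defaults (N_r = 20 rings, N_s = 60 sectors, L_max = 10 m) are assumptions for the example, not values fixed by the patent.

```python
import numpy as np

def polar_partition(points, n_rings=20, n_sectors=60, l_max=10.0):
    """Assign each 3D point to a (ring, sector) cell around the sensor origin.

    points: (M, 3) array of x, y, z coordinates.
    Returns ring and sector indices (each shape (M,)) and a mask of points
    inside the L_max radius; each (ring, sector) cell corresponds to rho_ij.
    """
    r = np.hypot(points[:, 0], points[:, 1])        # planar range of each point
    phi = np.arctan2(points[:, 1], points[:, 0])    # angle in (-pi, pi]

    inside = r < l_max
    ring = np.minimum((r / (l_max / n_rings)).astype(int), n_rings - 1)
    sector = ((phi + np.pi) / (2.0 * np.pi / n_sectors)).astype(int) % n_sectors
    return ring, sector, inside
```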
2) Generating Scan Context
The partitioned point cloud is mapped to an $N_r \times N_s$ matrix to obtain the Scan Context, as shown in fig. 2: each row of the matrix represents a ring, each column represents a sector, and the value of each element in the matrix is the maximum height of all three-dimensional points in the partition unit $\rho_{ij}$.
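Continuing the sketch, the descriptor itself is an N_r x N_s matrix of per-cell maximum heights. The code below reuses the hypothetical `polar_partition` above and assumes heights are measured upward from the ground plane.

```python
import numpy as np

def make_scan_context(points, n_rings=20, n_sectors=60, l_max=10.0):
    """N_r x N_s Scan Context: element (i, j) is the max height in cell rho_ij."""
    ring, sector, inside = polar_partition(points, n_rings, n_sectors, l_max)
    sc = np.zeros((n_rings, n_sectors))              # empty cells stay 0
    for i, j, z in zip(ring[inside], sector[inside], points[inside, 2]):
        if z > sc[i, j]:
            sc[i, j] = z                             # keep the highest point per cell
    return sc
```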
3) Scan Context based matching
The distance function between two frames' Scan Contexts is defined as follows:

$$d(I^{cur}, I^{his}) = \frac{1}{N_s} \sum_{j=1}^{N_s} \left( 1 - \frac{c_j^{cur} \cdot c_j^{his}}{\lVert c_j^{cur} \rVert \, \lVert c_j^{his} \rVert} \right)$$

where $I^{cur}$ is the Scan Context of the current point cloud, $I^{his}$ is the Scan Context of the history point cloud, $c_j^{cur}$ is the j-th column of the current point cloud's Scan Context, and $c_j^{his}$ is the j-th column of the history point cloud's Scan Context.

The more similar the corresponding column vectors of the two Scan Contexts are, the more similar the two frames of point cloud are; when the distance function $d(I^{cur}, I^{his})$ is smaller than a threshold, the frames are considered to be the point cloud to be matched.

Because the robot may pass through the same scene travelling in different directions, the order of the column vectors in the Scan Context changes, which makes the distance function larger and causes the match to be missed. Therefore the history point cloud Scan Context $I^{his}$ to be matched is translated column by column to obtain $N_s$ Scan Contexts, each of which is compared in turn with the current point cloud Scan Context $I^{cur}$ by computing the distance, and the smallest distance is selected as the result, expressed as:

$$D(I^{cur}, I^{his}) = \min_{n \in [N_s]} d(I^{cur}, I_n^{his})$$
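A hedged NumPy sketch of this column-shift matching is shown below; `np.roll` plays the role of the column translation and the cosine term implements the per-column comparison. The function name and the brute-force search over all N_s shifts are illustrative choices (the original Scan Context paper additionally uses a ring-key search tree so that not every history frame has to be compared).

```python
import numpy as np

def scan_context_distance(sc_cur, sc_his):
    """Return (D, n*): the minimum distance over all column shifts of sc_his
    and the shift n* at which it is attained."""
    n_sectors = sc_cur.shape[1]
    best_d, best_n = np.inf, 0
    for n in range(n_sectors):
        shifted = np.roll(sc_his, n, axis=1)         # translate columns by n
        num = np.sum(sc_cur * shifted, axis=0)       # column-wise dot products
        den = (np.linalg.norm(sc_cur, axis=0) *
               np.linalg.norm(shifted, axis=0))
        cos = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
        d = np.mean(1.0 - cos)                       # d(I_cur, I_his shifted by n)
        if d < best_d:
            best_d, best_n = d, n
    return best_d, best_n
```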
4) Computing relative poses
After the Scan Context of the current point cloud is matched to the Scan Context of a history point cloud, the position of the history point cloud is obtained, together with the column translation vector $n^*$ at which $D(I^{cur}, I^{his})$ is attained:

$$n^* = \underset{n \in [N_s]}{\arg\min}\; d(I^{cur}, I_n^{his})$$

From the column translation vector $n^*$, the rotation angle $\theta$ is obtained, expressed as:

$$\theta = \frac{2\pi}{N_s}\, n^*$$

The coarse matching pose formed by the rotation amount $\theta$ and the position of the history point cloud is taken as the initial pose for NDT matching, from which the precisely matched relative pose is obtained.
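To make the step from the column shift n* to the NDT initial pose concrete, a small sketch follows; composing the yaw with the stored key-frame pose as a 4x4 homogeneous matrix is an assumption about the bookkeeping, not something the patent specifies.

```python
import numpy as np

def coarse_pose_from_match(history_pose, n_star, n_sectors=60):
    """Combine the matched key frame's stored pose with the yaw implied by n*."""
    theta = 2.0 * np.pi * n_star / n_sectors         # theta = (2*pi / N_s) * n*
    yaw = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    init_pose = history_pose.copy()                  # 4x4 homogeneous pose
    init_pose[:3, :3] = history_pose[:3, :3] @ yaw   # apply the relative rotation
    return init_pose                                 # seed for the NDT matching
```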
In this embodiment, the normal distributions transform (NDT) algorithm is a registration algorithm, i.e. three-dimensional point cloud matching analogous to image matching. It applies a statistical model to the three-dimensional points and uses standard optimization techniques to determine the optimal match between the two point clouds. Compared with traditional approaches, it does not traverse all surrounding points to search for nearest neighbours during registration; instead, the surrounding area is divided into cells, the neighbouring points are represented by each cell, and the distribution of points within each cell is represented probabilistically, which greatly increases the algorithm's speed.
The NDT algorithm is a distribution matching method based on probabilistic point clouds. The point cloud to be matched and the point cloud data of the current frame are evenly divided into a number of cubic grids, each cubic grid carrying a certain number of points, according to the following formulas:

$$X = \{x_1, x_2, \dots, x_m\}$$

$$Y = \{y_1, y_2, \dots, y_n\}$$

where X and Y are the point sets obtained by dividing the point cloud to be matched and the current point cloud into cubic grids.

According to the point cloud data in each point set of the point cloud to be matched, the mean and covariance of each point set are calculated as follows:

$$\mu = \frac{1}{m} \sum_{i=1}^{m} x_i$$

$$\Sigma = \frac{1}{m-1} \sum_{i=1}^{m} (x_i - \mu)(x_i - \mu)^{T}$$

where $\mu$ is the mean of each point set in the point cloud to be matched and $\Sigma$ is the covariance of each point set in the point cloud to be matched.

After the mean and covariance of each point set of the point cloud to be matched are obtained, the current point cloud is rotated and translated according to a manually given predicted pose to obtain the predicted point cloud; the joint probability of the predicted point cloud's point sets can then be computed in the same coordinate system as the matched point cloud, according to the following formulas:

$$y'_i = T(p, y_i) = R y_i + t$$

$$f(X, y'_i) = \frac{1}{\sqrt{(2\pi)^3 \lvert \Sigma \rvert}} \exp\!\left( -\frac{(y'_i - \mu)^{T} \Sigma^{-1} (y'_i - \mu)}{2} \right)$$

$$\psi = \prod_{i} f(X, y'_i)$$

where $y'_i$ is a point of the predicted point cloud, obtained by applying the manually given predicted rotation R and predicted displacement t to the current frame point cloud;
$f(X, y'_i)$ is the joint probability of the predicted point with respect to the corresponding point set in X;
$\psi$ is the joint probability over all point sets of the predicted point cloud.
When the predicted pose T(R, t) is found such that the value of $\psi$ is maximal, the NDT point cloud matching succeeds and accurate laser odometry information is obtained.
Therefore, the parameters to be solved by the NDT algorithm are T(R, t), and the objective function is:

$$T(R, t) = \underset{R,\, t}{\arg\max}\; \psi$$
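As a hedged illustration of this objective (not the patent's implementation), the sketch below builds the per-voxel Gaussian statistics of the point cloud to be matched and evaluates log ψ for a candidate pose (R, t). A practical system would maximize this with Newton's method or an existing NDT implementation rather than a naive evaluation; the 1 m voxel size and the minimum of 5 points per cell are arbitrary example values.

```python
import numpy as np

def ndt_voxel_stats(map_points, voxel_size=1.0):
    """Mean and covariance of the points falling in each cubic grid cell."""
    keys = np.floor(map_points / voxel_size).astype(int)
    stats = {}
    for key in set(map(tuple, keys)):
        pts = map_points[np.all(keys == key, axis=1)]
        if len(pts) >= 5:                              # need enough points per cell
            stats[key] = (pts.mean(axis=0), np.cov(pts.T))   # (mu, Sigma)
    return stats

def ndt_log_score(stats, cur_points, R, t, voxel_size=1.0):
    """log(psi): sum of log f(X, y'_i) over the transformed current points."""
    score = 0.0
    for y in cur_points:
        y_prime = R @ y + t                            # y'_i = R y_i + t
        key = tuple(np.floor(y_prime / voxel_size).astype(int))
        if key not in stats:
            continue                                   # point fell in an empty cell
        mu, sigma = stats[key]
        sigma_r = sigma + 1e-6 * np.eye(3)             # regularize the covariance
        diff = y_prime - mu
        norm = np.sqrt((2.0 * np.pi) ** 3 * np.linalg.det(sigma_r))
        score += -0.5 * diff @ np.linalg.solve(sigma_r, diff) - np.log(norm)
    return score                                       # maximize this over (R, t)
```

Maximizing this score over (R, t), seeded with the Scan Context coarse pose, corresponds to the `ndt_refine` placeholder used in the earlier pipeline sketch.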
the specific implementation and application are as follows:
the improved NDT algorithm added with the repositioning is adopted to carry out experimental tests respectively, the experimental platform is a patrol robot of a three-dimensional laser radar sensor arranged on the velodyne-16, the positioning algorithm is operated on an industrial personal computer, and the industrial personal computer is provided with an Intel i9-10850CPU processor, and the memory is 4GB. The industrial personal computer is provided with a Linux (Ubuntu18.04) operating system and a Melodic ROS version, and the industrial personal computer is tested in an outdoor environment of a certain transformer substation. The resulting graph is shown in fig. 3 before adding the relocation algorithm.
As shown in fig. 3, the whole observation process is represented by a comparison between the point cloud registration laser odometry and the actual path. After a disturbance signal is injected into the currently estimated pose, the point cloud registration result deviates further and further from the original path until it moves away from the map and observation can no longer continue. This is because, once the error occurs, there is no suitable feedback measure, errors accumulate, and the observed result differs greatly from the actual path.
With the repositioning algorithm enabled, when a disturbance signal is injected into the current positioning pose, the positioning deviation is pulled back to the original path in a short time (150 ms), as shown in fig. 4, ensuring the real-time accuracy of the observed path.
As can be seen from the point cloud map in fig. 4, when the signal deviates at a certain position along the running path, the repositioning algorithm corrects the path positioning in time, so a correct observation result is obtained. The correction over the whole path is shown in fig. 5, where the yellow path is the path obtained by the robot using laser positioning and the blue path is the robot's real trajectory.
Comparing the views of fig. 5 from different angles, the positioning paths without repositioning in fig. 5 (a) and (c) collapse entirely after the first disturbance signal, whereas fig. 5 (b) and (d) show that the positioning path with repositioning can still be quickly pulled back even when subjected to multiple disturbance signals.
To study how the parameters of the Scan Context algorithm affect the repositioning process, comparative experiments were conducted with different values of $L_{\max}$, where $L_{\max}$ is the maximum partition radius chosen by the Scan Context. Sets of experiments were run with $L_{\max} = 5$, $L_{\max} = 10$, and $L_{\max} = 15$, with five experiments in each set. In each experiment a disturbance signal was injected manually while the inspection robot was positioning itself, so that the robot lost its localization, and the time required by the improved NDT algorithm (i.e., the repositioning algorithm) to recover was measured for each of the three settings, as shown in Table 1 below.
Table 1 Comparison of repositioning recovery speed for different parameters (the table data are provided as an image in the original publication)
As can be seen from Table 1, since $L_{\max}$ is the maximum area radius selected by the Scan Context, too small a value of $L_{\max}$ may mean that the Scan Context matrix cannot adequately reflect the surrounding features, so repositioning does not give good results. $L_{\max}$ has a large influence on the timeliness of repositioning, and choosing an appropriate $L_{\max}$ can greatly increase the efficiency and success rate of repositioning.
In this method, a repositioning (Scan Context) algorithm is added to the NDT positioning and matching algorithm, so that the robot can quickly initialize its position anywhere in the substation, and when positioning information is lost due to noise or disturbance signals during positioning, automatic repositioning can be achieved rapidly. In addition, the study of how the main repositioning parameter $L_{\max}$ affects repositioning shows that too small a value of $L_{\max}$ increases the time taken by the repositioning process. Field tests at the substation show that the repositioning scheme of the inspection robot is effective and feasible.
The above additional technical features can be freely combined and superimposed by a person skilled in the art without conflict.
It will be understood that the invention has been described in terms of several embodiments, and that various changes and equivalents may be made to these features and embodiments by those skilled in the art without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (10)

1. A repositioning method for a substation inspection robot, characterized by comprising the following steps:
S1, applying Scan Context algorithm processing to the key frames acquired when the substation map is built, to obtain a database;
S2, applying Scan Context algorithm processing to the current point cloud and coarsely matching it against the database to obtain an initial pose;
S3, inputting the obtained initial pose into an NDT algorithm for precise matching to obtain the target pose.
2. The substation inspection robot repositioning method according to claim 1, characterized in that: the initial pose includes one or more of a rotation amount θ and the position of the history point cloud.
3. The substation inspection robot repositioning method according to claim 1, characterized in that: the Scan Context algorithm processing comprises the steps of partitioning point cloud, generating Scan Context, matching based on the Scan Context and calculating relative pose.
4. The substation inspection robot repositioning method according to claim 3, characterized in that partitioning the point cloud comprises:
dividing the point cloud space into $N_r$ rings along the direction of increasing radius, the width of each ring being:

$$\frac{L_{\max}}{N_r}$$

where $L_{\max}$ is the farthest distance covered by the entire area;

cutting each ring into $N_s$ sectors, after which the partitioned laser point set can be re-expressed as:

$$\mathcal{P} = \bigcup_{i \in [N_r],\, j \in [N_s]} \rho_{ij}$$

where $\mathcal{P}$ is the full laser point set and $\rho_{ij}$ denotes the set of points in the partition unit of the j-th sector of the i-th ring.
5. The substation inspection robot repositioning method according to claim 4, characterized in that: $L_{\max}$ is in the range of 5 to 15.
6. The substation inspection robot repositioning method according to claim 3, characterized in that: generating the Scan Context includes mapping the partitioned point cloud to an $N_r \times N_s$ matrix to obtain the Scan Context; each row of the matrix represents a ring, each column represents a sector, and the value of each element in the matrix is the maximum height of all three-dimensional points in the partition unit $\rho_{ij}$.
7. The substation inspection robot repositioning method according to claim 3, characterized in that the Scan Context based matching includes:
the distance function between two frames' Scan Contexts is defined as follows:

$$d(I^{cur}, I^{his}) = \frac{1}{N_s} \sum_{j=1}^{N_s} \left( 1 - \frac{c_j^{cur} \cdot c_j^{his}}{\lVert c_j^{cur} \rVert \, \lVert c_j^{his} \rVert} \right)$$

where $I^{cur}$ is the Scan Context of the current point cloud, $I^{his}$ is the Scan Context of the history point cloud, $c_j^{cur}$ is the j-th column of the current point cloud's Scan Context, and $c_j^{his}$ is the j-th column of the history point cloud's Scan Context;

the history point cloud Scan Context $I^{his}$ to be matched is translated column by column to obtain $N_s$ Scan Contexts, each of which is in turn compared with the current point cloud Scan Context $I^{cur}$ by computing the distance, and the smallest distance is selected as the result, expressed as:

$$D(I^{cur}, I^{his}) = \min_{n \in [N_s]} d(I^{cur}, I_n^{his})$$
8. The substation inspection robot repositioning method according to claim 3, characterized in that calculating the relative pose includes:
after the Scan Context of the current point cloud is matched to the Scan Context of a history point cloud, the position of the history point cloud is obtained, together with the column translation vector $n^*$ at which $D(I^{cur}, I^{his})$ is attained:

$$n^* = \underset{n \in [N_s]}{\arg\min}\; d(I^{cur}, I_n^{his})$$

and from the column translation vector $n^*$ the rotation angle $\theta$ is obtained, expressed as:

$$\theta = \frac{2\pi}{N_s}\, n^*$$

the coarse matching pose formed by the rotation amount $\theta$ and the position of the history point cloud is taken as the initial pose for NDT matching, from which the precisely matched relative pose is obtained.
9. The substation inspection robot repositioning method according to claim 1, characterized in that the NDT algorithm includes:
the point cloud data to be matched and the point cloud data of the current frame are evenly divided into a number of cubic grids, each cubic grid carrying a certain number of points, according to the following formulas:

$$X = \{x_1, x_2, \dots, x_m\}$$

$$Y = \{y_1, y_2, \dots, y_n\}$$

where X and Y are the point sets obtained by dividing the point cloud to be matched and the current point cloud into cubic grids;

according to the point cloud data in each point set of the point cloud to be matched, the mean and covariance of each point set of the point cloud to be matched are calculated as follows:

$$\mu = \frac{1}{m} \sum_{i=1}^{m} x_i$$

$$\Sigma = \frac{1}{m-1} \sum_{i=1}^{m} (x_i - \mu)(x_i - \mu)^{T}$$

where $\mu$ is the mean of each point set in the point cloud to be matched and $\Sigma$ is the covariance of each point set in the point cloud to be matched;

after the mean and covariance of each point set of the point cloud to be matched are obtained, the current point cloud is rotated and translated according to a manually given predicted pose to obtain the predicted point cloud, and the joint probability of the predicted point cloud's point sets can then be computed in the same coordinate system as the matched point cloud, according to the following formulas:

$$y'_i = T(p, y_i) = R y_i + t$$

$$f(X, y'_i) = \frac{1}{\sqrt{(2\pi)^3 \lvert \Sigma \rvert}} \exp\!\left( -\frac{(y'_i - \mu)^{T} \Sigma^{-1} (y'_i - \mu)}{2} \right)$$

$$\psi = \prod_{i} f(X, y'_i)$$

where $y'_i$ is a point of the predicted point cloud, obtained by applying the manually given predicted rotation R and predicted displacement t to the current frame point cloud;
$f(X, y'_i)$ is the joint probability of the predicted point with respect to the corresponding point set in X;
$\psi$ is the joint probability over all point sets of the predicted point cloud;
when the predicted pose T(R, t) is found such that the value of $\psi$ is maximal, the NDT point cloud matching succeeds and accurate laser odometry information is obtained;
therefore, the parameters to be solved by the NDT algorithm are T(R, t), and the objective function is:

$$T(R, t) = \underset{R,\, t}{\arg\max}\; \psi$$
10. the utility model provides a transformer substation inspection robot which characterized in that: a substation inspection robot repositioning method according to any of claims 1-9.
CN202211049224.2A 2022-08-30 2022-08-30 Substation inspection robot and repositioning method thereof Pending CN116229119A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211049224.2A CN116229119A (en) 2022-08-30 2022-08-30 Substation inspection robot and repositioning method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211049224.2A CN116229119A (en) 2022-08-30 2022-08-30 Substation inspection robot and repositioning method thereof

Publications (1)

Publication Number Publication Date
CN116229119A true CN116229119A (en) 2023-06-06

Family

ID=86583028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211049224.2A Pending CN116229119A (en) 2022-08-30 2022-08-30 Substation inspection robot and repositioning method thereof

Country Status (1)

Country Link
CN (1) CN116229119A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113177974A (en) * 2021-05-19 2021-07-27 上海商汤临港智能科技有限公司 Point cloud registration method and device, electronic equipment and storage medium
CN113432600A (en) * 2021-06-09 2021-09-24 北京科技大学 Robot instant positioning and map construction method and system based on multiple information sources
CN113379915A (en) * 2021-07-05 2021-09-10 广东工业大学 Driving scene construction method based on point cloud fusion
CN113763551A (en) * 2021-09-08 2021-12-07 北京易航远智科技有限公司 Point cloud-based rapid repositioning method for large-scale mapping scene
CN113778099A (en) * 2021-09-16 2021-12-10 浙江大学湖州研究院 Unmanned ship path planning method based on NDT algorithm and Hybrid A algorithm
CN114266821A (en) * 2021-11-26 2022-04-01 深圳市易成自动驾驶技术有限公司 Online positioning method and device, terminal equipment and storage medium
CN114283250A (en) * 2021-12-23 2022-04-05 武汉理工大学 High-precision automatic splicing and optimizing method and system for three-dimensional point cloud map
CN114862932A (en) * 2022-06-20 2022-08-05 安徽建筑大学 BIM global positioning-based pose correction method and motion distortion correction method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
GISEOP KIM et al.: "Scan Context: Egocentric spatial descriptor for place recognition within 3D point cloud map", 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 31 October 2018 (2018-10-31) *
PETER BIBER et al.: "The Normal Distributions Transform: A New Approach to Laser Scan Matching", Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, 31 December 2003 (2003-12-31) *
WANG KANG: "Deep-learning-based panoramic image and point cloud fusion object detection and localization ***", China Master's Theses Full-text Database, Engineering Science and Technology II, 15 October 2021 (2021-10-15), pages 23-24 *
WANG CHAO: "Research on lidar-based three-dimensional localization and reconstruction mapping technology for tunnel mobile robots", China Master's Theses Full-text Database, Information Science and Technology, 15 June 2022 (2022-06-15) *

Similar Documents

Publication Publication Date Title
CN111429574B (en) Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion
Wang et al. F-loam: Fast lidar odometry and mapping
CN111459166B (en) Scene map construction method containing trapped person position information in post-disaster rescue environment
Li et al. Efficient laser-based 3D SLAM for coal mine rescue robots
CN102426019B (en) Unmanned aerial vehicle scene matching auxiliary navigation method and system
JP2021152662A (en) Method and device for real-time mapping and location
US11790542B2 (en) Mapping and localization system for autonomous vehicles
CN112305559A (en) Power transmission line distance measuring method, device and system based on ground fixed-point laser radar scanning and electronic equipment
EP4160474A1 (en) Robot relocalization method and apparatus, and storage medium and electronic device
CN111862200B (en) Unmanned aerial vehicle positioning method in coal shed
Liang et al. A novel skyline context descriptor for rapid localization of terrestrial laser scans to airborne laser scanning point clouds
Dickenson et al. Rotated rectangles for symbolized building footprint extraction
CN113759928B (en) Mobile robot high-precision positioning method for complex large-scale indoor scene
Javanmardi et al. Autonomous vehicle self-localization based on probabilistic planar surface map and multi-channel LiDAR in urban area
CN116222579B (en) Unmanned aerial vehicle inspection method and system based on building construction
CN112733971A (en) Pose determination method, device and equipment of scanning equipment and storage medium
CN116229119A (en) Substation inspection robot and repositioning method thereof
CN116295426A (en) Unmanned aerial vehicle autonomous exploration method and device based on three-dimensional reconstruction quality feedback
CN116127405A (en) Position identification method integrating point cloud map, motion model and local features
Huang et al. Research on lidar slam method with fused point cloud intensity information
Gu et al. A spatial alignment method for UAV LiDAR strip adjustment in nonurban scenes
Li et al. Efficient laser-based 3D SLAM in real time for coal mine rescue robots
Serrano et al. YOLO-Based Terrain Classification for UAV Safe Landing Zone Detection
Cai et al. Positioning and Mapping Technology for Substation Inspection Robot Based on Gaussian Projection
CN111709432B (en) InSAR ground point extraction method, device, server and storage medium in complex urban environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination