CN115542362A - High-precision space positioning method, system, equipment and medium for electric power operation site

Info

Publication number
CN115542362A
Authority
CN
China
Prior art keywords
positioning
power operation
uwb
model
moving target
Prior art date
Legal status
Pending
Application number
CN202211524412.6A
Other languages
Chinese (zh)
Inventor
谢晓娜
常政威
陈明举
邓元实
熊兴中
谢正军
蒲维
吴杰
丁宣文
张葛祥
张江林
刘甲甲
王振玉
徐智勇
Current Assignee
Chengdu University of Information Technology
Electric Power Research Institute of State Grid Sichuan Electric Power Co Ltd
Original Assignee
Chengdu University of Information Technology
Electric Power Research Institute of State Grid Sichuan Electric Power Co Ltd
Priority date
2022-12-01
Filing date
2022-12-01
Publication date
2022-12-30
Application filed by Chengdu University of Information Technology, Electric Power Research Institute of State Grid Sichuan Electric Power Co Ltd filed Critical Chengdu University of Information Technology
Priority to CN202211524412.6A priority Critical patent/CN115542362A/en
Publication of CN115542362A publication Critical patent/CN115542362A/en
Pending legal-status Critical Current

Classifications

    • G01S19/45: Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/46: Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement, the supplementary measurement being of a radio-wave signal type
    • G01C21/005: Navigation; navigational instruments with correlation of navigation data from several sources, e.g. map or contour matching
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06V20/625: License plates
    • H04W4/33: Services specially adapted for indoor environments, e.g. buildings
    • H04W64/00: Locating users or terminals or network equipment for network management purposes, e.g. mobility management

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a high-precision space positioning method, system, equipment and medium for an electric power operation site, belonging to the technical field of space positioning. The method comprises: obtaining the positioning of a power operation moving target based on a visual positioning technology; performing image analysis on the target according to that positioning to obtain its characteristic information and construct a three-dimensional model of the target; and refining the three-dimensional model according to the characteristic information to obtain a high-precision three-dimensional model whose positioning is updated in the three-dimensional scene in real time. By fitting UWB or Beidou positioning tags to dynamic targets and performing image processing on top of the moving-target positioning, fine reconstruction of the moving targets in a three-dimensional scene is achieved, so that real-time position information of each dynamic target is obtained and the position state of its model is updated in the scene in real time.

Description

High-precision space positioning method, system, equipment and medium for electric power operation site
Technical Field
The invention relates to the technical field of space positioning, and in particular to a high-precision space positioning method, system, equipment and medium for an electric power operation site.
Background
Research on vision-based target detection, recognition and tracking algorithms is a hotspot in the field of artificial intelligence and has advanced greatly with the wide application of deep learning in recent years. Deep learning is gradually being applied in electric power production environments, where it already recognizes targets such as insulators, overhead arcs, foreign objects in substations and equipment temperature anomalies; however, substation scenes are complex, and recognition of the key targets in every link of electric power operation still needs deeper study.
Fusing real-scene video information into a three-dimensional virtual model is a branch of virtual reality technology, or a further development stage of it. Three-dimensional video fusion matches and fuses one or more camera image sequences with the related three-dimensional virtual scene to generate a new dynamic virtual scene or model, realizing the fusion of the virtual scene with real-time video. The technology can support fusion of small-range or local three-dimensional scenes and videos on an independent three-dimensional engine, or wide-area visual fusion of three-dimensional geographic information on a three-dimensional geographic information system.
In the electric power operation field, accurate and effective positioning of operators is an important basis for preventing accidents and ensuring worker safety. At present, operator positioning on an electric power operation site is realized with GPS, the Beidou positioning system and UWB positioning, but accurate spatial position information cannot be extracted by relying on these systems alone; meanwhile, occlusion, changes in target posture, illumination and similar conditions on the site also degrade the positioning of operation targets.
In summary, research on vision-based target detection, recognition and tracking algorithms has made progress, but coping with occlusion, target posture change, illumination and similar problems remains the focus of research. The video image data processed by existing visual positioning algorithms is a projection of the target from three-dimensional space into two-dimensional space, from which accurate spatial position information cannot be extracted.
The problem of the prior art is as follows:
the video image data processed by existing visual positioning algorithms for the electric power operation site is a projection of the target from three-dimensional space into two-dimensional space, and accurate spatial position information of the three-dimensional space cannot be extracted.
Disclosure of Invention
The technical problem to be solved by the application is that the video image data processed by existing visual positioning algorithms for the electric power operation field is a projection of the target from three-dimensional space into two-dimensional space, and accurate spatial position information of the three-dimensional space cannot be extracted.
The application is realized by the following technical scheme:
a first aspect of the present application provides a high-precision spatial positioning method for an electric power operation site, comprising
S1, acquiring the positioning of a power operation moving target based on a visual positioning technology; the UWB auxiliary visual positioning technology is adopted for indoor electric power operation moving targets, and the Beidou auxiliary visual positioning technology is adopted for outdoor electric power operation moving targets; the UWB-assisted visual positioning technology comprises UWB-assisted passive video positioning;
s2, carrying out image analysis on the power operation moving target according to the positioning of the power operation moving target to obtain characteristic information of the power operation moving target and construct a three-dimensional model of the power operation moving target;
and S3, refining the three-dimensional model of the power operation moving target according to the characteristic information to obtain a high-precision three-dimensional model and updating the positioning of the high-precision three-dimensional model in the three-dimensional scene in real time.
In the above technical scheme, a UWB-assisted visual positioning technology is adopted for indoor power operation moving targets and a Beidou-assisted visual positioning technology for outdoor ones. By combining UWB positioning, Beidou positioning and visual positioning, the positioning of the power operation moving target in three-dimensional space can be acquired, interference and occlusion problems on the power operation site are reduced, and centimetre-level positioning is realized. A UWB tag or Beidou positioning tag is worn on the power operation moving target; image analysis is performed on top of the moving-target positioning, realizing fine reconstruction of the moving target in the three-dimensional scene, so that its real-time position information is obtained, the position state of the moving-target model is updated in the scene in real time, and accurate positioning of the power operation moving target in three-dimensional space is achieved.
In an alternative embodiment, the UWB-assisted passive video positioning method comprises:
positioning the power operation moving target through a binocular vision system formed by two cameras to obtain the target's position information in space (see the triangulation sketch after this list);
positioning the power operation moving target based on the UWB model to obtain UWB positioning information of the power operation moving target;
and integrating the position information of the power operation moving target in the space and the UWB positioning information of the power operation moving target to obtain the positioning of the power operation moving target.
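To make the binocular step concrete, the sketch below recovers a target's spatial position from one matched pixel pair under an idealized rectified stereo model; the focal length, baseline and pixel values are hypothetical and not taken from the patent.

```python
import numpy as np

def triangulate_rectified(u_left, u_right, v, f_px, baseline_m, cx, cy):
    """Depth from disparity for a rectified stereo pair (pinhole model).

    u_left, u_right: columns of the matched feature in each image (pixels);
    v: shared row of the feature (pixels); f_px: focal length (pixels);
    baseline_m: distance between the camera centres (metres);
    (cx, cy): principal point (pixels).
    Returns (X, Y, Z) in the left-camera frame, in metres.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    Z = f_px * baseline_m / disparity   # depth along the optical axis
    X = (u_left - cx) * Z / f_px        # lateral offset
    Y = (v - cy) * Z / f_px             # vertical offset
    return np.array([X, Y, Z])

# hypothetical rig: 700 px focal length, 0.12 m baseline, 640 x 480 images
print(triangulate_rectified(352.0, 331.0, 251.0,
                            f_px=700.0, baseline_m=0.12, cx=320.0, cy=240.0))
```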
In an alternative embodiment, the method for integrating the position information of the power operation moving object in the space and the UWB positioning information of the power operation moving object is as follows:
taking the position and heading measured by the binocular vision system and the position measured by the UWB model as observed values, and establishing a UWB/vision fusion observation equation; the equation (shown here in its general stacked form, the original appearing only as an equation image) is

[p_v; p_u] = H(θ)·x + [e_v; e_u]

where p_v denotes the plane coordinates of the binocular vision system's visual measurement, p_u the plane coordinates of the UWB measurement, e_v the position measurement error of the binocular vision system, e_u the error in the UWB position measurement, θ the deflection angle between the plane coordinates measured by the binocular vision system and those measured by UWB, and H the intermediate matrix.
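A minimal sketch of how such a stacked observation could be assembled, assuming the general linearized form recovered above; the state layout [x, y, v, ψ, λ, α] follows the fusion model described later in the embodiments, the scale and deflection states are left out of the linearization for brevity, and all numbers are illustrative.

```python
import numpy as np

def fusion_observation(p_vis, heading_vis, p_uwb):
    """Stack the binocular-vision position/heading and the UWB position into
    one observation vector z with design matrix H over the state
    x = [x, y, v, psi, lam, alpha] (plane position, speed, heading,
    visual scale ambiguity, vision/UWB frame deflection angle)."""
    z = np.array([p_vis[0], p_vis[1], heading_vis, p_uwb[0], p_uwb[1]])
    H = np.zeros((5, 6))
    H[0, 0] = H[1, 1] = 1.0   # vision observes the plane coordinates
    H[2, 3] = 1.0             # vision observes the heading psi
    H[3, 0] = H[4, 1] = 1.0   # UWB observes the same plane coordinates
    return z, H

z, H = fusion_observation(p_vis=(3.2, 1.7), heading_vis=0.41, p_uwb=(3.3, 1.6))
print(z, H.shape)
```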
In an alternative embodiment, the UWB model is constructed as follows:
establishing an improved robust EKF model and adopting it as the standard model of the UWB model;
judging by a statistical method whether gross errors exist in the improved robust EKF model (a sketch of one such test follows);
if gross errors exist, invoking the robust EKF model as the improved robust EKF model;
if no gross errors exist, invoking the standard EKF model as the improved robust EKF model.
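The description does not spell out the statistical test; a common choice is a chi-square test on the normalized innovation, sketched here under that assumption.

```python
import numpy as np
from scipy.stats import chi2

def has_gross_error(innovation, S, alpha=0.01):
    """Flag a gross error when the normalized innovation squared (NIS)
    exceeds the chi-square (1 - alpha) quantile.
    innovation: z - H @ x_pred; S: innovation covariance H P H^T + R."""
    nis = float(innovation @ np.linalg.solve(S, innovation))
    return nis > chi2.ppf(1.0 - alpha, df=innovation.size)

# the switch described above: robust EKF when gross errors are present,
# the standard EKF otherwise
def select_model(innovation, S, robust_ekf, standard_ekf):
    return robust_ekf if has_gross_error(innovation, S) else standard_ekf
```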
In an optional embodiment, the positioning method of the Beidou auxiliary visual positioning technology comprises the following steps:
acquiring a region of interest based on a Kalman filtering model and the predicted position of the operation boundary line;
carrying out image edge detection on the region of interest to obtain a positioning region;
constructing a multi-view vision measurement model through a multi-view vision sensor, and constructing a multi-view vision coordinate in a positioning area through the multi-view vision measurement model;
optimizing the multi-view visual coordinate constructed in the positioning area through the multi-view visual measurement model based on a weighted LM algorithm to obtain an optimized multi-view visual coordinate;
based on the Beidou positioning technology, the optimized multi-view visual coordinate is converted into the coordinate of a global positioning system.
In an alternative embodiment, based on the Kalman filtering model, the method for dynamically acquiring the region of interest through the multiple vision sensors is as follows:
predicting the current straight-line position based on Kalman filtering to obtain a first dynamic region of interest (see the sketch after this list);
predicting the column number of the operation boundary line in image coordinates based on a projection method, and obtaining a second dynamic region of interest by taking that column number as a reference and combining the current camera calling condition.
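A sketch of the first dynamic region of interest: a one-dimensional constant-velocity Kalman prediction of the boundary line's image column, widened into a pixel band; the state layout, noise level and margins are assumptions for illustration.

```python
import numpy as np

def predict_roi(col, col_rate, P, q=1.0, margin_px=40, img_width=1280):
    """Predict the work-boundary column one frame ahead with a
    constant-velocity model and cut an ROI band around the prediction.
    State: [column, column rate]; P: its covariance; q: process noise."""
    F = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    x_pred = F @ np.array([col, col_rate])
    P_pred = F @ P @ F.T + q * np.eye(2)
    half = margin_px + 3.0 * np.sqrt(P_pred[0, 0])   # widen by 3 sigma
    lo = max(0, int(x_pred[0] - half))
    hi = min(img_width, int(x_pred[0] + half))
    return (lo, hi), x_pred, P_pred

roi, x_pred, P_pred = predict_roi(col=512.0, col_rate=-3.0, P=np.eye(2) * 4.0)
print(roi)    # column band in which to search for the boundary line
```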
In an alternative embodiment, based on the weighted LM algorithm, the method for optimizing the multi-view vision coordinates constructed in the positioning area by the multi-view vision measurement model is as follows:
carrying out nonlinear optimization on image coordinates obtained after the object points are transformed to the ith multi-view vision sensor by adopting the minimized reprojection error;
normalizing the distance from the multi-view vision sensor to the object, and converting the reciprocal of the distance after normalization into a weighting factor;
and constructing an objective function through the weighting factors, and substituting the multi-view visual coordinates into the objective function to calculate to obtain optimized multi-view visual coordinates.
A second aspect of the present application provides a high-precision space positioning system for an electric power operation site, comprising:
a UWB-assisted video positioning module for acquiring the positioning of an indoor power operation moving target, comprising an initialization unit for initializing environmental images and IMU data, a vision/inertia combination unit for acquiring vision coordinates, and an ultra-wideband unit for acquiring UWB coordinates; and
a Beidou-assisted visual positioning module for acquiring the positioning of an outdoor power operation moving target, comprising a dynamic region-of-interest module for reducing the influence of environmental factors on the extraction of the operation boundary line, a multi-view vision module for constructing multi-view vision region coordinates, and a Beidou positioning module for converting the multi-view vision region coordinates into global positioning system coordinates.
A third aspect of the present application provides an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the high-precision space positioning method for the electric power operation site when executing the program.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the high-precision space positioning method for the electric power operation site.
Compared with the prior art, the application has the following advantages and beneficial effects:
the UWB or Beidou positioning tag is matched on the dynamic target, the image processing is carried out on the dynamic target on the basis of the positioning of the moving target, the fine reconstruction of the moving target in a three-dimensional scene is realized, the real-time position information of the dynamic target is obtained, the position state of a dynamic target model is updated in the three-dimensional scene in real time, and the error between the reconstructed dynamic model and the actual size is proved to be less than 1% through verification experiments.
Drawings
In order to more clearly illustrate the technical solutions of the exemplary embodiments of the present invention, the drawings that are required in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and that those skilled in the art may also derive other related drawings based on these drawings without inventive effort. In the drawings:
fig. 1 is a schematic flowchart of a high-precision spatial positioning method for an electric power work site according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of coordinate transformation for multi-view vision integration Beidou positioning according to an embodiment of the present application;
FIG. 3 is a diagram illustrating an assisted navigation tracking result in a test field at a speed of 0.4m/s according to an embodiment of the present application;
FIG. 4 is a diagram illustrating an assisted navigation tracking result in a test field at a speed of 0.8m/s according to an embodiment of the present application;
fig. 5 is a graph of the result of assisted navigation tracking in a test field at a speed of 1.2m/s according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
Example 1
The embodiment provides a high-precision space positioning method for an electric power operation site, wherein the flow of the method is shown in fig. 1, and the method comprises the following steps:
s1, acquiring positioning information of the power operation moving target based on a visual positioning technology.
The UWB auxiliary visual positioning technology is adopted for indoor electric power operation moving targets, and the Beidou auxiliary visual positioning technology is adopted for outdoor electric power operation moving targets; UWB-assisted visual positioning techniques include UWB-assisted active video positioning and UWB-assisted passive video positioning.
For wearable devices not equipped with a vision system, positioning of the moving target can be realized by combining the cameras installed in the substation with UWB technology.
Wherein, an improved robust EKF model is adopted as a standard model of the UWB model.
In the improved robust EKF positioning model, the robust EKF gain matrix is constructed by re-weighting the standard EKF gain. (The formulas of this passage appear only as equation images in the source; the quantities they relate are the following.) k0 and k1 are robust parameters, with k0 taken in the range 2.5 to 3.5 and k1 in the range 3.5 to 4.5; K is the gain matrix of the EKF model, and the gain matrix after robust optimization is obtained from K and the residual correlation threshold. m denotes the observation vector dimension, and for the i-th observation, v_i, r_i and σ_i denote the prediction residual, the redundant observation component and the measurement standard deviation respectively. The redundant observation component is expressed through the covariance matrix of the residual vector and the weight matrix of the observed values.
The state prediction value of each update iteration is given by the iteration number and the state number: the state prediction of the iteration at time t is determined by the state filter value at time t-1 and its prediction residual, with a computed assignment produced at the end of each iteration from the predicted value at the initial time, the distribution characteristic matrix and the observed measurement value. From these quantities an equivalent gain matrix is calculated, which yields the robust filter value. If the difference between the computed value at the end of the iteration at time t and the value predicted from time t-1 is smaller than a given limit, the iteration ends; when t = 1, the iteration is initialized with the assignment of the standard EKF at time k. The posterior covariance matrix is then formed from the final equivalent Kalman filter gain matrix at the end of the iteration and the covariance matrix, where n denotes the dimension of the state vector.
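The surviving definitions (robust parameters k0 in 2.5 to 3.5 and k1 in 3.5 to 4.5, standardized prediction residuals, redundant observation components) match the widely used IGG-III three-segment weighting scheme; the sketch below implements that scheme as a plausible reading of the re-weighting step, not as the patent's verbatim formula.

```python
import numpy as np

def igg3_weight(v, sigma, r, k0=3.0, k1=4.0):
    """IGG-III three-segment robust weight for one observation.
    v: prediction residual; sigma: measurement standard deviation;
    r: redundant observation component; k0, k1: robust parameters
    (taken in 2.5-3.5 and 3.5-4.5 respectively in the description)."""
    t = abs(v) / (sigma * np.sqrt(r))          # standardized residual
    if t <= k0:
        return 1.0                             # keep the observation
    if t <= k1:                                # down-weight suspect ones
        return (k0 / t) * ((k1 - t) / (k1 - k0)) ** 2
    return 0.0                                 # reject gross errors

def robust_gain(K, residuals, sigmas, redundancies, k0=3.0, k1=4.0):
    """Scale each column of the EKF gain K by its observation's weight,
    which suppresses the influence of observations with gross errors."""
    w = np.array([igg3_weight(v, s, r, k0, k1)
                  for v, s, r in zip(residuals, sigmas, redundancies)])
    return K * w[np.newaxis, :]
```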
In order to improve the positioning accuracy, a vision/UWB fusion EKF positioning model considering a visual scale factor and an initial direction is constructed. Its state and measurement equations take the standard form

x_t = Φ·x_{t-1} + w_t,    z_t = H·x_t + v_t

where w and v are independent zero-mean Gaussian noise processes with covariance matrices Q and R respectively, Φ is the state transition matrix and H is the distribution characteristic matrix. The state vector contains the plane coordinates, the pedestrian speed, the movement direction angle, the visual scale ambiguity, and the deflection angle θ between the visually computed plane coordinates and the UWB-computed plane coordinates. According to the error equations of vision and UWB, the corresponding state model follows; its explicit matrix form appears only as an equation image in the source.

If the visually measured position and heading and the UWB-measured position are taken as observations, the UWB/vision fusion observation equation can be expressed, as above, by stacking the two measurements:

[p_v; p_u] = H(θ)·x + [e_v; e_u]

where p_v denotes the plane coordinates of the binocular vision system's visual measurement, p_u the plane coordinates of the UWB measurement, e_v the position measurement error of the binocular vision system, e_u the error in the measurement of the UWB position, and θ the deflection angle between the plane coordinates measured by the binocular vision system and those measured by UWB.
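Putting the state model and the observation equation together, one predict/update cycle of the fusion filter might look as follows; the matrices Φ, Q and R are supplied by the caller, and the six-dimensional state layout is the one described above. The robust variant replaces K with the re-weighted gain from the IGG-III sketch when the gross-error test fires.

```python
import numpy as np

def ekf_step(x, P, z, H, Phi, Q, R):
    """One EKF cycle for the vision/UWB fusion model.
    x = [x, y, v, psi, lam, alpha]; z = [x_vis, y_vis, psi_vis, x_uwb, y_uwb]."""
    # prediction
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Q
    # update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    innovation = z - H @ x_pred
    x_new = x_pred + K @ innovation
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new
```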
In an optional embodiment, a Beidou-assisted visual positioning technology can be adopted for outdoor power operation moving targets. Limited by the Beidou positioning accuracy and by the movement and geographic environment of field construction equipment, only metre-level positioning can currently be realized by relying on Beidou alone, which cannot meet the centimetre-level requirement of dynamic power operation scenes. Therefore, the Beidou positioning technology is combined with a multi-view vision-assisted positioning technology to improve positioning accuracy.
As shown in FIG. 2, the coordinate system in the figure is the one established after the iteration of the optimized weighted LM algorithm. (The transformation matrices of this passage appear only as equation images in the source; the chain of coordinate frames is as follows.) A point P on the driving route has known coordinates (x_p, y_p) in that image coordinate system, and the ground point G has known coordinates in the same system. Taking G as origin, a new coordinate system is established; the coordinates of P in it are first obtained in pixels and then converted into coordinates (x_l, y_l) whose unit is the length m. With the fixed point C on the camera as origin, a camera coordinate system O_c is established; the horizontal distance l1 from C to the ground point G is measured as 2.5 m, from which the coordinates of P in O_c are known. The construction machine is fitted with a Beidou positioning and navigation device, from which the coordinates of the vehicle control point M in the geodetic plane coordinate system O and the heading angle of the vehicle are obtained by calculation. With M as origin, a coordinate system O_m is established; the distance between M and the fixed camera point C is l2, giving the coordinates of P in O_m. Using the coordinates (x_w, y_w) of M in the geodetic coordinate system O, the coordinates (x_o, y_o) of P in the world coordinate system are then obtained by calculation. In summary, knowing the image coordinates (x_p, y_p) of a point P on the route, its positioning-system coordinates (x_o, y_o) can be obtained by composing these transformations; a sketch of the chain follows.
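A sketch of the whole chain from an image point on the work boundary to geodetic plane coordinates, following the frames described above; the axis conventions, the pixel scale and the lever arm l2 are assumptions for illustration (only l1 = 2.5 m is given in the text).

```python
import numpy as np

def pixel_to_geodetic(p_px, g_px, m_per_px, l1, l2, M_geo, heading):
    """Chain a boundary point P from image pixels to the geodetic plane.
    p_px, g_px: P and the ground point G in the image (pixels);
    m_per_px: metric scale of the image plane; l1: horizontal camera-to-G
    distance (m); l2: control-point-to-camera distance (m);
    M_geo: Beidou coordinates (x_w, y_w) of the control point M;
    heading: vehicle heading angle (rad)."""
    # G-origin metric coordinates of P
    x_l = (p_px[0] - g_px[0]) * m_per_px
    y_l = (p_px[1] - g_px[1]) * m_per_px
    # camera frame O_c: offset by the measured horizontal distance l1
    x_c, y_c = x_l, y_l + l1
    # vehicle frame O_m: offset by the camera lever arm l2
    x_m, y_m = x_c, y_c + l2
    # rotate by the Beidou heading and translate to M's geodetic position
    c, s = np.cos(heading), np.sin(heading)
    x_o = M_geo[0] + c * x_m - s * y_m
    y_o = M_geo[1] + s * x_m + c * y_m
    return x_o, y_o

print(pixel_to_geodetic((420, 260), (400, 300), m_per_px=0.01,
                        l1=2.5, l2=1.2, M_geo=(503612.4, 3382740.8),
                        heading=0.12))
```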
In multi-view vision measurement systems, nonlinear optimization is usually performed by minimizing the reprojection error, i.e. the discrepancy between the image coordinates obtained after the object point is transformed to the i-th camera before adjustment and the corresponding coordinates after adjustment. When the parameters of each camera, the measurement environment and other factors are the same, the distance from the camera to the object point has an obvious influence on imaging noise: the farther the distance, the higher the noise, and conversely the smaller. When designing the objective function, the reciprocal of the camera-to-object-point distance is therefore converted into a weighting factor and, to make different cameras comparable, the distance information is normalized; the weighting factor of camera i is derived from the distance between the object point o and camera i, computed from the object-point coordinates and the coordinates of camera i. The final objective function sums the weighted squared reprojection errors over all cameras. Substituting the multi-view vision coordinates into the objective function and converting it into equation form yields sub-objective functions F_ui and F_vi, where x_w, y_w, z_w are the coordinates along the unit directions of three-dimensional space, u_i and v_i are intermediate variables of the iterative solution, and the remaining symbols are parameter coefficients. The calculation involves the derivatives of the objective function; the entries of the derived Jacobian matrix J are the first partial derivatives of F_ui and F_vi with respect to the x_w, y_w and z_w directions. Substituting J into the damped normal equations yields the increment ΔP, where μ is the updated damping coefficient. (The explicit formulas appear only as equation images in the source.)
Therefore, the coordinates constructed in the positioning area through the multi-view vision measurement model are optimized with the weighted LM algorithm; the specific calculation steps are as follows:
(1) Calculate an initial value of the object-point coordinates by the orthogonal projection method; set the iteration termination constant e, the actual descent-effect threshold ε and the iteration count k, and initialize μ;
(2) Using the current estimated world coordinates of the object point and the camera parameters, solve the Jacobian matrix J from the derivatives of the objective function and calculate μ;
(3) Calculate the increment ΔP;
(4) Calculate ρ and evaluate the current descent effect;
(5) If ρ < ε, set μ = 0.5μ and return to step (3); otherwise continue with step (6);
(6) If the increment is smaller than e, or the number of iterations is greater than or equal to k, stop the iteration and output the optimized result; otherwise return to step (2) for the next iteration.
A sketch of this loop follows.
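A compact sketch of steps (1) to (6), using the standard damped normal equations; the residual and Jacobian callables and the simple descent test stand in for the patent's exact expressions.

```python
import numpy as np

def weighted_lm(p0, residual_fn, jac_fn, weights,
                k=50, e=1e-8, eps=0.25, mu=1e-3):
    """Weighted Levenberg-Marquardt over object-point coordinates
    p = (x_w, y_w, z_w).  residual_fn(p) returns the stacked reprojection
    residuals, jac_fn(p) their Jacobian; weights hold one factor per
    residual component (each camera's factor repeated for u and v)."""
    p = np.asarray(p0, dtype=float)
    W = np.diag(weights)
    for _ in range(k):                                  # steps (2)-(6)
        r, J = residual_fn(p), jac_fn(p)
        A = J.T @ W @ J + mu * np.eye(p.size)           # damped normal matrix
        dp = np.linalg.solve(A, -J.T @ W @ r)           # step (3): increment
        r_new = residual_fn(p + dp)
        rho = r @ W @ r - r_new @ W @ r_new             # step (4): descent effect
        if rho < eps:                                   # step (5): poor step
            mu *= 0.5
            continue
        p = p + dp                                      # accept the step
        if np.linalg.norm(dp) < e:                      # step (6): converged
            break
    return p
```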
In order to analyze the real-time performance of the operation boundary line extraction algorithm, this embodiment measures, over 100 construction-process pictures, the time of each image processing step and the average time of the whole image processing process before and after the algorithm improvement. (The result table appears only as an image in the source; the key figures follow.)
As the timing data show, the average processing time of each image is shortened from 0.176087 s before the algorithm improvement to 0.064547 s afterwards, a reduction of 63.3%, so the processing time of the improved algorithm meets the navigation system's requirement on image processing time well.
The assisted-navigation tracking result in the test field at a speed of 0.4 m/s is shown in fig. 3; the accompanying result table appears only as an image in the source.
As fig. 3 and the recorded data show, when the construction machine performs straight-line work in the test site at 0.4 m/s with an initial deviation of 30 cm, the maximum lateral deviation after the lateral deviation first returns to 0 is 6.5 cm, the average is 0.49 cm and the standard deviation is 3.05 cm. The result shows that, even with a large initial deviation in the test field, the simulated track of the system and the actual position of the construction machine converge through continuous updating of Beidou positioning data and visual picture information. The longer the operation is tracked, the more stable the positioning accuracy of the simulated operation, and the assisted-navigation effect meets the requirements well.
The assisted-navigation tracking result in the test field at a speed of 0.8 m/s is shown in fig. 4; the accompanying result table appears only as an image in the source.
As fig. 4 and the recorded data show, when the construction machine performs straight-line work in the test site at 0.8 m/s with an initial deviation of 30 cm, the maximum lateral deviation after the lateral deviation first returns to 0 is 13.7 cm, the average is 0.82 cm and the standard deviation is 5.98 cm. The pose of the construction machine is adjusted under the large initial deviation so that it travels along the predicted track. The result shows that, even with a large initial deviation in the test field, the simulated track of the system and the actual position of the construction machine converge through continuous updating of Beidou positioning data and visual picture information. The longer the operation is tracked, the more stable the positioning accuracy of the simulated operation, and the assisted-navigation effect meets the design requirement well.
The assisted-navigation tracking result in the test field at a speed of 1.2 m/s is shown in fig. 5; the accompanying result table appears only as an image in the source.
As fig. 5 and the recorded data show, when the construction machine performs straight-line work in the test site at 1.2 m/s with an initial deviation of 30 cm, the maximum lateral deviation after the lateral deviation first returns to 0 is 19.2 cm, the average is 1.40 cm and the standard deviation is 7.06 cm. The result basically meets the design requirement of assisted navigation, although small but noticeable fluctuations occur during the run; subsequent analysis of the wheel tracks attributes them to brake steering of the construction machine while travelling, and this deviation does not affect the application effect of the Beidou-assisted visual positioning technology in the simulation system.
The field test results show that when the construction machine runs at speeds of 0.4 m/s, 0.8 m/s and 1.2 m/s, the maximum lateral deviations are 6.5 cm, 13.7 cm and 19.2 cm respectively, the average deviations are 0.49 cm, 0.87 cm and 1.40 cm, and the standard deviations are 3.05 cm, 5.98 cm and 7.06 cm; the positioning accuracy is higher than that achieved by relying on Beidou navigation alone and meets the positioning accuracy requirement of assisted navigation in the simulation system for the electric power operation environment.
S2, performing image analysis on the positioned power operation moving target to obtain its characteristic information and construct its three-dimensional model.
Live-action three-dimensional model scanning, point cloud fusion and refined modeling technologies are adopted to model the various maintenance vehicles and operators, restoring their real structures and textures. The maintenance vehicle models are named by model code and, together with the operator models, form a refined dynamic-target model library.
For the maintenance vehicle, the model code is identified through an image identification technology and is used as characteristic information.
Specifically, the steps are as follows:
A. Vehicle information is acquired from real-time video stream images of the surveillance video in a dynamic acquisition mode.
B. The collected maintenance-vehicle model code images are preprocessed by noise filtering, contrast enhancement, image scaling and the like.
C. The plate is detected with projection analysis, connected-domain analysis, machine learning and similar algorithms, according to information such as the texture, color and shape features of the plate. Projection analysis exploits the fact that plate characters and background alternate more often than in other parts of the image: the model code of the maintenance vehicle is located by projecting the image in the horizontal and vertical directions (see the sketch after these steps).
Connected-domain analysis locates the model code of the maintenance vehicle by detecting and merging connected domains, using the fact that each character of the code is a connected domain and that the domains are consistent in structure and color.
D. After the model code region of the maintenance vehicle is extracted, it is segmented with one character as the unit.
E. The gray-scale image of each segmented character is normalized and its features are extracted; the features are then classified by machine learning or matched against a character database template, and the result with the highest matching degree is selected as the recognition result.
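A sketch of the horizontal-projection step from C: after binarization, character rows show many foreground/background transitions while plain background rows show few, so the text band can be located by thresholding the per-row transition counts; the threshold and toy image are illustrative.

```python
import numpy as np

def locate_code_band(binary_img, min_transitions=8):
    """Locate the candidate text band by horizontal projection analysis.
    binary_img: 2-D array of 0/1 pixels. Characters produce many 0/1
    transitions per row; background rows produce few."""
    transitions = np.abs(np.diff(binary_img, axis=1)).sum(axis=1)
    rows = np.where(transitions >= min_transitions)[0]
    if rows.size == 0:
        return None                      # no text-like band found
    return int(rows.min()), int(rows.max())

# toy image: vertical character strokes between rows 10 and 14
img = np.zeros((32, 64), dtype=int)
img[10:15, ::4] = 1
print(locate_code_band(img))             # -> (10, 14)
```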
For operators, apparent features and geometric features with individual representativeness are recognized as characteristic information by image recognition technology.
Specifically, the steps are as follows:
A. A face image database is established.
B. Facial image features in the face image database are extracted according to the facial feature information.
S3, refining the three-dimensional model of the power operation moving target according to the characteristic information, obtaining the high-precision three-dimensional model and updating its positioning in the three-dimensional scene.
The refined dynamic-target model library comprises the various maintenance vehicle models and operator models. For a maintenance vehicle, the model identified by the recognized model code is extracted from the library; for an operator, the model is extracted through the recognized person's name and adjusted according to the height in the operator's information.
A UWB or Beidou positioning tag is fitted on the dynamic target, and on the basis of the moving-target positioning a series of algorithmic operations including image acquisition, preprocessing, model code location, character segmentation, character recognition and result output is applied, realizing fine reconstruction of the moving target in the three-dimensional scene. Real-time position information of the dynamic target is thereby obtained, the position state of the dynamic-target model is updated in the three-dimensional scene in real time, and verification experiments prove that the error between the reconstructed dynamic model and the actual size is less than 1%.
Example 2
The present embodiment provides a high-precision spatial positioning system for an electric power work site based on embodiment 1, including:
the system comprises a UWB auxiliary video positioning module, a UWB auxiliary video positioning module and a video processing module, wherein the UWB auxiliary video positioning module is used for acquiring the positioning of a moving target of indoor power operation, and comprises an initialization unit for initializing an environmental image and IMU data, a vision/inertia combination unit for acquiring a vision coordinate and an ultra-wideband unit for acquiring the UWB coordinate;
the Beidou auxiliary visual positioning module is used for acquiring the positioning of an outdoor electric power operation moving target, and comprises a dynamic region of interest module used for reducing the extraction of environmental factors to an operation boundary line, a multi-view visual module used for constructing multi-view visual region coordinates and a Beidou positioning module used for converting the multi-view visual region coordinates into global positioning system coordinates.
The above-mentioned embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above-mentioned embodiments are only examples of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A high-precision space positioning method for an electric power operation site, characterized by comprising the following steps:
S1, acquiring the positioning of a power operation moving target based on a visual positioning technology, wherein a UWB-assisted visual positioning technology is adopted for indoor power operation moving targets and a Beidou-assisted visual positioning technology for outdoor ones, the UWB-assisted visual positioning technology comprising UWB-assisted passive video positioning;
S2, performing image analysis on the power operation moving target according to its positioning to obtain characteristic information of the target and construct a three-dimensional model of it;
S3, refining the three-dimensional model of the power operation moving target according to the characteristic information to obtain a high-precision three-dimensional model and updating its positioning in the three-dimensional scene in real time.
2. The high-precision space positioning method for the electric power operation site according to claim 1, wherein the UWB-assisted passive video positioning method comprises:
positioning the power operation moving target through a binocular vision system formed by two cameras to obtain position information of the power operation moving target in space;
positioning the power operation moving target based on the UWB model to obtain UWB positioning information of the power operation moving target;
and integrating the position information of the power operation moving target in the space and the UWB positioning information of the power operation moving target to obtain the positioning of the power operation moving target.
3. The high-precision space positioning method for the electric power operation site according to claim 2, wherein the method of integrating the position information of the power operation moving target in space and the UWB positioning information of the power operation moving target is as follows:
taking the position and heading measured by the binocular vision system and the position measured by the UWB model as observed values, and establishing a UWB/vision fusion observation equation; the equation (shown here in its general stacked form, the original appearing only as an equation image) is
[p_v; p_u] = H(θ)·x + [e_v; e_u]
where p_v denotes the plane coordinates of the binocular vision system's visual measurement, p_u the plane coordinates of the UWB measurement, e_v the position measurement error of the binocular vision system, e_u the error in the UWB position measurement, θ the deflection angle between the plane coordinates measured by the binocular vision system and those measured by UWB, and H the intermediate matrix.
4. The high-precision space positioning method for the electric power operation site according to claim 3, wherein the UWB model is constructed by:
establishing an improved robust EKF model, and adopting the improved robust EKF model as the standard model of the UWB model;
judging by a statistical method whether gross errors exist in the improved robust EKF model;
if gross errors exist, invoking the robust EKF model as the improved robust EKF model.
5. The high-precision space positioning method for the electric power operation site according to claim 1, wherein the positioning method of the Beidou-assisted visual positioning technology comprises the following steps:
acquiring a region of interest based on a Kalman filtering model and the predicted position of the operation boundary line;
carrying out image edge detection on the region of interest to obtain a positioning region;
constructing a multi-view vision measurement model through a multi-view vision sensor, and constructing a multi-view vision coordinate in a positioning area through the multi-view vision measurement model;
optimizing the multi-view visual coordinate constructed in the positioning area through the multi-view visual measurement model based on a weighted LM algorithm to obtain an optimized multi-view visual coordinate;
and based on the Beidou positioning technology, the optimized multi-view visual coordinate is converted into the coordinate of a global positioning system.
6. The high-precision space positioning method for the electric power operation site according to claim 5, wherein, based on the Kalman filtering model, the method for dynamically acquiring the region of interest through the multiple vision sensors is as follows:
predicting the current straight line position based on Kalman filtering to obtain a first dynamic region of interest;
and predicting a column number of the operation boundary line in the image coordinate based on a projection method, and obtaining a second dynamic region of interest by taking the column number as a reference and combining the current camera calling condition.
7. The high-precision space positioning method for the electric power operation site according to claim 5, wherein, based on the weighted LM algorithm, the method for optimizing the multi-view vision coordinates constructed in the positioning area through the multi-view vision measurement model is as follows:
carrying out nonlinear optimization on image coordinates obtained after the object points are transformed to the ith multi-view vision sensor by adopting the minimized reprojection error;
normalizing the distance from the multi-view vision sensor to the object, and converting the reciprocal of the distance after normalization into a weighting factor;
and constructing an objective function through the weighting factors, and substituting the multi-view visual coordinates into the objective function to calculate to obtain optimized multi-view visual coordinates.
8. A high-precision space positioning system for an electric power operation site, characterized by comprising:
a UWB-assisted video positioning module for acquiring the positioning of an indoor power operation moving target, comprising an initialization unit for initializing environmental images and IMU data, a vision/inertia combination unit for acquiring vision coordinates, and an ultra-wideband unit for acquiring UWB coordinates; and
a Beidou-assisted visual positioning module for acquiring the positioning of an outdoor power operation moving target, comprising a dynamic region-of-interest module for reducing the influence of environmental factors on the extraction of the operation boundary line, a multi-view vision module for constructing multi-view vision region coordinates, and a Beidou positioning module for converting the multi-view vision region coordinates into global positioning system coordinates.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the high-precision space positioning method for the electric power operation site according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the high-precision space positioning method for the electric power operation site according to any one of claims 1 to 7.
CN202211524412.6A 2022-12-01 2022-12-01 High-precision space positioning method, system, equipment and medium for electric power operation site Pending CN115542362A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211524412.6A CN115542362A (en) 2022-12-01 2022-12-01 High-precision space positioning method, system, equipment and medium for electric power operation site

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211524412.6A CN115542362A (en) 2022-12-01 2022-12-01 High-precision space positioning method, system, equipment and medium for electric power operation site

Publications (1)

Publication Number Publication Date
CN115542362A true CN115542362A (en) 2022-12-30

Family

ID=84722565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211524412.6A Pending CN115542362A (en) 2022-12-01 2022-12-01 High-precision space positioning method, system, equipment and medium for electric power operation site

Country Status (1)

Country Link
CN (1) CN115542362A (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077539A * 2013-01-23 2013-05-01 上海交通大学 Moving object tracking method under complex background and occlusion conditions
CN104715252A * 2015-03-12 2015-06-17 电子科技大学 License plate character segmentation method combining dynamic templates and pixel points
CN107967473A * 2016-10-20 2018-04-27 南京万云信息技术有限公司 Autonomous robot localization and navigation based on image-text recognition and semantics
CN106647784A * 2016-11-15 2017-05-10 天津大学 Miniaturized unmanned aerial vehicle positioning and navigation method based on the Beidou navigation system
CN107133563A * 2017-03-17 2017-09-05 深圳市能信安科技股份有限公司 Video analysis system and method for the policing field
CN109002744A * 2017-06-06 2018-12-14 中兴通讯股份有限公司 Image recognition method, device and video monitoring equipment
CN108012325A * 2017-10-30 2018-05-08 上海神添实业有限公司 Navigation and positioning method based on UWB and binocular vision
CN108549771A * 2018-04-13 2018-09-18 山东天星北斗信息科技有限公司 Excavator-assisted construction system and method
CN109489629A * 2018-12-07 2019-03-19 国网四川省电力公司电力科学研究院 Safety monitoring method for power transmission line towers
CN111476233A * 2020-03-12 2020-07-31 广州杰赛科技股份有限公司 License plate number positioning method and device
CN111401364A * 2020-03-18 2020-07-10 深圳市市政设计研究院有限公司 License plate positioning algorithm combining color features and template matching
CN111508006A * 2020-04-23 2020-08-07 南开大学 Synchronous moving-target detection, recognition and tracking method based on deep learning
CN112101343A * 2020-08-17 2020-12-18 广东工业大学 License plate character segmentation and recognition method
CN112465401A * 2020-12-17 2021-03-09 国网四川省电力公司电力科学研究院 Electric power operation safety control system based on multi-dimensional information fusion and control method thereof
CN112560745A * 2020-12-23 2021-03-26 南方电网电力科技股份有限公司 Method for discriminating personnel on an electric power operation site and related device
CN113392839A * 2021-05-18 2021-09-14 浙江大华技术股份有限公司 Method and device for recognizing non-motor-vehicle license plates, computer equipment and storage medium
CN114092875A * 2021-11-01 2022-02-25 南方电网深圳数字电网研究院有限公司 Operation site safety supervision method and device based on machine learning
CN114723824A * 2022-04-01 2022-07-08 浙江工业大学 Indoor single positioning method based on binocular camera and ultra-wideband fusion
CN115326053A * 2022-08-18 2022-11-11 华南理工大学 Mobile robot multi-sensor fusion positioning method based on double-layer vision
CN115412846A * 2022-08-31 2022-11-29 常熟理工学院 Underground multi-scene identity detection and positioning method, system and storage medium

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
刘德辉: "Research on UWB/GNSS indoor and outdoor positioning algorithms assisted by binocular visual odometry", China Master's Theses Full-text Database, Information Science and Technology *
刘飞: "Research on high-precision seamless positioning models and methods based on multi-sensor fusion" *
周爱国 et al.: "Multi-view vision localization algorithm for homologous object points based on weighted Levenberg-Marquardt", Laser & Optoelectronics Progress *
张明军 et al.: "Design of a license plate recognition system based on machine learning" *
李鹏 et al.: "Multi-source indoor positioning with improved adaptive robust cubature Kalman filtering", Navigation Positioning and Timing *
杨博: "Research on key technologies of a vision/inertial/ultra-wideband integrated positioning system" *
王璇: "Vision-based detection and tracking of power system transmission lines", China Master's Theses Full-text Database, Engineering Science and Technology II *
申炳琦 et al.: "Combined UWB/VIO indoor positioning algorithm for mobile robots", Journal of Computer Applications *
郑顾平 et al.: "Applied research on multi-license-plate recognition algorithms based on machine learning" *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118102233A (en) * 2024-04-22 2024-05-28 南方电网调峰调频发电有限公司 Object positioning method and device for multiple scenes of pumped storage foundation engineering

Similar Documents

Publication Publication Date Title
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
Dhiman et al. Pothole detection using computer vision and learning
CN110531759B (en) Robot exploration path generation method and device, computer equipment and storage medium
CN111429574B (en) Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion
Zhao et al. Detection, tracking, and geolocation of moving vehicle from uav using monocular camera
CN108898676B (en) Method and system for detecting collision and shielding between virtual and real objects
WO2021021862A1 (en) Mapping and localization system for autonomous vehicles
CN114140761A (en) Point cloud registration method and device, computer equipment and storage medium
CN111998862A (en) Dense binocular SLAM method based on BNN
CN114463932B (en) Non-contact construction safety distance active dynamic identification early warning system and method
CN115542362A (en) High-precision space positioning method, system, equipment and medium for electric power operation site
Yu et al. Accurate and robust visual localization system in large-scale appearance-changing environments
Wang et al. 3D-LIDAR based branch estimation and intersection location for autonomous vehicles
CN114721008A (en) Obstacle detection method and device, computer equipment and storage medium
Kampker et al. Concept study for vehicle self-localization using neural networks for detection of pole-like landmarks
CN114549549A (en) Dynamic target modeling tracking method based on instance segmentation in dynamic environment
CN114049362A (en) Transform-based point cloud instance segmentation method
Sun et al. Real-time and fast RGB-D based people detection and tracking for service robots
Suzuki et al. SLAM using ICP and graph optimization considering physical properties of environment
CN116188417A (en) Slit detection and three-dimensional positioning method based on SLAM and image processing
CN115497086A (en) 3D scene flow estimation method based on fine-grained identification in automatic driving
CN115761265A (en) Method and device for extracting substation equipment in laser radar point cloud
Kim et al. 3D pose estimation and localization of construction equipment from single camera images by virtual model integration
CN112146647B (en) Binocular vision positioning method and chip for ground texture
Zhang et al. Vision-based UAV positioning method assisted by relative attitude classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20221230