CN116452776B - Low-carbon substation scene reconstruction method based on vision synchronous positioning and mapping system


Info

Publication number
CN116452776B
CN116452776B (application CN202310722014.3A)
Authority
CN
China
Prior art keywords
group
camera
scene
points
key frame
Prior art date
Legal status
Active
Application number
CN202310722014.3A
Other languages
Chinese (zh)
Other versions
CN116452776A (en)
Inventor
袁慧宏
翁时乐
陈梁金
陈家乾
刘俊
孙琦
席俞佳
周丽华
张峰良
周逸淳
周禹航
赵志修
齐蓓
胡文博
叶承晋
Current Assignee
HUZHOU ELECTRIC POWER DESIGN INSTITUTE CO LTD
Wuhan Energy Efficiency Evaluation Co Ltd Of State Grid Electric Power Research Institute
Huzhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
HUZHOU ELECTRIC POWER DESIGN INSTITUTE CO LTD
Wuhan Energy Efficiency Evaluation Co Ltd Of State Grid Electric Power Research Institute
Huzhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by HUZHOU ELECTRIC POWER DESIGN INSTITUTE CO LTD, Wuhan Energy Efficiency Evaluation Co Ltd Of State Grid Electric Power Research Institute, and Huzhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority claimed from application CN202310722014.3A
Publication of CN116452776A
Application granted
Publication of CN116452776B


Classifications

    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G06Q 50/06: Energy or water supply
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06V 10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/761: Proximity, similarity or dissimilarity measures
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06V 20/20: Scenes; scene-specific elements in augmented reality scenes
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • Y04S 10/50: Systems or methods supporting power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The embodiment of the invention provides a low-carbon substation scene reconstruction method based on a visual synchronous positioning and mapping system, which comprises the following steps: acquiring scene information of a target scene and determining a layout scheme for a camera group; receiving initial images, extracting and deleting their background patterns, and extracting a key frame image group; constructing a multi-camera-system spatial perception model based on the coordinates of the camera group and of the key frame image group, determining spatial feature point coordinates with the binocular cameras in the camera group, and estimating depth space data of the key frame image group to determine a depth map group; extracting feature points from the depth map group, iteratively matching the feature points until they reach their nearest points to form the corresponding feature tracks, and reconstructing curved surfaces by a moving least squares method to generate a three-dimensional scene corresponding to the target scene. With this method, scene reconstruction during substation construction can be restored synchronously by exploiting panoramic vision and stereoscopic perception, and the accuracy of the restoration result is improved.

Description

Low-carbon substation scene reconstruction method based on vision synchronous positioning and mapping system
Technical Field
The invention relates to the technical field of virtual reality restoration, in particular to a low-carbon substation scene reconstruction method based on a visual synchronous positioning and mapping system.
Background
With the rapid development of computer vision technology in the field of virtual reality, vision-based synchronous positioning and mapping technology plays an increasingly important role in scene restoration during the construction stage of a transformer substation; in particular, building a digital twin system of the construction process requires technical support for synchronous mapping and drawing.
However, the conventional monocular-camera-based synchronous positioning and mapping method is only suitable for simple indoor environments or urban environments with obvious structural characteristics. In a complex construction environment, direct sunlight, occlusion by foreground objects, rough roads, sensor faults and the sparsity of stably trackable textures complicate the conditions the system encounters, so the restoration result is not accurate enough. A scene reconstruction scheme is therefore needed that accurately and synchronously restores the scene during substation construction.
Disclosure of Invention
Aiming at the problems existing in the prior art, the embodiment of the invention provides a low-carbon substation scene reconstruction method based on a visual synchronous positioning and mapping system.
The embodiment of the invention provides a low-carbon substation scene reconstruction method based on a visual synchronous positioning and mapping system, which comprises the following steps:
acquiring scene information of a target scene, determining layout constraint conditions of camera groups based on the scene information, and determining a layout scheme of the corresponding camera groups by combining a genetic algorithm;
receiving an initial image shot by the camera group, extracting background patterns in the initial image, deleting the background patterns, acquiring image frames of the deleted initial image, and extracting a key frame image group by combining a preset frame number extraction rule;
correspondingly constructing a multi-camera system space perception model based on the coordinates of the camera group and the coordinates of the key frame image group, determining space feature point coordinates based on a line intersection principle by using binocular cameras in the camera group, and estimating depth space data of the key frame image group by combining the multi-camera system space perception model and the space feature point coordinates to obtain a corresponding depth map group;
extracting feature points from the depth map set based on an equal proportion feature transformation method, performing iterative matching on the feature points through an approximate nearest neighbor algorithm until the feature points reach the nearest point, and forming feature tracks corresponding to the feature points based on an iterative matching process;
and carrying out curved surface reconstruction by a moving least squares method based on the feature points and the corresponding feature tracks, and generating a three-dimensional scene corresponding to the target scene.
In one embodiment, the method further comprises:
acquiring background physical information and background constraint conditions of a camera group in layout, and determining preliminary decision parameters of the camera group in layout based on the background physical information and the background constraint conditions;
and calculating the total coverage rate of the corresponding camera group based on the preliminary decision parameters and the scene weight of the target scene by combining a genetic algorithm, and determining a corresponding layout scheme based on the total coverage rate.
In one embodiment, the method further comprises:
extracting a vegetation region and a sky region in the initial image, calculating a corresponding green view index based on the vegetation region, and calculating a corresponding sky view index based on the sky region;
and roughly extracting the initial image according to the pixel quantity corresponding to the green view index and the sky view index.
In one embodiment, the frame number extraction rule includes:
the frame-number extraction interval between key frames extracted from the image frames is larger than a preset threshold;
the mapping thread is idle when the key frame is extracted, and the interval from the key frame to the last key frame exceeds a preset frame number;
the number of points matching the RGB-D image points of the extracted key frame is smaller than that of the previous key frame;
the number of map points of the extracted key frame that are not in the common view with the image frames is larger than a preset number;
the distance between the position of the extracted key frame and that of the last key frame is greater than the baseline length.
In one embodiment, the method further comprises:
based on the multi-camera system space perception model, determining the corresponding relation between three-dimensional points in space and two-dimensional points in a pixel coordinate system, and determining the corresponding relation between the characteristic points in the target scene and the three-dimensional scene by combining the space characteristic point coordinates;
and regressing the corresponding dense depth map by taking the corresponding key frame image group as a reference image according to the corresponding relation between the characteristic points in the target scene and the three-dimensional scene, and determining the corresponding depth map group through the dense depth map.
In one embodiment, the method further comprises:
preprocessing the key frame image group, wherein the preprocessing comprises the following steps: affine regularization of images and image denoising.
In one embodiment, the method further comprises:
and calculating the distance ratio between the feature points corresponding to the last matching in each matching, and eliminating the corresponding feature points when the distance ratio is smaller than a preset ratio.
The embodiment of the invention provides a low-carbon substation scene reconstruction system based on a visual synchronous positioning and mapping system, which comprises the following steps:
the layout module is used for acquiring scene information of a target scene, determining layout constraint conditions of the camera group based on the scene information, and determining a layout scheme of the corresponding camera group by combining a genetic algorithm;
the extraction module is used for receiving the initial image shot by the camera group, extracting the background pattern in the initial image, deleting the background pattern, acquiring the image frame of the deleted initial image, and extracting a key frame image group by combining with a preset frame number extraction rule;
the depth image group module is used for correspondingly constructing a multi-camera system space perception model based on the coordinates of the camera group and the coordinates of the key frame image group, determining space feature point coordinates based on a line intersection principle through binocular cameras in the camera group, and estimating depth space data of the key frame image group by combining the multi-camera system space perception model and the space feature point coordinates to obtain a corresponding depth image group;
the iterative matching module is used for extracting the characteristic points of the depth map group based on an equal proportion characteristic transformation method, carrying out iterative matching on the characteristic points through an approximate nearest neighbor algorithm until the characteristic points reach the nearest point, and forming a characteristic track corresponding to the characteristic points based on an iterative matching process;
and the curved surface reconstruction module is used for reconstructing the curved surface by a moving least squares method based on the feature points and the corresponding feature tracks to generate a three-dimensional scene corresponding to the target scene.
The embodiment of the invention provides electronic equipment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the steps of the low-carbon substation scene reconstruction method based on the visual synchronous positioning and mapping system when executing the program.
The embodiment of the invention provides a non-transitory computer readable storage medium, on which a computer program is stored, which when being executed by a processor, realizes the steps of the low-carbon substation scene reconstruction method based on the visual synchronous positioning and mapping system.
The embodiment of the invention provides a low-carbon substation scene reconstruction method and system based on a visual synchronous positioning and mapping system, which acquire scene information of a target scene, determine layout constraint conditions of the camera group based on the scene information, and determine the layout scheme of the corresponding camera group in combination with a genetic algorithm; receive the initial images shot by the camera group, extract the background patterns in the initial images, delete the background patterns, acquire the image frames of the processed initial images, and extract a key frame image group in combination with a preset frame-number extraction rule; construct a multi-camera-system spatial perception model based on the coordinates of the camera group and the coordinates of the key frame image group, determine spatial feature point coordinates based on the line-line intersection principle through the binocular cameras in the camera group, and estimate depth space data of the key frame image group by combining the spatial perception model and the spatial feature point coordinates to obtain the corresponding depth map group; extract feature points from the depth map group based on an equal-proportion feature transformation method, iteratively match the feature points through an approximate nearest neighbor algorithm until they reach their nearest points, form the feature tracks corresponding to the feature points based on the iterative matching process, and perform curved surface reconstruction by a moving least squares method based on the feature points and the corresponding feature tracks to generate a three-dimensional scene corresponding to the target scene. Therefore, synchronous restoration of scene reconstruction in the substation construction process can be realized by utilizing the characteristics of panoramic vision and stereoscopic perception, and the accuracy of the restoration result is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a low-carbon substation scene reconstruction method based on a visual synchronous positioning and mapping system in an embodiment of the invention;
FIG. 2 is a flow chart of a low-carbon substation scene reconstruction method based on a visual synchronous positioning and mapping system according to another embodiment of the invention;
fig. 3 is a block diagram of a low-carbon substation scene reconstruction system based on a visual synchronous positioning and mapping system in an embodiment of the invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a schematic flow chart of a low-carbon substation scene reconstruction method based on a visual synchronous positioning and mapping system provided by an embodiment of the present invention, as shown in fig. 1, the embodiment of the present invention provides a low-carbon substation scene reconstruction method based on a visual synchronous positioning and mapping system, including:
step S101, obtaining scene information of a target scene, determining layout constraint conditions of camera groups based on the scene information, and determining a layout scheme of the corresponding camera groups by combining a genetic algorithm.
Specifically, scene information of the target scene is acquired. The scene information can comprise physical information of the site (such as factors influencing camera placement orientation) and constraint information (such as factors affecting the camera placement surface). A preliminary placement range for the camera group is determined from these factors, and a genetic algorithm then determines at which placement positions within that range the camera group achieves the largest coverage, thereby determining the layout scheme of the camera group. The detailed steps may include:
1. Determining factors affecting camera placement orientation
Consider the height of the placement surface, i.e. the surface (e.g., ceiling) on which the camera is placed, and the height of the monitored area (i.e., the upper limit of average human height). The important areas that need to be covered are divided into:
all entrances and exits of the substation;
common areas (such as corridors, particularly regions where multiple corridors intersect); elevator, stair and escalator positions;
private areas such as command centers and monitoring rooms.
2. Determining constraints on the camera placement surface
Corresponding data are acquired from the BIM; geometric and non-geometric attributes of the target area (e.g., type, length and location) can be used to automatically specify constraints on the surface on which the camera is placed:
geometric constraints of the placement surface (e.g., fixtures attached to the ceiling);
operational constraints near the camera position (e.g., vibration generated by substation components);
logical constraints (e.g., a camera facing a wall or reflective surface);
legal constraints (private areas, etc.).
3. Determining decision variables affecting camera placement
The position (i.e., a point on the grid system) and orientation of each camera are identified by the position variables (X, Y and Z coordinates) and the orientation variables (pan and tilt angles).
Each camera to be placed is passed to a genetic algorithm that generates candidate solutions (i.e., camera positions and orientations); the camera is placed on the BIM model mesh according to the solution's X, Y and Z values.
Each camera pose thus has five variables, but some of them are fixed depending on the surface on which the camera is placed.
4. Optimizing the camera layout using a genetic algorithm in integrated simulation
(1) Acquire data from the BIM;
(2) Adjust camera characteristics, i.e., field of view, camera height and crop planes (far and near);
(3) Automatically place cameras according to the data of each solution in the population;
(4) Generate units on the target surface (i.e., the floor);
(5) Compute the coverage indices and send them to the genetic fitness function.

The number of weighted covered units in region $a_i$ is calculated from all covered units $c_v$ in that region:

$$WC_{a_i} = w_{a_i}\sum_{v=1}^{n} c_v$$

The total coverage of the camera group is the sum of the weighted covered units over all regions. This output is sent to the genetic algorithm to evaluate the first objective function, which maximizes camera coverage:

$$\max f_1 = \sum_{i=1}^{m} WC_{a_i} = \sum_{i=1}^{m} w_{a_i}\sum_{v=1}^{n} c_v,\qquad \text{s.t. } 0 \le X_x \le L,\; 0 \le Y_x \le W,\; Z_{\min} \le Z_x \le Z_{\max},\; x = 1,\dots,q$$

where $w_{a_i}$ is the importance weight of region $a_i$; $u_j$ denotes cell $j$, $j = 1, 2, \dots, n$; $I_i$ is the importance value assigned to all cells in region $i$, $i = 1, 2, \dots, m$; $c_v$ indicates coverage of unit $v$ in region $a_i$, $v = 1, 2, \dots, n$; $x$ indexes the cameras, $x = 1, \dots, q$; $WC_{a_i}$ is the weighted covered-unit count of region $a_i$; $L$ is the maximum length of the camera placement surface; $Z_{\min}$ and $Z_{\max}$ are the minimum and maximum camera heights; and $W$ is the maximum width of the placement surface.
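As a concrete illustration of this coverage objective, the following minimal sketch evolves camera (X, Y, Z) positions over a unit grid with a genetic algorithm. The grid size, the circular-footprint FOV model (used in place of the patent's BIM visibility test), the weights and the GA hyperparameters are all assumptions for illustration, not the patent's implementation:

```python
# Minimal sketch of GA-based camera placement; a circular footprint stands in
# for BIM visibility, and all parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

L_MAX, W_MAX = 40.0, 30.0            # placement-surface length and width (assumed)
Z_MIN, Z_MAX = 2.5, 4.0              # camera height bounds (assumed)
N_CAMERAS, FOV_RADIUS = 4, 12.0      # simplistic circular-footprint FOV model

LO = np.array([0.0, 0.0, Z_MIN] * N_CAMERAS)   # genome bounds: (X, Y, Z) per camera
HI = np.array([L_MAX, W_MAX, Z_MAX] * N_CAMERAS)

# Unit grid over the target surface with per-cell importance weights w_ai.
xs, ys = np.meshgrid(np.arange(L_MAX), np.arange(W_MAX), indexing="ij")
cells = np.stack([xs.ravel(), ys.ravel()], axis=1)
weights = rng.uniform(0.5, 1.5, len(cells))    # stand-in for region importance

def fitness(genome: np.ndarray) -> float:
    """Weighted coverage: sum of w_ai over cells seen by at least one camera."""
    cams = genome.reshape(N_CAMERAS, 3)[:, :2]             # X, Y; Z unused here
    d = np.linalg.norm(cells[:, None, :] - cams[None, :, :], axis=2)
    return float(weights[(d < FOV_RADIUS).any(axis=1)].sum())

# Plain generational GA: tournament selection, blend crossover, Gaussian mutation.
pop = [rng.uniform(LO, HI) for _ in range(40)]
for _ in range(60):
    scores = np.array([fitness(g) for g in pop])
    def pick():
        i, j = rng.integers(0, len(pop), 2)
        return pop[i] if scores[i] >= scores[j] else pop[j]
    pop = [np.clip(0.5 * (pick() + pick()) + rng.normal(0, 0.5, LO.shape), LO, HI)
           for _ in range(len(pop))]

best = max(pop, key=fitness)
print("best weighted coverage:", fitness(best))
```

A production version would replace the circular-footprint test with BIM ray-casting and add the pan and tilt orientation variables to the genome.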
Step S102, receiving an initial image shot by the camera group, extracting background patterns in the initial image, deleting the background patterns, acquiring image frames of the deleted initial image, and extracting a key frame image group by combining a preset frame number extraction rule.
Specifically, the initial image shot by the camera group is received, the background patterns other than the target pattern are extracted from the initial image, and the background patterns are deleted. The background patterns can comprise vegetation, sky, meaningless clutter and the like in the background; the detailed deletion step may include:
the ratio of the green area to the total area in the depth image is calculated and defined as green view index GVI, and the calculation method is as follows:
wherein, the liquid crystal display device comprises a liquid crystal display device,for the number of green pixels in the view image, +.>For the number of all pixels, then a rough extraction of green pixels is performed,
for sky regions, sky regions in street view images are extracted using sky open index SOI, which is defined as the scale of viewing cone sky from certain observation points, as follows:
wherein, the liquid crystal display device comprises a liquid crystal display device,r is the number of regions classified as sky in the segmented image i In the ith sky areaN is the total number of pixels in the image.
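Both indices reduce to pixel counting over a segmentation mask. The sketch below assumes hypothetical class ids (GREEN, SKY) produced by some semantic-segmentation step, which is not detailed in the patent:

```python
# Hedged sketch: GVI and SOI as pixel ratios over a semantic-segmentation
# label map; the class ids and the segmentation step itself are assumptions.
import numpy as np
from scipy import ndimage

GREEN, SKY = 1, 2  # hypothetical label ids from a segmentation network

def green_view_index(labels: np.ndarray) -> float:
    """GVI = green pixels / all pixels."""
    return float((labels == GREEN).sum()) / labels.size

def sky_open_index(labels: np.ndarray) -> float:
    """SOI = sum of per-region sky pixel counts s_i over total pixels N."""
    sky = labels == SKY
    regions, r = ndimage.label(sky)                       # r sky regions
    s = ndimage.sum(sky, regions, index=range(1, r + 1))  # pixels per region
    return float(np.sum(s)) / labels.size

labels = np.random.default_rng(1).integers(0, 3, size=(480, 640))
print(green_view_index(labels), sky_open_index(labels))
```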
Deleting the background pattern yields the image frames of the processed initial image, and a key frame image group is extracted in combination with a preset frame-number extraction rule. The preset rule can be a frame-number extraction frequency, with the matching points in the key frame images used as references; for example, the rule may comprise the following conditions (a decision sketch follows the list):
the frame-number extraction interval between key frames extracted from the image frames is larger than a preset threshold;
the mapping thread is idle when the key frame is extracted, and the interval from the key frame to the last key frame exceeds a preset frame number;
the number of points matching the RGB-D image points of the extracted key frame is smaller than that of the previous key frame;
the number of map points of the extracted key frame that are not in the common view with the image frames is larger than a preset number;
the distance between the position of the extracted key frame and that of the last key frame is greater than the baseline length.
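Combined, the rules amount to a boolean gate per frame. The following sketch is illustrative only; the Frame fields and every threshold are hypothetical stand-ins for the SLAM front end's state:

```python
# Illustrative keyframe gate combining the five rules above; all names and
# thresholds are assumptions, not the patent's exact values.
from dataclasses import dataclass

@dataclass
class Frame:
    index: int                     # frame number in the sequence
    matched_rgbd_points: int       # matches against RGB-D image points
    new_map_points: int            # map points not yet in the common view
    distance_to_last_kf: float     # distance to the previous keyframe position

def is_keyframe(f: Frame, last_kf: Frame, *, mapping_idle: bool,
                min_gap: int = 20, min_new_points: int = 50,
                baseline: float = 0.08) -> bool:
    return (mapping_idle
            and f.index - last_kf.index > min_gap
            and f.matched_rgbd_points < last_kf.matched_rgbd_points
            and f.new_map_points > min_new_points
            and f.distance_to_last_kf > baseline)
```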
In addition, after the key frame image group is extracted, the data can be grouped: photographs from other sources are used as query photographs, and the key frame image group is used as the reference data set, so that a grouped selection of images is performed, improving accuracy in the subsequent steps.
Step S103, correspondingly constructing a multi-camera system space perception model based on the coordinates of the camera group and the coordinates of the key frame image group, determining space feature point coordinates based on a line intersection principle through the binocular cameras in the camera group, and estimating depth space data of the key frame image group by combining the multi-camera system space perception model and the space feature point coordinates to obtain a corresponding depth map group.
Specifically, based on the coordinates of the camera group and the coordinates of the key frame image group, three steps are performed to determine the corresponding depth map group: constructing the spatial perception model of the multi-camera system, determining the spatial feature point coordinates, and estimating the depth space data. In detail:
1. construction of a spatial perception model of a multiple camera system
In a camera group system, the world coordinate system, camera coordinate system, imaging coordinate system and pixel coordinate system need to be associated. Based on the camera-imaging parameters of Table 1, the spatial perception model of the multi-camera system is derived; the model gives the relationship between a three-dimensional point $P_i$ in space and a two-dimensional point $m_i$ in the pixel coordinate system:

$$s\,m_i = K\,[R \mid t]\,P_i$$

where $K$ is the pre-calibrated intrinsic matrix, and the transform matrix $T$ (comprising the rotation matrix and the translation vector) is:

$$T = \begin{bmatrix} R & t \\ \mathbf{0}^{\mathsf T} & 1 \end{bmatrix},\qquad t = [t_x,\; t_y,\; t_z]^{\mathsf T}$$

where $t_x$, $t_y$, $t_z$ are the translations along the x, y and z axes of the body coordinate system and $R$ is the rotation matrix composed of the rotation angles about the coordinate axes.
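In code, the model is the standard pinhole projection; the sketch below assumes a pre-calibrated K and known extrinsics R, t for one camera of the group:

```python
# The pinhole relation s * m_i = K [R | t] P_i for one camera of the group,
# assuming a pre-calibrated intrinsic matrix K and known extrinsics R, t.
import numpy as np

def project(P_w: np.ndarray, K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map 3-D world points (N, 3) to pixel coordinates (N, 2)."""
    P_c = P_w @ R.T + t              # world frame -> camera frame
    m = P_c @ K.T                    # camera frame -> homogeneous pixel coords
    return m[:, :2] / m[:, 2:3]      # divide by the depth s = Z_c

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
print(project(np.array([[0.1, -0.2, 5.0]]), K, np.eye(3), np.zeros(3)))
```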
2. Binocular camera pose estimation using line-line intersection
The binocular camera can reconstruct spatial points using line-line intersection. For a three-dimensional point $[X_c, Y_c, Z_c]$ in the camera-group coordinate system: relative to the left camera, the rotation matrix is the identity matrix and the translation vector is zero; relative to the right camera, the rotation matrix and translation vector are the $R$ and $t$ from the left camera to the right camera in the binocular pair.
The spatial feature points obtained from the collinearity equations are the spatial feature point coordinates in the RGB-D camera-group coordinate system:

$$u = \frac{r_0 X_c + r_1 Y_c + r_2 Z_c + t_x}{r_6 X_c + r_7 Y_c + r_8 Z_c + t_z},\qquad v = \frac{r_3 X_c + r_4 Y_c + r_5 Z_c + t_y}{r_6 X_c + r_7 Y_c + r_8 Z_c + t_z}$$

where $r_0, \dots, r_8$ are the entries of the rotation matrix. For the left camera only $r_0$, $r_4$ and $r_8$ are non-zero and the translations are zero, which yields one pair of equations; substituting the binocular $R$ and $t$ into the collinearity equations yields the other pair of equations for the right camera.
From these four equations a matrix is constructed in the form Ax = 0 or Ax = b; normalized or non-normalized image points are substituted (normalization here means applying the inverse of the intrinsic matrix), and the position and attitude of the sensor are finally estimated, so that corresponding points between the target scene and the sensor projection can be found according to the type of point pair (2D-2D, 2D-3D or 3D-3D).
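A minimal version of this construction is the direct linear transform: the four collinearity equations are stacked into A x = 0 and solved by SVD. In the sketch, P1 and P2 are assumed to be the 3x4 projection matrices of the left and right cameras (left: K[I|0], right: K[R|t]); OpenCV's cv2.triangulatePoints performs the same computation for batches of correspondences.

```python
# Direct linear transform: the four collinearity equations stacked as A x = 0
# and solved by SVD; P1, P2 are the 3x4 left/right projection matrices.
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, uv1, uv2) -> np.ndarray:
    A = np.stack([
        uv1[0] * P1[2] - P1[0],      # left-camera u equation
        uv1[1] * P1[2] - P1[1],      # left-camera v equation
        uv2[0] * P2[2] - P2[0],      # right-camera u equation
        uv2[1] * P2[2] - P2[1],      # right-camera v equation
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null-space vector of A
    return X[:3] / X[3]              # homogeneous -> [Xc, Yc, Zc]
```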
3. Depth space data estimation based on dense mixed circulation multi-view stereoscopic network
Given a set of multi-view images and the corresponding calibrated camera parameters, the depth map of each key image is estimated next. First, each input image is taken as a reference image and fed, together with several adjacent images, into an efficient dense hybrid recurrent multi-view stereo network, DH-RMVSNet, which regresses the corresponding dense depth map. A dynamic consistency check algorithm is then used to filter all the estimated depth maps of the multi-view images, obtaining more accurate and reliable depth values by exploiting the geometric consistency of all neighbouring views.
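The consistency check itself does not need the network. A simplified per-pixel version is sketched below, assuming shared intrinsics K, ref-to-source extrinsics R, t, and illustrative thresholds; it is a simplification of the dynamic consistency checking described above, not its exact algorithm:

```python
# Simplified geometric consistency check between a reference depth map and one
# source view: reproject a pixel into the source view and back, then compare
# reprojection and relative depth errors against illustrative thresholds.
import numpy as np

def consistent(depth_ref, depth_src, K, R, t, u, v,
               max_reproj_px=1.0, max_rel_depth=0.01) -> bool:
    K_inv = np.linalg.inv(K)
    d = depth_ref[v, u]
    X = d * (K_inv @ np.array([u, v, 1.0]))        # back-project reference pixel
    p_s = K @ (R @ X + t)                          # project into the source view
    us, vs = int(round(p_s[0] / p_s[2])), int(round(p_s[1] / p_s[2]))
    if not (0 <= us < depth_src.shape[1] and 0 <= vs < depth_src.shape[0]):
        return False
    d_s = depth_src[vs, us]
    X_b = R.T @ (d_s * (K_inv @ np.array([us, vs, 1.0])) - t)  # back to reference
    p_b = K @ X_b
    reproj_err = np.hypot(p_b[0] / p_b[2] - u, p_b[1] / p_b[2] - v)
    rel_depth_err = abs(p_b[2] - d) / d
    return reproj_err < max_reproj_px and rel_depth_err < max_rel_depth
```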
Step S104, extracting feature points of the depth map set based on an equal proportion feature transformation method, performing iterative matching on the feature points through an approximate nearest neighbor algorithm until the feature points reach the nearest point, and forming feature tracks corresponding to the feature points based on an iterative matching process.
Specifically, feature points are extracted from the depth map group based on the equal-proportion feature transformation method: a corresponding image pyramid is constructed and feature points are detected at each pyramid layer, giving the feature points scale invariance. Following the pyramid's geometric proportions, the area $s_i$ of the $i$-th pyramid layer is proportional to its number of feature points $n_i$. Let the area of the layer-0 image be $s_0$, the scaling factor be $\alpha$ ($0 < \alpha < 1$), the number of layers be $m$, and the total number of features be $N$. The total area of the pyramid is the sum of the layer areas,

$$S = \sum_{i=0}^{m-1} s_0\,\alpha^{i} = s_0\,\frac{1-\alpha^{m}}{1-\alpha},$$

so the number of feature points expected per unit area, $N_s$, and the allocation per layer are:

$$N_s = \frac{N}{S},\qquad n_i = N_s\, s_0\,\alpha^{i} = N\,\frac{1-\alpha}{1-\alpha^{m}}\,\alpha^{i}.$$
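The per-level budget is then a geometric series; a small helper (with illustrative N, alpha and level count) makes the allocation explicit:

```python
# Per-level feature budget from the geometric-series allocation above;
# N, alpha and the level count m are illustrative values.
def features_per_level(N: int = 1000, alpha: float = 0.7, m: int = 8) -> list[int]:
    unit = N * (1 - alpha) / (1 - alpha ** m)     # N_s * s_0
    return [round(unit * alpha ** i) for i in range(m)]

print(features_per_level())   # most features are kept on the finest level
```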
the direction of the characteristic point is determined by using a gray centroid method and a moment method, so that rotation invariance of the characteristic point is realized, a gray expression I (x, y) of the image block B is defined, p, q= {0,1}, and the moment of the image block B is as follows:
by moments of image blocks BThe centroid of B can be found:
connecting the geometric center O and the centroid C of the image block to form a direction,/>The angle of (2) is the angle of the feature point:
based on the known image features, performing iterative matching on the feature points by using an approximate nearest neighbor algorithm, determining corresponding feature points, and constructing corresponding feature point tracks, wherein the feature point tracks are used for eliminating the probability of mismatching among the image feature points, wherein the approximate nearest neighbor algorithm can perform feature matching on all depth map groups through a binary tree principle, and after matching, tracking feature matching points appearing in a plurality of photos to form tracks.
In addition, in actual matching the image is particularly prone to pixel drift caused by lighting and noise, and mismatches can also occur, so an improved progressive sample consensus algorithm based on random sampling is adopted: a quality factor $q$ is defined to measure the quality of the matched point pairs, the pairs are sorted in descending order of $q$, and the higher-quality pairs are taken to compute the homography matrix. In descriptor matching, the similarity of feature points is expressed by the Hamming distance. With the minimum distance $d_1$ and the second-smallest distance $d_2$, the ratio $\beta = d_1/d_2$ represents the matching quality of the feature points, and $q$ is inversely related to $\beta$: the smaller $\beta$ (the more distinctive the match), the larger $q$. Matches whose $q$ value fails the preset threshold are considered substandard and are eliminated.
In addition, the preprocessing is performed on the key frame image group, and the preprocessing includes: affine regularization of images and image denoising.
Step S105, performing surface reconstruction by a mobile least square method based on the feature points and the corresponding feature trajectories, and generating a three-dimensional scene corresponding to the target scene.
Specifically, a two-dimensional vector contour of the target scene is determined from the feature points, and point cloud inversion is carried out on the divided virtual elevations; feature point cloud matching and oriented three-dimensional triangulation based on the moving least squares method then realize the three-dimensional curved surface reconstruction. The steps comprise: determining the corresponding straight-line features through elevation point cloud inversion of the vector contours corresponding to the feature points; performing elevation inversion based on the straight-line features to determine the real proportions of the corresponding buildings; and finally constructing the three-dimensional curved surface by the moving least squares method and performing mesh reconstruction through oriented three-dimensional triangulation and point cloud triangulation to determine the corresponding three-dimensional scene.
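As one way to picture the moving least squares step, each point can be projected onto a plane fitted to its Gaussian-weighted neighbourhood. This is a common MLS formulation with an assumed radius and weighting; the patent additionally couples it with elevation inversion and oriented triangulation, which are not sketched here:

```python
# Common moving-least-squares smoothing: project each point onto the plane
# fitted to its Gaussian-weighted neighbourhood (radius/weights assumed).
import numpy as np

def mls_project(points: np.ndarray, radius: float = 0.5) -> np.ndarray:
    out = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nb, w = points[d < radius], np.exp(-(d[d < radius] / radius) ** 2)
        c = (w[:, None] * nb).sum(0) / w.sum()               # weighted centroid
        q = nb - c
        cov = (w[:, None, None] * np.einsum('ni,nj->nij', q, q)).sum(0)
        normal = np.linalg.eigh(cov)[1][:, 0]                # smallest-eigenvalue vector
        out[i] = p - np.dot(p - c, normal) * normal          # project onto the plane
    return out
```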
The embodiment of the invention provides a low-carbon substation scene reconstruction method based on a visual synchronous positioning and mapping system, which acquires scene information of a target scene, determines layout constraint conditions of the camera group based on the scene information, and determines the layout scheme of the corresponding camera group in combination with a genetic algorithm; receives the initial images shot by the camera group, extracts the background patterns in the initial images, deletes the background patterns, acquires the image frames of the processed initial images, and extracts a key frame image group in combination with a preset frame-number extraction rule; constructs a multi-camera-system spatial perception model based on the coordinates of the camera group and the coordinates of the key frame image group, determines spatial feature point coordinates based on the line-line intersection principle through the binocular cameras in the camera group, and estimates depth space data of the key frame image group by combining the spatial perception model and the spatial feature point coordinates to obtain the corresponding depth map group; extracts feature points from the depth map group based on an equal-proportion feature transformation method, iteratively matches the feature points through an approximate nearest neighbor algorithm until they reach their nearest points, forms the feature tracks corresponding to the feature points based on the iterative matching process, and performs curved surface reconstruction by a moving least squares method based on the feature points and the corresponding feature tracks to generate a three-dimensional scene corresponding to the target scene. Therefore, synchronous restoration of scene reconstruction in the substation construction process can be realized by utilizing the characteristics of panoramic vision and stereoscopic perception, and the accuracy of the restoration result is improved.
In another embodiment, a flow chart of a low-carbon substation scene reconstruction method based on a visual synchronous positioning and mapping system may be as shown in fig. 2, where first, a layout of a camera group is determined, then, depth estimation is performed on an image of the laid camera group, image processing is performed, feature points are extracted, point cloud matching and track construction are performed, and finally, reconstruction of a three-dimensional curved surface is performed, so as to determine a three-dimensional scene corresponding to a target scene.
Fig. 3 shows a low-carbon substation scene reconstruction system based on a visual synchronous positioning and mapping system provided by an embodiment of the invention, which includes: a layout module S201, an extraction module S202, a depth map group module S203, an iterative matching module S204, and a curved surface reconstruction module S205, wherein:
the layout module S201 is configured to obtain scene information of a target scene, determine layout constraint conditions of a camera group based on the scene information, and determine a layout scheme of the corresponding camera group in combination with a genetic algorithm.
The extraction module S202 is configured to receive an initial image captured by the camera group, extract a background pattern in the initial image, delete the background pattern, obtain an image frame of the deleted initial image, and extract a key frame image group in combination with a preset frame number extraction rule.
And the depth image group module S203 is used for correspondingly constructing a multi-camera system space perception model based on the coordinates of the camera group and the coordinates of the key frame image group, determining space feature point coordinates based on a line intersection principle through the binocular cameras in the camera group, and estimating depth space data of the key frame image group by combining the multi-camera system space perception model and the space feature point coordinates to obtain a corresponding depth image group.
And the iteration matching module S204 is used for extracting the characteristic points of the depth map set based on an equal-proportion characteristic transformation method, carrying out iteration matching on the characteristic points through an approximate nearest neighbor algorithm until the characteristic points reach the nearest point, and forming a characteristic track corresponding to the characteristic points based on an iteration matching process.
And the curved surface reconstruction module S205 is used for reconstructing a curved surface by a mobile least square method based on the characteristic points and the corresponding characteristic tracks to generate a three-dimensional scene corresponding to the target scene.
The specific limitation of the low-carbon substation scene reconstruction system based on the visual synchronous positioning and mapping system can be referred to above, and the limitation of the low-carbon substation scene reconstruction method based on the visual synchronous positioning and mapping system is not repeated here. All or part of each module in the low-carbon substation scene reconstruction system based on the visual synchronous positioning and mapping system can be realized by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
Fig. 4 illustrates a physical schematic diagram of an electronic device, as shown in fig. 4, which may include: a processor (processor) 301, a memory (memory) 302, a communication interface (Communications Interface) 303 and a communication bus 304, wherein the processor 301, the memory 302 and the communication interface 303 perform communication with each other through the communication bus 304. The processor 301 may call logic instructions in the memory 302 to perform the following method: acquiring scene information of a target scene, determining layout constraint conditions of the camera group based on the scene information, and determining a layout scheme of the corresponding camera group by combining a genetic algorithm; receiving an initial image shot by a camera group, extracting background patterns in the initial image, deleting the background patterns, acquiring image frames of the deleted initial image, and extracting a key frame image group by combining a preset frame number extraction rule; correspondingly constructing a multi-camera system space perception model based on the coordinates of the camera group and the coordinates of the key frame image group, determining space feature point coordinates based on a line-line intersection principle through binocular cameras in the camera group, and estimating depth space data of the key frame image group by combining the multi-camera system space perception model and the space feature point coordinates to obtain a corresponding depth map group; extracting feature points from the depth map set based on an equal proportion feature transformation method, performing iterative matching on the feature points through an approximate nearest neighbor algorithm until the feature points reach the nearest points, forming feature tracks corresponding to the feature points based on the iterative matching process, and reconstructing a curved surface based on the feature points and the corresponding feature tracks through a moving least square method to generate a three-dimensional scene corresponding to the target scene.
Further, the logic instructions in memory 302 described above may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand alone product. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In another aspect, embodiments of the present invention further provide a non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor is implemented to perform the transmission method provided in the above embodiments, for example, including: acquiring scene information of a target scene, determining layout constraint conditions of the camera group based on the scene information, and determining a layout scheme of the corresponding camera group by combining a genetic algorithm; receiving an initial image shot by a camera group, extracting background patterns in the initial image, deleting the background patterns, acquiring image frames of the deleted initial image, and extracting a key frame image group by combining a preset frame number extraction rule; correspondingly constructing a multi-camera system space perception model based on the coordinates of the camera group and the coordinates of the key frame image group, determining space feature point coordinates based on a line-line intersection principle through binocular cameras in the camera group, and estimating depth space data of the key frame image group by combining the multi-camera system space perception model and the space feature point coordinates to obtain a corresponding depth map group; extracting feature points from the depth map set based on an equal proportion feature transformation method, performing iterative matching on the feature points through an approximate nearest neighbor algorithm until the feature points reach the nearest points, forming feature tracks corresponding to the feature points based on the iterative matching process, and reconstructing a curved surface based on the feature points and the corresponding feature tracks through a moving least square method to generate a three-dimensional scene corresponding to the target scene.
The system embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A low-carbon substation scene reconstruction method based on a visual synchronous positioning and mapping system is characterized by comprising the following steps:
acquiring scene information of a target scene, determining layout constraint conditions of camera groups based on the scene information, and determining a layout scheme of the corresponding camera groups by combining a genetic algorithm;
receiving an initial image shot by the camera group, extracting background patterns in the initial image, deleting the background patterns, acquiring image frames of the deleted initial image, and extracting a key frame image group by combining a preset frame number extraction rule;
correspondingly constructing a multi-camera system space perception model based on the coordinates of the camera group and the coordinates of the key frame image group, determining space feature point coordinates based on a line intersection principle by using binocular cameras in the camera group, and estimating depth space data of the key frame image group by combining the multi-camera system space perception model and the space feature point coordinates to obtain a corresponding depth map group;
extracting feature points from the depth map set based on an equal proportion feature transformation method, performing iterative matching on the feature points through an approximate nearest neighbor algorithm until the feature points reach the nearest point, and forming feature tracks corresponding to the feature points based on an iterative matching process;
based on the feature points and the corresponding feature tracks, performing curved surface reconstruction by a moving least squares method to generate a three-dimensional scene corresponding to the target scene;
the step of estimating depth space data of the key frame image group by combining the space perception model of the multi-camera system and the space feature point coordinates to obtain a corresponding depth image group comprises the following steps:
based on the multi-camera system space perception model, determining the corresponding relation between three-dimensional points in space and two-dimensional points in a pixel coordinate system, and determining the corresponding relation between the characteristic points in the target scene and the three-dimensional scene by combining the space characteristic point coordinates; and regressing the corresponding dense depth map by taking the corresponding key frame image group as a reference image according to the corresponding relation between the characteristic points in the target scene and the three-dimensional scene, and determining the corresponding depth map group through the dense depth map.
2. The method for reconstructing a scene of a low-carbon substation based on a visual synchronous positioning and mapping system according to claim 1, wherein determining layout constraint conditions of camera groups based on the scene information and combining genetic algorithm to determine a layout scheme of the corresponding camera groups comprises:
acquiring background physical information and background constraint conditions of a camera group in layout, and determining preliminary decision parameters of the camera group in layout based on the background physical information and the background constraint conditions;
and calculating the total coverage rate of the corresponding camera group based on the preliminary decision parameters and the scene weight of the target scene by combining a genetic algorithm, and determining a corresponding layout scheme based on the total coverage rate.
3. The method for reconstructing a low-carbon substation scene based on a visual synchronous positioning and mapping system according to claim 1, wherein the extracting and deleting the background pattern in the initial image comprises:
extracting a vegetation region and a sky region in the initial image, calculating a corresponding green view index based on the vegetation region, and calculating a corresponding sky view index based on the sky region;
and roughly extracting the initial image according to the pixel quantity corresponding to the green view index and the sky view index.
4. The method for reconstructing a low-carbon substation scene based on a visual synchronous positioning and mapping system according to claim 1, wherein the frame number extraction rule comprises:
the frame-number extraction interval between key frames extracted from the image frames is larger than a preset threshold;
the mapping thread is idle when the key frame is extracted, and the interval from the key frame to the last key frame exceeds a preset frame number; the number of points matching the RGB-D image points of the extracted key frame is smaller than that of the previous key frame;
the number of map points of the extracted key frame that are not in the common view with the image frames is larger than a preset number;
the distance between the position of the extracted key frame and that of the last key frame is greater than the baseline length.
5. The method for reconstructing a low-carbon substation scene based on a visual synchronous positioning and mapping system according to claim 1, wherein after the extracting the keyframe image group, the method further comprises:
preprocessing the key frame image group, wherein the preprocessing comprises affine regularization of the images and image denoising.
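(Illustrative sketch.) The claim-5 preprocessing could look like the following OpenCV pipeline. The denoising parameters are illustrative defaults, and the identity affine matrix is a placeholder for whatever normalising transform the calibration stage would supply:

```python
import cv2
import numpy as np

def preprocess(bgr):
    """Denoise, then apply a (here: identity) affine normalisation."""
    den = cv2.fastNlMeansDenoisingColored(bgr, None, 10, 10, 7, 21)
    A = np.float32([[1, 0, 0], [0, 1, 0]])    # placeholder affine transform
    h, w = den.shape[:2]
    return cv2.warpAffine(den, A, (w, h))
```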
6. The method for reconstructing a low-carbon substation scene based on a visual synchronous positioning and mapping system according to claim 1, wherein, when the feature points are iteratively matched by the approximate nearest neighbor algorithm, the method further comprises:
calculating, in each matching, the distance ratio with respect to the feature points of the previous matching, and eliminating the corresponding feature points when the distance ratio is smaller than a preset ratio.
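(Illustrative sketch.) Claim 6's ratio-based pruning resembles Lowe's ratio test applied on top of approximate nearest-neighbour matching. The choice of SIFT features and a FLANN matcher is a common combination, not necessarily the patent's; note also that in Lowe's convention a match is *kept* when the best-to-second-best distance ratio is below the threshold, so the inequality direction depends on how the claim defines its ratio. Image paths and the 0.75 threshold are assumptions:

```python
import cv2

# Hypothetical pair of keyframe images
img1 = cv2.imread("kf1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("kf2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN: approximate nearest-neighbour matching with KD-trees
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
matches = flann.knnMatch(des1, des2, k=2)

good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])   # keep only unambiguous matches
```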
7. A low-carbon substation scene reconstruction system based on a visual synchronous positioning and mapping system, which is characterized by comprising:
the layout module is used for acquiring scene information of a target scene, determining the layout constraint conditions of the camera group based on the scene information, and determining the layout scheme of the corresponding camera group in combination with a genetic algorithm;
the extraction module is used for receiving the initial image shot by the camera group, extracting and deleting the background pattern in the initial image, acquiring the image frames of the pruned initial image, and extracting the key frame image group in combination with a preset frame number extraction rule;
the depth map group module is used for constructing the corresponding multi-camera system space perception model based on the coordinates of the camera group and of the key frame image group, determining space feature point coordinates through the binocular cameras in the camera group based on the line-intersection principle, and estimating the depth space data of the key frame image group by combining the multi-camera system space perception model and the space feature point coordinates to obtain the corresponding depth map group;
the iterative matching module is used for extracting the feature points of the depth map group based on an equal-proportion feature transform method, iteratively matching the feature points through an approximate nearest neighbor algorithm until the nearest points are reached, and forming the feature tracks corresponding to the feature points based on the iterative matching process;
the curved surface reconstruction module is used for reconstructing a curved surface by the moving least squares method based on the feature points and the corresponding feature tracks, to generate the three-dimensional scene corresponding to the target scene;
wherein estimating the depth space data of the key frame image group by combining the multi-camera system space perception model and the space feature point coordinates to obtain the corresponding depth map group comprises:
determining, based on the multi-camera system space perception model, the correspondence between three-dimensional points in space and two-dimensional points in the pixel coordinate system, and determining the correspondence between the feature points in the target scene and the three-dimensional scene by combining the space feature point coordinates; and regressing the corresponding dense depth maps, with the corresponding key frame image group as reference images, according to that correspondence, and determining the corresponding depth map group from the dense depth maps.
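(Illustrative sketch.) The moving-least-squares reconstruction named in the curved-surface module can be demonstrated on a synthetic 2.5D point set: around each query location a distance-weighted plane is fitted and evaluated. The Gaussian bandwidth h and the synthetic data are assumptions:

```python
import numpy as np

def mls_height(query, pts, h=0.5):
    """Moving-least-squares fit of a local plane z = a + b*x + c*y around
    `query`, Gaussian-weighted by distance; returns the smoothed height."""
    d2 = np.sum((pts[:, :2] - query) ** 2, axis=1)
    w = np.exp(-d2 / h ** 2)                        # MLS weights
    sw = np.sqrt(w)
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    coef, *_ = np.linalg.lstsq(A * sw[:, None], pts[:, 2] * sw, rcond=None)
    return coef[0] + coef[1] * query[0] + coef[2] * query[1]

# Hypothetical noisy feature points (x, y, z)
rng = np.random.default_rng(0)
xy = rng.uniform(0, 5, size=(200, 2))
z = np.sin(xy[:, 0]) + 0.05 * rng.normal(size=200)
pts = np.column_stack([xy, z])

print(mls_height(np.array([2.5, 2.5]), pts))        # ~ sin(2.5) ~ 0.60
```

A full surface reconstruction would evaluate such local fits over a grid or mesh of query points; this sketch shows one evaluation.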
8. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the method for reconstructing a low-carbon substation scene based on a visual synchronous positioning and mapping system according to any one of claims 1 to 6.
9. A non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for reconstructing a low-carbon substation scene based on a visual synchronous positioning and mapping system according to any one of claims 1 to 6.
CN202310722014.3A 2023-06-19 2023-06-19 Low-carbon substation scene reconstruction method based on vision synchronous positioning and mapping system Active CN116452776B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310722014.3A CN116452776B (en) 2023-06-19 2023-06-19 Low-carbon substation scene reconstruction method based on vision synchronous positioning and mapping system

Publications (2)

Publication Number Publication Date
CN116452776A CN116452776A (en) 2023-07-18
CN116452776B true CN116452776B (en) 2023-10-20

Family

ID=87124155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310722014.3A Active CN116452776B (en) 2023-06-19 2023-06-19 Low-carbon substation scene reconstruction method based on vision synchronous positioning and mapping system

Country Status (1)

Country Link
CN (1) CN116452776B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005063012A (en) * 2003-08-08 2005-03-10 Nippon Telegr & Teleph Corp <Ntt> Omnidirectional camera motion and three-dimensional information restoration method, device, program, and recording medium storing the program
US10055898B1 (en) * 2017-02-22 2018-08-21 Adobe Systems Incorporated Multi-video registration for video synthesis
CN110349250A (en) * 2019-06-28 2019-10-18 浙江大学 Three-dimensional reconstruction method for indoor dynamic scenes based on an RGBD camera
CN110363821A (en) * 2019-07-12 2019-10-22 顺丰科技有限公司 Method, device, camera, and storage medium for acquiring the installation deviation angle of a monocular camera
CN110555901A (en) * 2019-09-05 2019-12-10 亮风台(上海)信息科技有限公司 Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes
CN111433818A (en) * 2018-12-04 2020-07-17 深圳市大疆创新科技有限公司 Target scene three-dimensional reconstruction method and system and unmanned aerial vehicle
CN113012212A (en) * 2021-04-02 2021-06-22 西北农林科技大学 Depth information fusion-based indoor scene three-dimensional point cloud reconstruction method and system
CN113298934A (en) * 2021-05-26 2021-08-24 重庆邮电大学 Monocular visual image three-dimensional reconstruction method and system based on bidirectional matching
WO2022002150A1 (en) * 2020-06-30 2022-01-06 杭州海康机器人技术有限公司 Method and device for constructing visual point cloud map
CN115471573A (en) * 2022-09-15 2022-12-13 齐丰科技股份有限公司 Method for correcting presetting bit offset of transformer substation cloud deck camera based on three-dimensional reconstruction

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139445B (en) * 2015-08-03 2018-02-13 百度在线网络技术(北京)有限公司 Scene reconstruction method and device
CN111599001B (en) * 2020-05-14 2023-03-14 星际(重庆)智能装备技术研究院有限公司 Unmanned aerial vehicle navigation map construction system and method based on image three-dimensional reconstruction technology
CN111860225B (en) * 2020-06-30 2023-12-12 阿波罗智能技术(北京)有限公司 Image processing method and device, electronic equipment and storage medium
US11721030B2 (en) * 2021-06-10 2023-08-08 Sony Group Corporation Method of autonomous hierarchical multi-drone image capturing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation; Jiazhao Zhang; 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); full text *
Simultaneous three-dimensional scene construction and localization algorithm for monocular vision; Shen Yehu, Liu Jilin, Du Xin; Acta Optica Sinica (05); full text *
Design system and application of hydraulic engineering based on three-dimensional parametric models; Wang Yanbo; Journal of Yellow River Conservancy Technical Institute; full text *

Also Published As

Publication number Publication date
CN116452776A (en) 2023-07-18

Similar Documents

Publication Publication Date Title
WO2021077720A1 (en) Method, apparatus, and system for acquiring three-dimensional model of object, and electronic device
CN110400363B (en) Map construction method and device based on laser point cloud
CN110335343B (en) Human body three-dimensional reconstruction method and device based on RGBD single-view-angle image
KR101923845B1 (en) Image processing method and apparatus
JP7224604B2 (en) Vehicle inspection system and method
CN113192179A (en) Three-dimensional reconstruction method based on binocular stereo vision
EP3756163B1 (en) Methods, devices, and computer program products for gradient based depth reconstructions with robust statistics
CN113724368B (en) Image acquisition system, three-dimensional reconstruction method, device, equipment and storage medium
CN113643434B (en) Three-dimensional modeling method based on air-ground cooperation, intelligent terminal and storage device
CN113379901A Method and system for establishing real-scene three-dimensional models of houses using public self-shot panoramic data
KR101593316B1 Method and apparatus for reconstructing a 3-dimensional model using a stereo camera
WO2024088071A1 (en) Three-dimensional scene reconstruction method and apparatus, device and storage medium
CN117456114B (en) Multi-view-based three-dimensional image reconstruction method and system
US20240087231A1 (en) Method, apparatus, computer device and storage medium for three-dimensional reconstruction of indoor structure
CN112132971B (en) Three-dimensional human modeling method, three-dimensional human modeling device, electronic equipment and storage medium
CN112102504A (en) Three-dimensional scene and two-dimensional image mixing method based on mixed reality
CN116452776B (en) Low-carbon substation scene reconstruction method based on vision synchronous positioning and mapping system
CN112017259A (en) Indoor positioning and image building method based on depth camera and thermal imager
CN113853559A (en) Control method, device and equipment of movable platform and storage medium
US20210241430A1 (en) Methods, devices, and computer program products for improved 3d mesh texturing
CN113808185B (en) Image depth recovery method, electronic device and storage medium
CN113225484A Method and device for rapidly acquiring high-definition pictures with non-target foreground masked
TW201816725A (en) Method for improving occluded edge quality in augmented reality based on depth camera
CN111630569B (en) Binocular matching method, visual imaging device and device with storage function
CN111369651A (en) Three-dimensional expression animation generation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant