CN114898055A - Virtual reality-based remote intelligent inspection method and system for extra-high voltage converter station

Info

Publication number
CN114898055A
Authority
CN
China
Prior art keywords
model
converter station
camera
point cloud
virtual model
Prior art date
Legal status: Pending
Application number
CN202210440402.8A
Other languages
Chinese (zh)
Inventor
樊培培
董翔宇
朱涛
李腾
张俊杰
廖军
罗沙
刘锋
王刘芳
黄道均
张学友
刘鑫
尼晓辉
邵华
张晗
王旗
谢佳
吴迪
石玮佳
柯艳国
Current Assignee
Super High Voltage Branch Of State Grid Anhui Electric Power Co ltd
State Grid Anhui Electric Power Co Ltd
Original Assignee
Super High Voltage Branch Of State Grid Anhui Electric Power Co ltd
State Grid Anhui Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Super High Voltage Branch Of State Grid Anhui Electric Power Co ltd and State Grid Anhui Electric Power Co Ltd
Priority to CN202210440402.8A
Publication of CN114898055A

Classifications

    • G06T17/05 Geographic models (3D modelling)
    • G06N3/045 Combinations of networks (neural network architectures)
    • G06N3/08 Learning methods (neural networks)
    • G06T15/02 Non-photorealistic rendering (3D image rendering)
    • G06T15/04 Texture mapping (3D image rendering)
    • G06T5/70 Denoising; Smoothing (image enhancement or restoration)
    • G06T5/77 Retouching; Inpainting; Scratch removal (image enhancement or restoration)
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention provides a virtual reality-based method and system for remote intelligent inspection of an extra-high voltage converter station, belonging to the technical field of intelligent inspection of extra-high voltage converter stations. The method comprises: acquiring point cloud data of a converter station, the point cloud data comprising geometric position and color information; denoising and format-converting the point cloud data to generate a point cloud mesh file; and performing three-dimensional modeling from the point cloud mesh file to generate a virtual model. By constructing a virtual model of the converter station and processing the video information of the cameras in the station so that the video is fused with the virtual model, the method and system enable inspection personnel to understand the field conditions of the converter station intuitively and thus carry out remote inspection; at the same time, they improve an enterprise's ability to maintain converter station equipment and reduce equipment maintenance costs.

Description

Virtual reality-based remote intelligent inspection method and system for extra-high voltage converter station
Technical Field
The invention relates to the technical field of intelligent inspection of extra-high voltage converter stations, and in particular to a virtual reality-based method and system for remote intelligent inspection of an extra-high voltage converter station.
Background
"reality" in the virtual reality-based ultra-high voltage converter station remote inspection technology generally refers to any thing or environment existing in the world in a physical or functional sense, and can be actually realizable, or can be difficult to realize in practice or impossible to realize at all. And "virtual" means computer-generated. Therefore, the virtual reality refers to a special environment generated by a computer, and people can use various special devices to 'project' the process and the result of the inspection of the extra-high voltage converter station to the environment, check, operate and control the environment, realize the purpose of remote inspection, and ensure the safe operation of the extra-high voltage converter station.
To realize remote inspection, the most common technology at present is the Internet of Things: any object is connected to a network through information sensing equipment according to an agreed protocol, and objects exchange information and communicate over an information transmission medium to realize functions such as intelligent identification, positioning, tracking and supervision.
Its core technologies include: 1. Sensor technology, which is also a key technology in computer applications. Most computers process digital signals, so ever since computers have existed, sensors have been needed to convert analog signals into digital signals that computers can process.
2. RFID tags, which are also a sensor technology. RFID is a comprehensive technology that integrates radio frequency technology with embedded technology, and it has broad application prospects in automatic identification and logistics management.
3. Embedded system technology: a complex technology that integrates computer software and hardware, sensor technology, integrated circuit technology and electronic application technology. After decades of evolution, intelligent terminal products built around embedded systems can be seen everywhere, from aerospace satellite systems to everyday devices. Embedded systems are changing people's lives and driving the development of industrial production and the defense industry. If the Internet of Things is compared to a human body, the sensors are the sense organs such as eyes, nose and skin; the network is the nervous system that transmits information; and the embedded system is the brain, which classifies the information it receives. This analogy vividly describes the position and role of sensors and embedded systems in the Internet of Things.
4. Intelligent technology: the methods and means employed to exploit knowledge in order to achieve a desired objective effectively. By embedding an intelligent system into an object, the object gains a degree of intelligence and can actively or passively communicate with users; this is also one of the key technologies of the Internet of Things.
With such an architecture, the required information can be collected through the sensor network: in practice a customer uses an RFID reader-writer and related sensors to collect the required data, which, after being aggregated at a gateway terminal, can be transmitted over a wireless network to the designated application system. Sensors can also use technologies such as ZigBee and Bluetooth to communicate effectively with the sensor gateway. Most sensors on the market can detect relevant parameters such as pressure, humidity or temperature.
However, the existing inspection technology can only monitor and control the specific environments and elements covered by the related front-end monitoring sensors, so its application scenarios are narrow; moreover, only numerical readings are available through the front-end system, and the visual display is insufficient, so field conditions, positions and the like cannot be shown intuitively.
In the process of implementing the present invention, the inventors found that the above prior art solution suffers from narrow application scenarios and does not lend itself to intuitive display of field conditions.
Disclosure of Invention
The invention aims to provide a virtual reality-based extra-high voltage converter station remote intelligent inspection method and system, which can facilitate inspection personnel to intuitively know the field condition of a converter station so as to improve the maintenance capability of enterprises on equipment.
In order to achieve the above object, an embodiment of the present invention provides a virtual reality-based method for remotely and intelligently inspecting an extra-high voltage converter station, including:
acquiring point cloud data of a converter station, wherein the point cloud data comprises geometric positions and color information;
denoising and format conversion are carried out on the point cloud data to generate a point cloud mesh file;
performing three-dimensional modeling according to the point cloud mesh file to generate a virtual model;
calibrating a camera in the virtual model;
acquiring video information of the camera in the converter station;
fusing the video information with the virtual model;
and monitoring and identifying the projected virtual model.
Optionally, performing three-dimensional modeling according to the point cloud mesh file to generate a virtual model comprises:
constructing a high-precision model of the full-line three-dimensional map;
performing UV splitting and planning operations on the full-line three-dimensional map high-precision model;
performing material drawing operations on the full-line three-dimensional map high-precision model;
performing texture-map drawing operations on the full-line three-dimensional map high-precision model;
integrating the results of the UV splitting and planning, material drawing, and texture-map drawing operations to form the virtual model.
Optionally, calibrating the camera in the virtual model includes:
acquiring a world coordinate system of a camera in the converter station;
calculating a projection coordinate system of the camera according to formula (1),
[x' y' z']^T = M · [x y z]^T, (1)

where x', y' and z' are the three coordinate axes of the projection coordinate system, M is the transformation matrix, and x, y and z are the three coordinate axes of the camera's physical coordinate system.
Optionally, calibrating the camera in the virtual model further includes:
acquiring the lens focal length of the camera and the short-side length of its photosensitive chip;
calculating the focal-length viewport of the camera according to formula (2),
fov = 2*arctan(y/(2f))*180°/π, (2)
where fov is the focal-length viewport, f is the lens focal length of the camera, and y is the short-side length of the camera's photosensitive chip;
acquiring the picture of the camera according to the focal-length viewport;
acquiring all meshes in the virtual model intersected by the picture's view frustum;
rendering all the meshes.
Optionally, fusing the video information with the virtual model includes:
acquiring the projection region that needs to be projected;
covering the projection region with the view frustum;
matching the picture of the view frustum with the projection region to obtain a model region to be reconstructed;
clipping the model region to be reconstructed to obtain a reconstructed model region;
mapping the UV coordinates of the reconstructed model region to 0-1;
clipping away the spatial region whose UV coordinates fall outside the 0-1 range;
calculating the vertex coordinates of the clip space according to formula (3),
o.pos=mul(unity_matrix_mvp,v.vertex),
o.texc=mul(unity_projector,v.vertex), (3)
where o.pos is the vertex coordinate in clip space, unity_matrix_mvp is the model-view-projection matrix, v.vertex is the vertex coordinate in model space, unity_projector is the projection matrix passed into the material by the Projector component, and o.texc is the texture coordinate in clip space.
Optionally, fusing the video information with the virtual model further includes:
transforming the vertex of the visual angle of the camera to a projection plane;
acquiring a depth map generated by the screen depth under the current visual angle;
and comparing the depth of the depth map with the depth of the model in the current space, and removing the back of the model to fuse the video information and the model.
Optionally, the monitoring and identifying the virtual model after projection includes:
constructing a convolutional neural network model;
acquiring training data;
preprocessing the training data;
inputting the preprocessed training data into the convolutional neural network to train the convolutional neural network.
Optionally, preprocessing the training data includes:
denoising the training data;
performing image enhancement and image restoration on the training data;
and carrying out normalization processing on the training data.
In another aspect, the invention also provides a virtual reality-based system for remote intelligent inspection of an extra-high voltage converter station, which comprises:
the cameras are arranged at a plurality of vertex positions of the converter station and used for acquiring a real-time picture of the converter station;
and the controller is connected with the plurality of cameras, is used for synchronizing real-time pictures of the plurality of cameras into the three-dimensional model of the converter station, and is used for executing the method.
In still another aspect, the present invention further provides a computer-readable storage medium storing instructions which, when read by a controller, cause the controller to execute the method described above.
By the technical scheme, the virtual model of the converter station is constructed, and the video information of the camera in the converter station is processed to realize the fusion of the video information and the virtual model, so that inspection personnel can visually know the field condition of the converter station, and the aim of remotely inspecting the converter station by the inspection personnel is fulfilled; meanwhile, the maintenance capability of the converter station equipment by enterprises is improved, and the maintenance cost of the equipment is reduced.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention. In the drawings:
FIG. 1 is a flow chart of a method for virtual reality based remote intelligent inspection of an extra-high voltage converter station according to an embodiment of the invention;
FIG. 2 is a flow chart of three-dimensional modeling in a virtual reality-based extra-high voltage converter station remote intelligent inspection method according to an embodiment of the invention;
FIG. 3 is a flow chart of calibrating a camera in a virtual reality-based method for remote intelligent inspection of an extra-high voltage converter station according to an embodiment of the invention;
FIG. 4 is a flow chart of calibrating a camera in the virtual reality-based extra-high voltage converter station remote intelligent inspection method according to one embodiment of the invention;
FIG. 5 is a flow chart of fusion of video information and a virtual model in a virtual reality-based method for remote intelligent inspection of an extra-high voltage converter station according to an embodiment of the invention;
FIG. 6 is a flow chart of fusion of video information and a virtual model in a virtual reality-based method for remote intelligent inspection of an extra-high voltage converter station according to an embodiment of the invention;
FIG. 7 is a flow chart of monitoring and identification of virtual models in a virtual reality-based method for remote intelligent inspection of an extra-high voltage converter station according to an embodiment of the invention;
FIG. 8 is an exemplary diagram of a virtual model of a converter station in a virtual reality-based method for remote intelligent inspection of an extra-high voltage converter station according to one embodiment of the invention;
FIG. 9 is an exemplary diagram of Internet of things monitoring data of a virtual model of a converter station in a virtual reality-based extra-high voltage converter station remote intelligent inspection method according to one embodiment of the invention;
fig. 10 is an exemplary diagram of visual intelligent identification in a virtual reality-based extra-high voltage converter station remote intelligent inspection method according to an embodiment of the invention.
Detailed Description
The following describes in detail embodiments of the present invention with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a flow chart of a virtual reality-based method for remote intelligent inspection of an extra-high voltage converter station according to an embodiment of the invention. In fig. 1, the method may include the following steps:
in step S10, point cloud data of the converter station is obtained, wherein the point cloud data includes geometric position and color information. The point cloud data of the converter station needs to be acquired in advance, so that the subsequent three-dimensional modeling operation is facilitated. Specifically, a laser scanner is adopted to scan a convertor station scene and equipment needing modeling, and point cloud data such as geometric positions and color information are acquired. When point cloud data of convertor station equipment are scanned, the scanner is ensured to be established in a stable environment, the three-dimensional scanning result is ensured not to be influenced by external factors, and the scanning interval and precision are set to ensure the data integrity. In addition, for important spaces (converter transformer, valve hall, etc.) such as a converter station equipment room, secondary scanning is required, and the scanning interval and precision are required to be set in advance to ensure data integrity. When the converter station is scanned by using the scanner, continuous scanning needs to be carried out from different angles in sequence, and the placement position of the scanner is changed, so that the positioning ball body is ensured to be replaced in sequence, and the omnibearing scanning of the converter station is realized.
In step S11, the point cloud data are denoised and format-converted to generate a point cloud mesh file. After the point cloud data are acquired, they need to be preprocessed. The preprocessing comprises two parts: denoising the point cloud data and converting the file format to the standard. The denoising operation mainly restores the converter station scene, trimming redundant point cloud data and deleting abnormal points. After the converter station has been scanned, the point cloud data captured by the laser scanner are manually registered to generate a three-dimensional point cloud map of the converter station, and an operator removes noise from the three-dimensional point cloud data and thins and smooths it. The format conversion converts the collected native point cloud data into a data format usable by the modeling software, namely a point cloud mesh file; a sketch of this preprocessing follows.
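The patent does not name the preprocessing tools it uses. As a rough, non-authoritative sketch, the open-source Open3D library can perform the denoising, thinning and point-cloud-to-mesh conversion described in step S11; the file names and parameter values below are assumptions.

import open3d as o3d

# Load the registered converter station scan (path is hypothetical).
pcd = o3d.io.read_point_cloud("converter_station_scan.ply")

# Denoise: drop points whose distance to their neighbours is anomalous.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Thin the cloud to a manageable density (5 cm voxels, an assumed value).
pcd = pcd.voxel_down_sample(voxel_size=0.05)

# Normals are required by the surface reconstruction step.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Convert to a triangle mesh (Poisson reconstruction) and export it in a
# format the three-dimensional modeling software can read.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("converter_station_mesh.obj", mesh)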
In step S12, three-dimensional modeling is performed according to the point cloud mesh file to generate a virtual model. After the point cloud mesh file is obtained, the preprocessed point cloud data can be converted into a three-dimensional model with a three-dimensional modeling software tool.
In step S13, the camera in the virtual model is calibrated. In order to enable the virtual model to project the picture of the camera in the converter station more accurately, the position of the camera in the virtual model and the viewport need to be calibrated.
In step S14, video information of a camera in the converter station is acquired. In order to project the real state of the converter station into the virtual model in real time, video information of a camera in the converter station in the real state needs to be acquired.
In step S15, the video information is fused with the virtual model. After the video information of the camera in the real converter station is acquired, the video monitored by the camera needs to be fused with the virtual model so that inspection personnel can remotely observe the real-time condition of the converter station.
In step S16, the projected virtual model is monitored and identified. After the video information and the virtual model are fused, the virtual model can visually display the specific conditions of the converter station in the real state, and at the moment, the virtual model can be monitored and identified in real time by inspection personnel or visual identification equipment so as to achieve the purpose of remote inspection.
In steps S10 to S16, a laser scanner scans the converter station scene and equipment to obtain point cloud data; the point cloud data are denoised and format-converted to generate a point cloud mesh file usable by three-dimensional modeling software, which then builds the virtual model. After the virtual model is obtained, the position and viewport of the camera are calibrated in the virtual model, the video information of the camera in the converter station is obtained, and the video information is fused with the virtual model so that the real condition of the converter station is displayed through the virtual model. Finally, the virtual model is monitored and identified in real time by inspection personnel or other equipment to achieve remote inspection.
The traditional Internet of Things approach commonly used for remote inspection connects objects to a network through information sensing equipment according to an agreed protocol to realize functions such as intelligent identification, positioning, tracking and supervision. However, this approach is strongly limited: it can only monitor and control the specific environments and elements covered by the existing front-end monitoring sensors, and it is not intuitive or convenient for inspection personnel. In the embodiment of the invention, the converter station is modeled with three-dimensional modeling software and the video information of the cameras in the station is fused with the virtual model. This not only enables remote inspection but also displays the real-time state of the converter station more intuitively, solves many inspection problems that the traditional Internet of Things cannot identify or detect, and greatly expands the scope of remote inspection. It also further improves an enterprise's ability to maintain converter station equipment, troubleshoot basic failures and carry out overhauls; at the same time it raises personnel skill levels, reduces the maintenance and repair costs of converter station equipment, and reduces losses caused by incorrect use of the equipment.
In this embodiment of the present invention, in order to obtain the virtual model of the converter station, the preprocessed point cloud data needs to be transmitted to the three-dimensional modeling software to generate the three-dimensional virtual model. In particular, the method may further comprise the steps as shown in fig. 2. In fig. 2, the method may further include:
In step S20, a high-precision model of the full-line three-dimensional map is constructed. The construction uses the PBR (Physically Based Rendering) workflow, currently the most advanced model construction technique on the market. PBR has several advantages. First, building the converter station's three-dimensional map materials with PBR is easier: material properties no longer need to be guessed but are set according to real converter station data, so the materials of the digital model reproduce the real station as closely as possible. Second, the materials look correct under all lighting conditions at the converter station, meeting the needs of multi-environment scenes. Third, PBR provides a stable art workflow and improves the construction efficiency of the converter station's three-dimensional map digital model. After the materials required for modeling are prepared, they are imported into DCC (Digital Content Creation) software, and a one-to-one retopology is performed with reference to the point cloud mesh data of the three-dimensional map to obtain high-precision position information for the converter station's digital models. The construction strictly complies with current digital model design specifications, with points, lines and surfaces laid out rationally, so that the best-looking three-dimensional map digital model is built with the least resource consumption.
In step S21, UV splitting and planning operations are performed on the full-line three-dimensional map high-precision model. To give each converter station three-dimensional map digital model a correct texture display, its UVs must be split and planned. A current mainstream UV splitting tool should be used, the UVs split strictly according to the design specification and laid out rationally so that no UV space is wasted and the texture-map utilization is as high as possible within the limited space.
In step S22, material drawing operations are performed on the full-line three-dimensional map high-precision model. Material drawing relies on the PBR workflow: the texture and finish of the real object are observed, and its material properties are reproduced in the DCC software.
In step S23, texture-map drawing operations are performed on the full-line three-dimensional map high-precision model. After the high-precision model is made, the height information, base color information, ambient occlusion information, normal information, metalness information and roughness information of the converter station's three-dimensional map model are stored in the corresponding Diffuse, AO, Normal, Metallic and Glossiness texture maps; Metallic, Glossiness and AO are later packed into a single Mask texture. Pre-storing this information reduces the performance cost of dynamic real-time computation; a channel-packing sketch follows.
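The patent only states that Metallic, Glossiness and AO are combined into one Mask texture. The following Python/Pillow sketch shows one assumed way of packing the three grayscale maps into the R, G and B channels of a single image; the channel assignment and file names are not taken from the patent.

from PIL import Image

# Load the three grayscale maps (file names are hypothetical; all three
# must share the same resolution).
metallic = Image.open("metallic.png").convert("L")
glossiness = Image.open("glossiness.png").convert("L")
ao = Image.open("ao.png").convert("L")

# Pack them into the R, G and B channels of one Mask texture so that the
# engine samples one map at run time instead of three.
mask = Image.merge("RGB", (metallic, glossiness, ao))
mask.save("mask.png")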
In step S24, the virtual model is formed by integrating the UV splitting and planning, material drawing, and texture-map drawing operations. The built three-dimensional model components are integrated into a complete area model, and the integration must satisfy the following model specifications throughout the modeling work:
(1) Correct the axes to ensure the model's orientation is right, which facilitates correct interaction with real-time converter station data later.
(2) Reset the digital model's scaling information so that objects display normally in the engine.
(3) Remove redundant materials and deal with unnamed materials to improve the scene's maintainability and performance.
(4) Assign texture-map paths: a model's maps are assigned to the same folder to prevent loss of path information.
(5) Optimize the model structure and reduce the number of points, lines and surfaces.
(6) Use the same material map for the same model.
(7) Name models according to the standard to facilitate later development.
(8) Keep map sizes reasonable and their content clear.
(9) Keep the layer group information in the DCC software clear to facilitate later maintenance.
(10) Adjust the model units to match the real object's units.
When the above specifications are confirmed, a digital twin three-dimensional model can be built for the extra-high voltage converter station, and an application carrier is provided for remote intelligent virtual inspection.
In steps S20 to S24, a high-precision model of the full-line three-dimensional map is constructed; UV splitting and planning, material drawing and texture-map drawing are then performed on the model in sequence; after drawing is completed, the components are integrated according to the model specifications to form the virtual model of the converter station. This modeling approach is highly precise and reproduces the station faithfully, which facilitates subsequent remote inspection by inspection personnel.
In the embodiment of the invention, in order to facilitate the subsequent fusion of the video information of the camera in the converter station and the virtual model, the projection coordinates of the camera in the virtual model also need to be calibrated. In particular, the method may further comprise the steps as shown in fig. 3. Specifically, in fig. 3, the method may further include:
in step S30, a world coordinate system of a camera in the converter station is acquired. When virtual models of the converter station are constructed, each model is located in a local coordinate system of the model, and an object needs to be referenced to a fixed coordinate origin in real world or virtual space of a computer to be able to determine the position of the object. Therefore, in order to accurately calibrate the projection coordinates of the camera in the virtual model, a point cloud scan is needed through a laser scanner to acquire the physical position of the camera in the converter station, i.e., the world coordinate system of the camera.
In step S31, the projection coordinate system of the camera is calculated according to formula (1),
[x' y' z']^T = M · [x y z]^T, (1)

where [x' y' z']^T is the projection coordinate system (x', y' and z' being its three coordinate axes), M is the transformation matrix, and [x y z]^T is the world coordinate system (x, y and z being the three coordinate axes of the camera's physical coordinate system). Through this matrix transformation, coordinates are transformed from world coordinate space to the projector's screen coordinate space; when rendering, the points mapped into the projector's screen space are replaced by the video pixels to be fused, achieving the video fusion effect.
In steps S30 to S31, the inspector performs point cloud scanning by using the laser scanner to obtain a physical position of the camera in the converter station, that is, a world coordinate system of the camera, and converts the world coordinate system into projection coordinates of the camera in the virtual model, thereby calibrating the projection coordinates of the camera in the virtual model, so as to facilitate the fusion of subsequent video information and the virtual model.
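As a minimal illustration of formula (1), the following numpy sketch applies a transformation matrix M to a camera position expressed in world coordinates; the matrix and coordinate values are placeholders rather than calibration data from the patent.

import numpy as np

# World-space position of the camera, as obtained from the laser point
# cloud scan (values are invented).
p_world = np.array([12.4, 3.1, 5.8])

# Assumed world-to-projection transformation matrix M.
M = np.array([
    [0.96, -0.26, 0.0],
    [0.26,  0.96, 0.0],
    [0.0,   0.0,  1.0],
])

# Formula (1): [x' y' z']^T = M · [x y z]^T
p_proj = M @ p_world
print(p_proj)  # the camera position in the projector's screen coordinate space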
In the embodiment of the invention, in order to facilitate the subsequent fusion of the video information of the camera in the converter station and the virtual model, the position of the camera in the virtual model needs to be calibrated. In particular, the method may further comprise the steps as shown in fig. 4. Specifically, in fig. 4, the method may further include:
in step S40, the lens focal length of the camera and the photosensitive chip short side length are acquired. The calibration of the monitoring point position of the camera includes the focal length of the lens and the length of the short side of the photosensitive chip, and also includes the physical position to be monitored and the resolution of the image. The monitoring physical position and the picture resolution function are used for ensuring that the video projection position is accurately matched with the virtual model position, and the monitoring focal length function is used for recording the current video, so that the monitoring picture scaling is further controlled through the visual angle scaling of the virtual model, and the video fusion effect is enhanced.
In step S41, the focal length viewport of the camera is calculated according to formula (2),
fov = 2*arctan(y/(2f))*180°/π, (2)

where fov is the focal-length viewport, f is the lens focal length of the camera, and y is the short-side length of the camera's photosensitive chip.
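Formula (2) can be checked with a short function, assuming the focal length and the chip's short side are given in the same units (the sample values are invented):

import math

def focal_length_viewport(f_mm, short_side_mm):
    # Formula (2): fov = 2 * arctan(y / 2f) * 180° / π, in degrees.
    return 2 * math.atan(short_side_mm / (2 * f_mm)) * 180 / math.pi

# Assumed camera parameters: 8 mm lens, 4.8 mm sensor short side.
print(focal_length_viewport(8.0, 4.8))  # ≈ 33.4 degrees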
In step S42, the picture of the camera is acquired from the focal-length viewport. Once the camera's focal-length viewport is determined, the picture inside it can be obtained.
In step S43, all meshes in the virtual model intersected by the picture's view frustum are acquired. The view frustum is the projection range of the camera in the virtual model.
In step S44, all the meshes are rendered. After the normal rendering of all the meshes is finished, they are rendered once more with the decal shader. A ClipToDecal matrix is passed into the shader, and the coordinates mapped into the decal box are computed in the fragment shader. The xy components can be used as uv coordinates; the parts whose uv coordinates fall outside 0-1 are clipped away, and the decal is rendered. Once the monitoring point positions have been calibrated, video fusion can be performed on the transmitted images.
In steps S40 to S44, the lens focal length and the photosensitive chip's short-side length are obtained, and the camera's focal-length viewport is calculated from them. Next, all meshes intersecting the picture are acquired according to the camera's focal-length viewport, and all mesh vertices are rendered and then rendered a second time. Calibrating the camera's monitoring position in this way facilitates the subsequent fusion of video pictures with the virtual model.
In this embodiment of the invention, in order to fuse the video information of the camera with the virtual model, it is also necessary to reconstruct the mesh within the view volume of the camera. In particular, the method may further comprise the steps as shown in fig. 5. Specifically, in fig. 5, the method may further include:
in step S50, a projection region that needs to be projected is acquired. Wherein, projection areas are determined in sequence according to the actual projection planning condition.
In step S51, the projection region is covered with the view frustum. The camera's view frustum covers the projection area to facilitate the subsequent mesh reconstruction within the frustum.
In step S52, the picture of the view frustum is matched with the projection area to obtain the model region to be reconstructed. First, determine which objects intersect the projection area, using the separating-axis test: two convex objects do not intersect if there is an axis on which their projections do not overlap; conversely, if the projections of two convex objects overlap on every axis, the objects intersect. By this method, the model intersected by the projection area of the converter station camera's video can be obtained.
In step S53, the model region to be reconstructed is clipped to obtain a reconstructed model region. And performing edge-by-edge cutting on the obtained model region to be reconstructed to obtain the reconstructed model region.
In step S54, the UV coordinates of the reconstructed model region are mapped into 0-1. The range of both u and v is [0,1]; UV is a set of data recording how a texture map should be applied to the model. UV coordinates are two-dimensional 0-1 coordinates, each corresponding to vertex data of the Mesh, with 0-1 expressing positions as percentages of the map. Taking a square mesh as an example, to cover it fully with a decal, the UVs of the Mesh's 4 vertices only need to be set to (0, 0) bottom-left, (1, 0) bottom-right, (0, 1) top-left and (1, 1) top-right, corresponding to the four corners of the decal.
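A small numpy sketch of the remapping in step S54, assuming the reconstructed region's UV coordinates arrive in an arbitrary rectangle (the input values are invented):

import numpy as np

# UV coordinates of the reconstructed model region (invented example data).
uv = np.array([[2.0, 5.0], [4.0, 5.0], [2.0, 9.0], [4.0, 9.0]])

# Rescale each axis so every coordinate lies in [0, 1].
uv_min = uv.min(axis=0)
uv_max = uv.max(axis=0)
uv01 = (uv - uv_min) / (uv_max - uv_min)
print(uv01)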
In step S55, the spatial region whose UV coordinates fall outside the 0-1 range is clipped away.
In step S56, the vertex coordinates of the clip space are calculated according to formula (3),
o.pos=mul(unity_matrix_mvp,v.vertex),
o.texc=mul(unity_projector,v.vertex), (3)
where o.pos is the vertex coordinate in clip space, unity_matrix_mvp is the model-view-projection matrix, v.vertex is the vertex coordinate in model space, unity_projector is the projection matrix the Projector component passes into the material, and o.texc is the texture coordinate in clip space. By analogy with the conventional spatial projection transform matrix MVP (Model-View-Projection), V here is the Projector-space matrix, i.e. the worldToLocalMatrix of the Transform to which the Projector component belongs, and P is the matrix associated with the Projector's near and far clipping. After the unity_projector matrix has been used to compute a vertex's coordinates in projection space, the object can be drawn using those coordinates as uv coordinates. A vertex passes through several coordinate spaces before it can be rendered on screen: model space, world space and view space, and finally clip space. In addition, the rendered primitives are clipped against the view frustum, keeping only the primitive regions within the visible range.
In steps S50 to S56, the region of the virtual model that needs projection is obtained, the camera's view frustum covers the projection region, and the projection region is matched with the frustum to obtain the model region to be reconstructed. That region is clipped edge by edge to obtain the reconstructed model region, whose UV coordinates are mapped into 0-1; the spatial region outside the 0-1 UV range is clipped away. Finally, the clip-space vertex coordinates are computed to allow the subsequent drawing of the object.
In this embodiment of the present invention, in order to achieve fusion of the video information and the virtual model, the depth of the screen also needs to be processed. In particular, the method may further comprise the steps as shown in fig. 6. Specifically, in fig. 6, the method may further include:
in step S60, the vertex of the angle of view of the camera is transformed to the projection plane. When the top point of the visual angle of the camera is transformed to the projection plane, the coordinate is transformed to the uv value domain, and then the depth map of the projection plane can be conveniently obtained.
In step S61, a depth map generated by the screen depth at the current view angle is acquired.
In step S62, the depth map is compared with the depth of the model in the current space, and the back of the model is removed so that the video information and the model are fused. Comparing the depth map with the model's depth in the current space reveals the parts where the two differ significantly; removing those differing parts on the model's back achieves the fusion of video information with the virtual model. An example of the fusion of the video information and the converter station's virtual model is shown in fig. 8 and fig. 9.
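A toy numpy illustration of the back-face removal in step S62: fragments whose model depth differs from the projector's depth map by more than a tolerance lie on the model's back and receive no video. All values, including the tolerance, are invented.

import numpy as np

depth_map = np.array([[1.0, 1.0], [2.0, 2.0]])       # depth seen by the projector
model_depth = np.array([[1.01, 3.5], [2.02, 6.0]])   # model depth in current space

EPS = 0.1  # assumed depth tolerance
visible = np.abs(model_depth - depth_map) < EPS

# Only visible texels receive the projected video; the rest are discarded.
print(visible)  # [[ True False], [ True False]]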
In steps S60 to S62, the vertices of the camera's view are transformed to the projection plane, the depth map of the projection plane is obtained, and the depth map is compared with the model's depth in the current space to find and remove the parts where the depths differ. This fuses the video information with the virtual model and facilitates the inspectors' work.
In the embodiment of the invention, once the parameters of each camera have been adjusted so that its video information fuses effectively with the three-dimensional model, the parameters are recorded and saved in a database. When the video of a certain camera needs to be watched, the stored parameters are read, the camera's parameters are adjusted automatically, and its video information is projected into the three-dimensional model and fused with the model, giving a visual display of the converter station's real-time condition.
In the embodiment of the present invention, in order to improve the intellectualization of the inspection of the converter station, the real-time situation of the virtual model also needs to be visually recognized. In particular, the method may further comprise the steps as shown in fig. 7. Specifically, in fig. 7, the method may further include:
in step S70, a convolutional neural network model is constructed.
In step S71, training data are acquired. The training data are the data of all converter station fault problems and their classifications.
In step S72, the training data are preprocessed. The preprocessing can include denoising, image enhancement, image restoration and normalization. Denoising improves the signal-to-noise ratio of the training data to reduce interference; image enhancement and restoration mainly target images that are unclear or damaged, to improve their integrity; normalization reduces overhead and improves algorithm performance, and algorithms such as deep learning require it to be used successfully.
In step S73, the preprocessed training data are input to the convolutional neural network to train it. Once the preprocessed training data are obtained, they are fed into the convolutional neural network model for training; the trained model can then recognize different types of faults in the converter station, achieving intelligent inspection. An example of visual intelligent fault identification is shown in fig. 10.
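The patent does not specify a network architecture. As a hedged illustration of steps S70 to S73, the PyTorch sketch below builds a small convolutional classifier and runs a few training steps on normalized stand-in data; the layer sizes, class count and data are assumptions.

import torch
import torch.nn as nn

NUM_FAULT_CLASSES = 5  # assumed number of converter station fault types

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, NUM_FAULT_CLASSES),  # sized for 224x224 inputs
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for one batch of preprocessed (denoised, normalized) images.
images = torch.rand(8, 3, 224, 224)             # already scaled to [0, 1]
labels = torch.randint(0, NUM_FAULT_CLASSES, (8,))

for _ in range(3):  # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()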
In steps S70 to S73, a convolutional neural network model is first constructed; data on all converter station fault problems and their classifications are then obtained as training data; the training data are denoised, enhanced, restored and normalized in sequence; and finally they are input to the convolutional neural network model for training. The trained model can monitor the fault condition of the converter station's three-dimensional virtual model in real time, reducing the inspectors' workload and improving the timeliness and accuracy of fault inspection.
In another aspect, the invention also provides a virtual reality-based system for remote intelligent inspection of an extra-high voltage converter station. In particular, the system may include a plurality of cameras and a controller.
The plurality of cameras are arranged at a plurality of vertex positions of the converter station and used for acquiring real-time pictures of the converter station. The controller is connected with the plurality of cameras, is used for synchronizing real-time pictures of the plurality of cameras into the three-dimensional model of the converter station, and is used for executing the method.
In yet another aspect, the present invention also provides a computer-readable storage medium, which may store instructions that, when read by a controller, cause the controller to execute the method described above.
By the technical scheme, the virtual model of the converter station is constructed, and the video information of the camera in the converter station is processed to realize the fusion of the video information and the virtual model, so that inspection personnel can visually know the field condition of the converter station, and the aim of remotely inspecting the converter station by the inspection personnel is fulfilled; meanwhile, the maintenance capability of the converter station equipment by enterprises is improved, and the maintenance cost of the equipment is reduced.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A virtual reality-based remote intelligent inspection method for an extra-high voltage converter station is characterized by comprising the following steps:
acquiring point cloud data of a converter station, wherein the point cloud data comprises geometric positions and color information;
denoising and format conversion are carried out on the point cloud data to generate a point cloud mesh file;
performing three-dimensional modeling according to the point cloud mesh file to generate a virtual model;
calibrating a camera in the virtual model;
acquiring video information of the camera in the converter station;
fusing the video information with the virtual model;
and monitoring and identifying the projected virtual model.
2. The method of claim 1, wherein performing three-dimensional modeling from the point cloud mesh file to generate a virtual model comprises:
constructing a high-precision model of the full-line three-dimensional map;
performing UV splitting and planning operations on the full-line three-dimensional map high-precision model;
performing material drawing operations on the full-line three-dimensional map high-precision model;
performing texture-map drawing operations on the full-line three-dimensional map high-precision model;
integrating the results of the UV splitting and planning, material drawing, and texture-map drawing operations to form the virtual model.
3. The method of claim 1, wherein calibrating the camera in the virtual model comprises:
acquiring a world coordinate system of a camera in the converter station;
calculating a projection coordinate system of the camera according to formula (1),
(x', y', z')^T = M · (x, y, z)^T, (1)
wherein x', y', and z' are the three coordinate axes of the projection coordinate system, M is the transformation matrix, and x, y, and z are the three coordinate axes of the physical coordinate system of the camera.
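Formula (1) is a linear change of coordinates. A minimal numpy sketch follows, where the transformation matrix (written M above) is filled with placeholder values standing in for an actual calibration result:

```python
# Sketch of formula (1): camera physical coordinates -> projection coordinates.
import numpy as np

# Placeholder transformation matrix; a real M comes from camera calibration.
M = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

xyz = np.array([12.0, 3.5, 8.0])  # point in the camera's physical frame
xyz_proj = M @ xyz                # (x', y', z')^T = M (x, y, z)^T
print(xyz_proj)
```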
4. The method of claim 3, wherein calibrating the camera in the virtual model further comprises:
acquiring the lens focal length of the camera and the short-side length of the photosensitive chip;
calculating the field-of-view angle of the camera according to formula (2),
fov = 2*arctan(y/(2f))*180°/π, (2)
wherein fov is the field-of-view angle, f is the lens focal length of the camera, and y is the short-side length of the photosensitive chip of the camera;
acquiring a picture of the camera according to the field-of-view angle;
acquiring all meshes in the virtual model intersected by the picture and the view frustum;
and rendering all of the meshes.
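As a quick check of formula (2): with an assumed full-frame sensor (short side y = 24 mm) and a 35 mm lens, the field of view works out to about 37.8°.

```python
# Sketch of formula (2): field-of-view angle from focal length and sensor size.
import math

def fov_degrees(short_side_mm: float, focal_length_mm: float) -> float:
    # fov = 2 * arctan(y / (2f)) * 180 / pi
    return 2 * math.atan(short_side_mm / (2 * focal_length_mm)) * 180 / math.pi

print(fov_degrees(24.0, 35.0))  # ~37.8 degrees (assumed sensor and lens)
```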
5. The method of claim 4, wherein fusing the video information with the virtual model comprises:
acquiring a projection area to be projected;
covering the view frustum with the projection area;
matching the picture of the view frustum with the projection area to obtain a model area to be reconstructed;
clipping the model area to be reconstructed to obtain a reconstructed model area;
mapping the UV coordinates of the reconstructed model area to the range 0-1;
clipping the spatial area whose UV coordinates fall outside the range 0-1;
calculating the vertex coordinates of the clip space according to formula (3),
o.pos = mul(unity_matrix_mvp, v.vertex),
o.texc = mul(unity_Projector, v.vertex), (3)
wherein o.pos is the vertex coordinate of the clip space, unity_matrix_mvp is the model-view-projection matrix, v.vertex is the vertex coordinate of the model space, unity_Projector is the projection matrix passed into the material ball by the Projector component, and o.texc is the texture coordinate of the clip space.
6. The method of claim 5, wherein fusing the video information with the virtual model further comprises:
transforming the vertices in the camera's view to the projection plane;
acquiring a depth map generated from the screen depth at the current viewing angle;
and comparing the depth of the depth map with the depth of the model in the current space and culling the back of the model, so as to fuse the video information with the model.
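A toy numpy sketch of the depth comparison in claim 6, assuming the screen depth map and the model depth have already been rendered into aligned arrays (the values and tolerance are made up):

```python
# Sketch of claim 6's depth test: cull model pixels behind the depth map.
import numpy as np

depth_map = np.array([[0.30, 0.55], [0.80, 0.40]])    # screen depth, current view
model_depth = np.array([[0.31, 0.90], [0.79, 0.38]])  # model depth, same pixels

eps = 0.02  # assumed comparison tolerance
visible = model_depth <= depth_map + eps  # False marks the model's back side

print(visible)  # only visible pixels receive the projected video
```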
7. The method of claim 1, wherein monitoring and identifying the virtual model after projection comprises:
constructing a convolutional neural network model;
acquiring training data;
preprocessing the training data;
and inputting the preprocessed training data into the convolutional neural network model to train the network.
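The claims do not fix a network architecture. Purely as an assumed illustration, a small PyTorch classifier and a single training step for claim 7 might look like this (the layer sizes, two-class labeling, and all hyperparameters are invented for the sketch):

```python
# Sketch of claim 7: build a CNN and run one training step (assumed design).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # hypothetical classes: normal vs. defect
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 64, 64)  # stand-in for preprocessed training frames
labels = torch.randint(0, 2, (8,))  # stand-in for inspection annotations

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```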
8. The method of claim 7, wherein preprocessing the training data comprises:
denoising the training data;
performing image enhancement and image restoration on the training data;
and normalizing the training data.
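One plausible realization of claim 8's preprocessing with OpenCV (the denoising strengths and CLAHE settings are assumptions; image restoration is omitted here, though cv2.inpaint with a defect mask would be one natural choice):

```python
# Sketch of claim 8: denoise, enhance, and normalize one training image.
import cv2
import numpy as np

img = cv2.imread("frame.jpg")  # hypothetical training frame

# Denoising: non-local means on the color image (assumed strengths).
img = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)

# Image enhancement: CLAHE on the luminance channel.
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
lab[:, :, 0] = clahe.apply(lab[:, :, 0])
img = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# Normalization: scale pixel values to [0, 1] for the network input.
x = img.astype(np.float32) / 255.0
```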
9. A virtual reality-based remote intelligent inspection system for an extra-high voltage converter station, characterized by comprising:
a plurality of cameras arranged at a plurality of vertex positions of the converter station and configured to acquire real-time pictures of the converter station;
and a controller connected to the plurality of cameras and configured to synchronize the real-time pictures of the plurality of cameras into the three-dimensional model of the converter station and to perform the method according to any one of claims 1 to 8.
10. A computer-readable storage medium storing instructions to be read by a controller so as to cause the controller to execute the method according to any one of claims 1 to 8.
CN202210440402.8A 2022-04-25 2022-04-25 Virtual reality-based remote intelligent inspection method and system for extra-high voltage converter station Pending CN114898055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210440402.8A CN114898055A (en) 2022-04-25 2022-04-25 Virtual reality-based remote intelligent inspection method and system for extra-high voltage converter station

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210440402.8A CN114898055A (en) 2022-04-25 2022-04-25 Virtual reality-based remote intelligent inspection method and system for extra-high voltage converter station

Publications (1)

Publication Number Publication Date
CN114898055A true CN114898055A (en) 2022-08-12

Family

ID=82717438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210440402.8A Pending CN114898055A (en) 2022-04-25 2022-04-25 Virtual reality-based remote intelligent inspection method and system for extra-high voltage converter station

Country Status (1)

Country Link
CN (1) CN114898055A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116863087A (en) * 2023-06-01 2023-10-10 中国航空油料集团有限公司 Digital twinning-based navigation oil information display method and device and readable storage medium
CN116863087B (en) * 2023-06-01 2024-02-02 中国航空油料集团有限公司 Digital twinning-based navigation oil information display method and device and readable storage medium
CN116612223A (en) * 2023-07-17 2023-08-18 金锐同创(北京)科技股份有限公司 Digital twin simulation space generation method, device, computer equipment and medium
CN116612223B (en) * 2023-07-17 2023-10-17 金锐同创(北京)科技股份有限公司 Digital twin simulation space generation method, device, computer equipment and medium

Similar Documents

Publication Publication Date Title
US11869192B2 (en) System and method for vegetation modeling using satellite imagery and/or aerial imagery
Zhang et al. Image engineering
CN114898055A (en) Virtual reality-based remote intelligent inspection method and system for extra-high voltage converter station
CN110880200A (en) Intelligent checking and accepting method for GIM model engineering based on three-dimensional reconstruction technology
CN109559381B (en) Transformer substation acceptance method based on AR space measurement technology
WO2023241097A1 (en) Semantic instance reconstruction method and apparatus, device, and medium
CN115641401A (en) Construction method and related device of three-dimensional live-action model
CN112802208B (en) Three-dimensional visualization method and device in terminal building
CN111770450B (en) Workshop production monitoring server, mobile terminal and application
CN114035515A (en) Digital twin system for discrete workshop production process and construction method thereof
CN115082254A (en) Lean control digital twin system of transformer substation
CN109064533A (en) A kind of 3D loaming method and system
CN111563961A (en) Three-dimensional modeling method and related device for transformer substation
CN114863061A (en) Three-dimensional reconstruction method and system for remote monitoring medical image processing
CN113627005B (en) Intelligent vision monitoring method
CN114202819A (en) Robot-based substation inspection method and system and computer
CN116822159B (en) Digital twin workshop rapid modeling method for dynamic and static fusion of man-machine environment
CN116243623B (en) Robot scene simulation method applied to digital robot industrial chain
Drofova et al. Use of scanning devices for object 3D reconstruction by photogrammetry and visualization in virtual reality
CN101414380B (en) Method for calibrating simple graph of panorama camera
CN116246142A (en) Three-dimensional scene perception method for multi-sensor data fusion requirement
Molnár et al. ToFNest: Efficient normal estimation for time-of-flight depth cameras
CN114792343A (en) Calibration method of image acquisition equipment, and method and device for acquiring image data
CN114154032A (en) Three-dimensional visual three-dimensional inspection method, system and device for transformer substation and storage medium
CN115527008A (en) Safety simulation experience training system based on mixed reality technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination