CN117635728A - Method for realizing digital twin by utilizing network camera based on BIM technology - Google Patents
- Publication number
- CN117635728A (application CN202311597333.2A)
- Authority
- CN
- China
- Prior art keywords
- camera
- picture
- bim
- dimensional point
- point set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Abstract
The invention relates to a method for realizing a digital twin by utilizing a network camera based on BIM technology, comprising the following steps: calibrating the camera to determine the camera internal parameters; removing distortion from the camera picture; selecting at least 5 one-to-one corresponding pairs of two-dimensional and three-dimensional points to obtain a two-dimensional point set and a three-dimensional point set; calculating the rotation matrix and displacement matrix of the camera relative to the BIM model space coordinate system with the SolvePnP function, according to the camera internal parameters and the two point sets; storing the camera internal parameters and camera external parameters, where the external parameters comprise the camera ID, project, scene ID, the rotation matrix and the displacement matrix; superimposing and aligning the BIM model with the camera video picture according to the camera internal and external parameters; and having an AR environment probe system interact with the superimposed picture through an interface. The invention adds a BIM scene to the network camera picture and also enables interaction with that picture.
Description
Technical Field
The invention relates to the technical field of digital twinning, in particular to a method for realizing digital twinning by utilizing a network camera based on a BIM technology.
Background
Currently, AR/MR technology is mainly applied to mobile devices such as iPads, mobile phones and head-mounted display devices, and essentially none of the well-known AR/MR engines provides an AR solution based on a streamed video feed (network camera).
A large number of network cameras are already deployed, but their pictures can only be watched; they provide no other functions.
Therefore, it is necessary to provide a method for implementing a digital twin by utilizing a network camera based on BIM technology, which adds a BIM scene to the network camera picture and thereby enables interaction with that picture.
Disclosure of Invention
The invention aims to provide a method for realizing a digital twin by utilizing a network camera based on BIM technology, which adds a BIM scene to the network camera picture and enables interaction with that picture.
In order to solve the problems in the prior art, the invention provides a method for realizing digital twin by utilizing a network camera based on BIM technology, which comprises the following steps:
calibrating a camera to determine camera internal parameters;
removing distortion of a camera picture;
selecting at least 5 pairs of two-dimensional points and three-dimensional points which are in one-to-one correspondence to obtain a two-dimensional point set and a three-dimensional point set;
calculating a rotation matrix and a displacement matrix of the camera relative to a BIM model space coordinate system by adopting a SolvePnP function according to the camera internal parameters, the two-dimensional point set and the three-dimensional point set;
storing camera internal parameters and camera external parameters, wherein the camera external parameters comprise the camera ID, project, scene ID, the rotation matrix and the displacement matrix;
superposing and aligning the BIM model and the camera video picture according to the camera internal parameters and the camera external parameters;
an AR environment probe system interacts with the superimposed picture through an interface.
Optionally, in the method for implementing digital twin by using a network camera based on the BIM technology, a camera calibration method of a single-plane checkerboard is adopted to perform camera calibration.
Optionally, in the method for implementing digital twin by using a webcam based on the BIM technology, the mode of removing distortion of the camera picture is as follows:
mapping each pixel on the distortion-corrected image to its pixel position on the image before de-distortion;
and obtaining the pixel value of each pixel by bilinear interpolation calculation.
Optionally, in the method for implementing digital twin by using a webcam based on the BIM technology, the manner of overlapping and aligning the BIM model and the camera video frame is as follows:
placing the video picture of the camera to a scene space position through the rotation matrix and the displacement matrix;
and rendering the BIM model and superimposing it on the camera video picture, so that the video picture coincides with the picture of the BIM model.
Optionally, in the method for implementing digital twin by using a webcam based on the BIM technology, the manner of interaction between the AR environment probe system and the superimposed picture through an interface is as follows:
capturing a superimposed image by an environment detector of the AR environment probe system;
and organizing the superimposed pictures into an environmental texture;
rendering using the environmental texture to match the overlay to a real world environment;
and interacting with the superimposed picture through the AR environment probe system.
Compared with the prior art, the invention has the following advantages:
1) The position of the real camera in the BIM coordinate space is obtained by collecting corresponding points between the camera image and BIM model space reference points.
2) The BIM three-dimensional scene is aligned with the camera picture in visual space, enriching the digital information contained in the camera picture.
3) Human-machine interaction is added, achieving a digital twin effect.
Drawings
FIG. 1 is a flow chart of a method for implementing digital twinning according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a camera imaging principle model according to an embodiment of the present invention;
FIG. 3 is a schematic view of image distortion according to an embodiment of the present invention;
fig. 4 is a corresponding diagram of two-dimensional point and three-dimensional point coordinates according to an embodiment of the present invention.
Detailed Description
Specific embodiments of the present invention will be described in more detail below with reference to the drawings. The advantages and features of the present invention will become more apparent from the following description. It should be noted that the drawings are in a very simplified form and are all to a non-precise scale, merely for convenience and clarity in aiding in the description of embodiments of the invention.
Hereinafter, if a method described herein includes a series of steps, the order of the steps presented herein is not necessarily the only order in which the steps may be performed, and some of the described steps may be omitted and/or some other steps not described herein may be added to the method.
A large number of network cameras are already deployed, but their pictures can only be watched; they provide no other functions.
In order to solve the problems in the prior art, the invention provides a method for realizing digital twin by using a network camera based on BIM technology, as shown in figure 1, the method comprises the following steps:
S1: calibrating the camera to determine the camera internal parameters; here the camera refers to the imaging unit of the network camera.
Specifically, camera calibration establishes the relation between camera image pixel positions and scene point positions: according to the camera imaging model, the parameters of the camera model are solved from the correspondence between the coordinates of feature points in the image and their world coordinates.
The pinhole camera imaging principle converts real three-dimensional world coordinates into two-dimensional camera coordinates by projection; a model schematic of this principle is shown in fig. 2. As can be seen from fig. 2, all points on a line through the camera center in world coordinates project to a single point on the image, so the projection discards a great deal of important information; recovering it is the focus and difficulty of 3D reconstruction, object detection and recognition. In practice, lenses are not ideal perspective-imaging devices and exhibit varying degrees of distortion, which in theory is divided into radial distortion and tangential distortion.
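The ideal (distortion-free) pinhole projection described above can be written compactly as follows; this is the standard formulation, using the intrinsic parameters $(f_x, f_y, c_x, c_y)$ and the extrinsics $[R_c \mid C]$ that appear later in the derivation:

```latex
\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
 = K \, [\, R_c \mid C \,]
   \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
```

Here $\lambda$ is the depth of the point along the optical axis; the loss of $\lambda$ in the projection is exactly why a single image cannot recover the three-dimensional point.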
Since the degree of distortion varies from lens to lens due to production and assembly tolerances, lens distortion can be corrected through camera calibration, yielding a corrected image (i.e. one with the lens distortion removed).
Camera calibration is performed with the single-plane checkerboard calibration method. In this method, the checkerboard used for calibration is a plane Π in the three-dimensional scene, and its image on the imaging plane is another plane π; from the coordinates of corresponding points in the two planes, the homography matrix H between them is solved. The calibration checkerboard is specially made, so the coordinates of its corner points are known; the corner points in the image are obtained with a corner extraction algorithm (such as Harris corners), which yields the homography matrix H between the checkerboard plane Π and the image plane π.
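The plane-to-plane homography just described can be estimated with a short direct linear transform (DLT); the following is a minimal NumPy sketch, not the patent's implementation (the function name `estimate_homography` and the synthetic points are ours):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate H such that dst ~ H @ src (in homogeneous coordinates),
    from >= 4 point correspondences, via the direct linear transform."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two linear equations in the 9 entries of H
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # least-squares null vector of A: the last right-singular vector
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the scale ambiguity
```

In an OpenCV-based pipeline this corresponds to `cv2.findHomography` (which adds robust outlier rejection), with the checkerboard corners supplied by a detector such as `cv2.findChessboardCorners`.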
S2: removing distortion from the camera picture. As shown in fig. 3, the first image is an image after de-distortion, and the second and third images are images before de-distortion. Image distortion is caused by lens manufacturing tolerances and assembly deviations; it warps the original image, so the picture cannot be aligned with the BIM scene rendered by the 3D engine. To align the camera picture with the rendered BIM scene picture, the camera picture must therefore be de-distorted.
To obtain a high-quality de-distorted image, each pixel on the distortion-corrected image is mapped to its pixel position on the image before de-distortion, and its pixel value is obtained by bilinear interpolation. This requires the mapping from de-distorted image coordinates to the pixel positions of the original image: take a point on the corrected image; first back-project it to the normalized plane through the camera internal parameters; then compute its normalized coordinates before de-distortion using the camera distortion parameters; finally multiply by the intrinsic matrix to obtain the corresponding pixel position in the distorted image (i.e. the position before de-distortion).
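The mapping and the bilinear lookup can be sketched as follows. This is a simplified model assuming a single radial coefficient k1 (a real lens model adds k2, p1, p2, ...), and the function names are ours:

```python
import numpy as np

def distorted_position(u, v, K, k1):
    """Map a pixel (u, v) of the corrected image to its position in the
    original distorted image (radial model with a single coefficient k1)."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x, y = (u - cx) / fx, (v - cy) / fy              # back to the normalized plane
    r2 = x * x + y * y
    xd, yd = x * (1 + k1 * r2), y * (1 + k1 * r2)    # normalized coords before de-distortion
    return fx * xd + cx, fy * yd + cy                # multiply by the intrinsics again

def bilinear(img, x, y):
    """Pixel value at the non-integer location (x, y) by bilinear interpolation."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])
```

OpenCV packages this per-pixel loop as `cv2.initUndistortRectifyMap` followed by `cv2.remap`.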
S3: from the video stream picture, two-dimensional points that are easy to recognize, such as sharp corners of components, are preferably selected. In the BIM model scene, vertex snapping is used to pick the corresponding three-dimensional BIM model points. At least 5 one-to-one corresponding pairs of two-dimensional and three-dimensional points are selected to obtain a two-dimensional point set and a three-dimensional point set; the more corresponding point pairs, the more accurate the solution.
S4: calculating a rotation matrix and a displacement matrix of the camera relative to a BIM model space coordinate system by adopting a SolvePnP function according to the camera internal parameters, the two-dimensional point set and the three-dimensional point set;
specifically, referring to fig. 4, given a matching point pair: n three-dimensional point coordinates in the BIM scene coordinate system (OwXwYwZw in the figure) and two-dimensional point coordinates corresponding thereto in the network camera image coordinate system (ouv in the figure).
S41: obtaining a pose transformation relation between a world coordinate system OXwYwZw and a camera coordinate system OXcXcYcZc through camera internal parameters:
R c a matrix required to represent the rotation of the camera to the current pose in the world coordinate system; c denotes the position of the camera center in the world coordinate system.
S42: according to the camera internal parameters given in the S1 and the two-dimensional point set and the three-dimensional point set given in the S3, calculating a rotation matrix and a displacement matrix of the camera relative to a BIM model space coordinate system by adopting a SolvePnP function;
the three-dimensional point homogeneous sitting marks under the world coordinate system and the two-dimensional point homogeneous sitting marks under the image coordinate system are respectively as follows:
the three-dimensional point-to-two-dimensional point projection can be expressed as:
wherein λ is the optical axis of the camera, perpendicular to the image plane, containing (f x ,f y ,c x ,c y ) The matrix is the reference matrix of the camera, although R c |C]Has 6 degrees of freedom, R c There are 9 parameters, but only 3 degrees of freedom, because the rotation matrix has orthogonal constraints. In DLT algorithm, R is ignored first c According to the orthogonal constraint of [ R ] c |C]There are 12 unknown parameters x= [ a ] 1 ,a 2 ,…,a 12 ] T The above equation can be calculated as:
unfolding, eliminating lambda and writing into a matrix form as follows:
from the above derivation, 1 pair of three-dimensional points and two-dimensional point pairs provides two equations, i.e., the above equation. When the point logarithm n is equal to or greater than 5, an equation is generated: ax=0, where a and x represent the two matrices of the above formula, respectively, and a has a size of 2n×12. This equation does not solve for exactly, but can get a least squares solution argmin Ax under the constraint of |x|=1 2 . Specifically, SVD decomposition is performed on A to obtain [ U ΣV ]]=SVD(A);
The next column x of the set of V matrices is the solution of x in ax=0. The result of the solution is not scale, that is to say the actual solution is:wherein beta is a proportionality coefficient, ">
The rotating part is as follows:
which is an orthogonal matrix with scale, is SVD decomposed for optimal rotation matrixThe optimal rotation matrix is: r= ±uv T T is a translation matrix; theoretically, the diagonal lines of Σ should be very similar, the average value is taken, and the solution to obtain the proportionality coefficient is: beta= ±1/(tr (Σ)/3);
with the addition of a constraint, the three-dimensional point should be in front of the camera:
the + -sign of beta and R can be determined. Then, a displacement matrix is obtained:
the rotation matrix and displacement matrix of the camera relative to the BIM model space coordinate system are solved by SolvePnP (deduction of the steps).
S5: storing the camera internal parameters and camera external parameters. The internal parameters are shared by cameras of the same brand and model, i.e. cameras of the same type use the same internal parameter data; the external parameters comprise the camera ID, project, scene ID, the rotation matrix and the displacement matrix.
S6: placing the video picture of the camera at its scene space position through the rotation matrix and displacement matrix, then rendering the BIM model and superimposing it on the camera video picture, so that the video picture coincides with the picture of the BIM model.
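Placing the video picture in scene space amounts to composing the stored rotation and displacement into the 4x4 world-to-camera (view) matrix that the 3D engine uses when rendering the BIM model; a minimal sketch (helper names are ours):

```python
import numpy as np

def view_matrix(R, t):
    """4x4 world-to-camera matrix built from the stored rotation and displacement."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

def camera_center(R, t):
    """Position of the real camera in BIM world coordinates: C = -R^T t."""
    return -R.T @ t
```

Rendering the BIM model with this view matrix (and a projection matrix built from the stored internal parameters) makes the rendered picture coincide with the video picture.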
S7: the AR environment probe system interacts with the superimposed picture through an interface, as follows:
capturing the superimposed picture with an environment detector of the AR environment probe system;
organizing the superimposed picture into an environmental texture;
rendering with the environmental texture so that the superimposed picture matches the real-world environment;
and interacting with the superimposed picture through the AR environment probe system.
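The engine-side capture call is not named in the patent; as a stand-in, the "organize the superimposed picture into an environmental texture" step can be sketched as a simple downsampling of the composited frame (the function name and texture size are our assumptions):

```python
import numpy as np

def frame_to_env_texture(frame, size=64):
    """Downsample the composited (video + BIM) frame into a square texture
    that a renderer could use as an environment map for reflections/lighting."""
    h, w = frame.shape[:2]
    ys = np.linspace(0, h - 1, size).astype(int)   # evenly spaced source rows
    xs = np.linspace(0, w - 1, size).astype(int)   # evenly spaced source columns
    return frame[np.ix_(ys, xs)]
```

A real engine would build a cube map from several such captures and feed it to its reflection-probe machinery.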
The method for realizing a digital twin by utilizing a network camera based on BIM technology provided by the invention can be extended to many application scenarios, for example:
a. Clicking an object in the picture actually clicks the hidden superimposed model, so the component information held in the superimposed model can be retrieved and displayed in the picture.
b. Concealed-works display: the model inside a wall is shown by cutting a hole or hiding parts of the model, so that things invisible in the real picture can be inspected.
c. Construction-schedule review: in construction engineering, the completion status of the schedule is reflected by the changes on site and the gap to the finished model; the completion steps of complex building sections can also be reviewed.
In summary, compared with the prior art, the invention has the following advantages:
1) The position of the real camera in the BIM coordinate space is obtained by collecting corresponding points between the camera image and BIM model space reference points.
2) The BIM three-dimensional scene is aligned with the camera picture in visual space, enriching the digital information contained in the camera picture.
3) Human-machine interaction is added, achieving a digital twin effect.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit it in any way. Any equivalent substitution or modification that a person skilled in the art makes to the disclosed technical solution and technical content, without departing from the scope of the technical solution of the invention, remains within the protection scope of the invention.
Claims (5)
1. A method for realizing a digital twin by utilizing a network camera based on BIM technology, characterized by comprising the following steps:
calibrating a camera to determine camera internal parameters;
removing distortion of a camera picture;
selecting at least 5 pairs of two-dimensional points and three-dimensional points which are in one-to-one correspondence to obtain a two-dimensional point set and a three-dimensional point set;
calculating a rotation matrix and a displacement matrix of the camera relative to a BIM model space coordinate system by adopting a SolvePnP function according to the camera internal parameters, the two-dimensional point set and the three-dimensional point set;
storing camera internal parameters and camera external parameters, wherein the camera external parameters comprise the camera ID, project, scene ID, the rotation matrix and the displacement matrix;
superposing and aligning the BIM model and the camera video picture according to the camera internal parameters and the camera external parameters;
an AR environment probe system interacts with the superimposed picture through an interface.
2. The method for realizing digital twin by using a network camera based on BIM technology as claimed in claim 1, wherein the camera calibration is performed by adopting a single plane checkerboard camera calibration method.
3. The method for implementing digital twin by using a webcam based on the BIM technology as claimed in claim 1, wherein the method for removing distortion of the camera picture is as follows:
corresponding each pixel on the image after distortion correction to the pixel position on the image before distortion removal;
and obtaining the pixel value of each pixel by bilinear interpolation calculation.
4. The method for implementing digital twin by using a webcam based on the BIM technique as claimed in claim 1, wherein the manner of superimposing and aligning the BIM model and the camera video frame is as follows:
placing the video picture of the camera to a scene space position through the rotation matrix and the displacement matrix;
and overlapping the BIM model with the camera video picture after rendering the BIM model, so that the video picture is overlapped with the picture of the BIM model.
5. The method for implementing digital twin by using a webcam based on the BIM technique as claimed in claim 1, wherein the AR environment probe system interacts with the overlay picture through an interface as follows:
capturing a superimposed image by an environment detector of the AR environment probe system;
and organizing the superimposed picture into an environmental texture;
rendering using the environmental texture to match the overlay to a real world environment;
and interacting with the superimposed picture through the AR environment probe system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311597333.2A CN117635728A (en) | 2023-11-27 | 2023-11-27 | Method for realizing digital twin by utilizing network camera based on BIM technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117635728A true CN117635728A (en) | 2024-03-01 |
Family
ID=90022794
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||