CN118172488A - Three-dimensional reconstruction method based on low texture region

Three-dimensional reconstruction method based on low texture region

Info

Publication number
CN118172488A
CN118172488A
Authority
CN
China
Prior art keywords: low texture, texture region, image, method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410367637.8A
Other languages
Chinese (zh)
Inventor
何明耘
杨久玲
匡平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202410367637.8A
Publication of CN118172488A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a three-dimensional reconstruction method based on low texture regions, which comprises the following steps: performing denoising preprocessing on the image; identifying low texture regions of the image by means of local contrast analysis; performing stereo matching on the identified non-low-texture regions to determine the matching relation of corresponding points between images; performing geometric plane fitting on the identified low-texture regions to obtain planes that fit the low-texture regions; and combining the result of stereo matching with the result of geometric plane fitting to realize the mapping from pixels to space points and construct a three-dimensional model of the whole scene. According to the invention, the reconstruction effect of PatchMatch Stereo in low-texture regions is improved by simple geometric plane fitting, which raises the reconstruction precision of low-texture regions: the geometric plane fit can infer the planar structure of a low-texture region by analyzing the geometric information of its neighboring regions, thereby improving the reconstruction accuracy.

Description

Three-dimensional reconstruction method based on low texture region
Technical Field
The invention relates to the field of three-dimensional reconstruction, in particular to a three-dimensional reconstruction method based on a low texture region.
Background
Three-dimensional reconstruction is a technique that creates a three-dimensional model of an object or scene by analyzing two-dimensional images or scan data. This process is widely used in a number of fields including computer vision, medical imaging, robotics, game development and movie production.
In performing three-dimensional reconstruction, two-dimensional image or scan data about an object or scene is typically first collected; the data may come from different angles and positions to ensure that sufficient information is obtained to generate an accurate three-dimensional model. These data are then processed by specialized algorithms that identify and match feature points between different images to infer the three-dimensional structure of the object or scene. The end result is a digitized three-dimensional model that can be used for a variety of applications such as virtual reality experiences, visualization of architectural designs, digital preservation of historical relics, and even previewing of complex operations. With the progress of technology, three-dimensional reconstruction has become more accurate and efficient, opening up new research and application fields.
PatchMatch Stereo is a stereo matching technique designed specifically for handling slanted surfaces. It addresses one of the main limitations of conventional approaches: the assumption that all pixels within the support window share the same disparity, which does not hold on slanted surfaces. To overcome this problem, PatchMatch Stereo estimates a separate 3D plane for each pixel and projects the support region onto this plane, thereby expressing slanted surfaces more accurately. The challenge with this approach is to determine the optimal 3D plane for each pixel. PatchMatch Stereo finds an approximate nearest-neighbor plane by extending the PatchMatch algorithm, and also introduces view propagation, temporal propagation and adaptive support weights to improve the results at disparity boundaries. The technique can effectively handle occlusions and large texture-less areas, achieves high-precision reconstruction of slanted surfaces, and reaches sub-pixel precision. PatchMatch Stereo shows excellent performance among stereo matching techniques, especially among local methods.
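For orientation, the slanted-plane idea of PatchMatch Stereo can be illustrated with a minimal NumPy sketch (not part of the claimed method): each pixel carries a plane hypothesis (a, b, c), the disparity that hypothesis induces at a window pixel (x, y) is a·x + b·y + c, and the matching cost aggregates intensity differences over the support window warped by those disparities. The window radius and the use of absolute differences are illustrative choices only.

    import numpy as np

    def plane_disparity(plane, x, y):
        # A plane hypothesis (a, b, c) induces disparity d = a*x + b*y + c at pixel (x, y).
        a, b, c = plane
        return a * x + b * y + c

    def window_cost(left, right, px, py, plane, radius=3):
        # Aggregate absolute intensity differences over the support window of (px, py),
        # warping every window pixel by the disparity its plane hypothesis induces.
        h, w = left.shape
        cost = 0.0
        for qy in range(max(0, py - radius), min(h, py + radius + 1)):
            for qx in range(max(0, px - radius), min(w, px + radius + 1)):
                d = plane_disparity(plane, qx, qy)
                rx = int(round(qx - d))  # corresponding column in the right image
                if 0 <= rx < w:
                    cost += abs(float(left[qy, qx]) - float(right[qy, rx]))
        return cost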
Although PatchMatch Stereo performs well at handling slanted surfaces and improving accuracy at disparity boundaries, it has clear shortcomings when reconstructing low texture regions. Because low texture regions lack distinctive visual features, they challenge the accuracy of the matching algorithm. PatchMatch Stereo relies on visual similarity between pixels to infer a 3D plane, but where texture is sparse or missing this approach struggles to estimate disparity accurately. In these regions the algorithm may fail to reconstruct the 3D structure correctly, producing discontinuities or reduced accuracy in the reconstruction result. Although PatchMatch Stereo partially alleviates this problem by introducing adaptive support weights and multi-view information, reconstruction quality in texture-poor environments remains a challenge.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a three-dimensional reconstruction method based on a low texture region.
The aim of the invention is realized by the following technical scheme:
In a first aspect of the present invention, a three-dimensional reconstruction method based on low texture regions is provided, comprising the following steps:
performing denoising preprocessing on the image;
identifying low texture regions of the image by means of local contrast analysis;
performing stereo matching on the identified non-low-texture regions to determine the matching relation of corresponding points between images;
performing geometric plane fitting on the identified low-texture regions to obtain planes that fit the low-texture regions;
and combining the result of stereo matching with the result of geometric plane fitting to realize the mapping from pixels to space points and construct a three-dimensional model of the whole scene.
Further, the denoising preprocessing of the image includes:
performing adaptive mean filtering on the image, adjusting pixel values by calculating the average intensity of the local area of the image;
performing adaptive median filtering on the image, adjusting pixel values by calculating the median within a pixel neighborhood;
performing adaptive weighted filtering on the image, combining the weighted average of surrounding pixels and adjusting pixel values based on the similarity or spatial proximity between pixels.
Further, the identifying of low texture regions of the image by means of local contrast analysis includes:
calculating the gray value difference between each pixel and the other pixels in its neighborhood window to obtain the local contrast;
and performing statistical analysis on the calculated local contrast, a region being identified as a low texture region when a first proportion of its local contrast values is smaller than a first threshold.
Further, the method further comprises the following step after identifying the low texture regions of the image:
analyzing the neighboring areas of the low texture regions, including color and depth feature information, to further confirm the boundaries of the low texture regions.
Further, the analyzing of the neighboring areas of the low texture regions, including color and depth feature information, to further confirm the boundaries of the low texture regions includes:
defining the neighboring area of each pixel in a low texture region;
calculating the color gradient and depth gradient of the pixels in the neighboring area;
and analyzing the calculated color gradients and depth gradients to further confirm the boundary of the low texture region.
Further, the stereo matching of the identified non-low-texture regions to determine the matching relation of corresponding points between images includes:
for corresponding positions in the two images, selecting matching blocks B1 and B2;
for the selected matching blocks B1 and B2, computing the pixel differences;
and by minimizing the pixel differences, determining the best matching point between the two images.
Further, the geometric plane fitting of the identified low-texture regions to obtain planes that fit the low-texture regions includes:
calculating a plane equation by the least squares method, i.e., solving for the parameters A, B, C and D in Ax + By + Cz + D = 0, where x, y and z denote the three-dimensional coordinates of the collected low texture region data points, thereby obtaining the plane that best fits the low texture region data points.
Further, the value of parameter C is preset to -1.
Further, the calculated plane equation is used to fit the original data points and the fitting effect is checked; the quality of the fit is measured by the root mean square error (RMSE).
Further, the combining of the stereo matching result and the geometric plane fitting result to realize the mapping from pixels to space points and build a three-dimensional model of the whole scene includes:
obtaining a depth map of the whole scene through stereo matching, and obtaining the plane of each low texture region, i.e., the average depth value represented by that plane;
in the depth map of the whole scene, for the pixels of a low texture region, comparing and fusing the depth information generated by stereo matching with the geometric plane fitting result;
if stereo matching provides depth values in the low texture region but these values deviate significantly from the depth of the geometric plane fit, correcting or replacing the values with the plane-fit depth to ensure depth continuity and consistency; and in areas without stereo-matching depth information, filling the blanks directly with the depth values of the geometric plane fit.
The beneficial effects of the invention are as follows:
In an exemplary embodiment of the present invention, a simple geometric plane fit is used to improve the reconstruction effect of PatchMatch Stereo in low-texture regions, which increases the reconstruction accuracy of those regions: in low texture regions, conventional PatchMatch Stereo may struggle to estimate disparity accurately, whereas geometric plane fitting can infer the planar structure of such regions by analyzing the geometric information of their neighboring regions, thereby improving the accuracy of the reconstruction.
Drawings
Fig. 1 is a flowchart of a three-dimensional reconstruction method based on a low texture region according to an exemplary embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without making any inventive effort fall within the scope of the invention.
In the description of the present invention, it should be noted that directions or positional relationships indicated by terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" are based on the drawings and are used merely for convenience of describing the present invention and simplifying the description; they do not indicate or imply that the apparatus or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
In addition, the technical features of the different embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
Referring to fig. 1, which shows a flowchart of a three-dimensional reconstruction method based on low texture regions according to an exemplary embodiment of the present invention, the method includes the following steps:
performing denoising preprocessing on the image;
identifying low texture regions of the image by means of local contrast analysis;
performing stereo matching on the identified non-low-texture regions to determine the matching relation of corresponding points between images;
performing geometric plane fitting on the identified low-texture regions to obtain planes that fit the low-texture regions;
and combining the result of stereo matching with the result of geometric plane fitting to realize the mapping from pixels to space points and construct a three-dimensional model of the whole scene.
Specifically, the present exemplary embodiment addresses the technical problem that, in the three-dimensional reconstruction process of the prior art (PatchMatch Stereo), low-texture regions are difficult to reconstruct accurately due to a lack of sufficient visual features. The invention therefore provides an optimization scheme that combines a conventional stereo matching algorithm with a geometric plane fitting technique, so as to improve the reconstruction quality and efficiency of low texture regions:
First, denoising preprocessing is performed on the image, improving image quality without over-smoothing and providing more accurate input data for three-dimensional reconstruction. Then, low texture regions of the image are identified by means of local contrast analysis; such regions require special treatment in three-dimensional reconstruction to ensure the accuracy and quality of the result. Next, stereo matching is performed on the identified non-low-texture regions to determine the matching relation of corresponding points between images, the best matching point being found by comparing the similarity of corresponding blocks in different images. Then, geometric plane fitting is performed on the identified low-texture regions to obtain planes that fit them, i.e., the parameters of a plane equation are accurately estimated so as to obtain the plane that best fits the data points of each low texture region. Finally, the subsequent steps of PatchMatch Stereo are continued through the mapping from pixels to spatial points, constructing the three-dimensional model.
Thus, the solution of using simple geometric plane fits to improve the reconstruction effect of PatchMatch Stereo in low-texture regions has the following advantages:
(1) Improved reconstruction accuracy in low texture regions: in low texture regions, conventional PatchMatch Stereo may struggle to estimate disparity accurately, while geometric plane fitting can infer the planar structure of such regions by analyzing the geometric information of neighboring regions, thereby improving the accuracy of the reconstruction;
(2) A simplified computation process: for surfaces that are smooth and geometrically distinct, a geometric fitting method simplifies the computation because it reduces the reliance on complex texture analysis;
(3) Improved quality of the overall model: by combining the geometric fit with the PatchMatch Stereo output, better consistency and smoother transitions can be achieved in the overall 3D model, especially at object edges and plane interfaces;
(4) Strong adaptability: the fitting strategy can be flexibly adjusted according to different scenes and object features;
(5) Better handling of specific surfaces: for example, geometric fitting may be more effective than traditional texture matching methods when dealing with smooth, reflective, or other special surfaces.
More preferably, in an exemplary embodiment, the denoising preprocessing of the image includes:
performing adaptive mean filtering on the image, adjusting pixel values by calculating the average intensity of the local area of the image;
performing adaptive median filtering on the image, adjusting pixel values by calculating the median within a pixel neighborhood;
performing adaptive weighted filtering on the image, combining the weighted average of surrounding pixels and adjusting pixel values based on the similarity or spatial proximity between pixels.
Specifically, in the present exemplary embodiment, adaptive mean filtering, adaptive median filtering and adaptive weighted filtering are employed for image preprocessing in order to optimize the three-dimensional reconstruction result. Wherein:
(1) Adaptive mean filtering adjusts pixel values by calculating the average intensity of the local area of the image, effectively smoothing the image and reducing noise. The formula is:
f(x,y) = f(x,y) + k·(m_L - f(x,y))
where f(x,y) is the value of the current pixel, m_L is the average intensity of the pixel values in the local area, and k is an adjustment factor.
(2) Adaptive median filtering focuses on removing salt-and-pepper noise while preserving image edges, by calculating the median within the pixel neighborhood. This process includes choosing an initial window size, calculating the median Z_med, minimum Z_min and maximum Z_max within the window, and adjusting the pixel value based on these values.
(3) Adaptive weighted filtering combines the weighted average of surrounding pixels, adjusting each pixel value according to the similarity or spatial proximity between pixels, which reduces noise while maintaining the continuity of the image structure. The filter output is
f(x,y) = Σ_{(i,j)∈W} ω(i,j)·g(i,j) / Σ_{(i,j)∈W} ω(i,j)
where W is the filter window, g(i,j) is the value of a pixel within the window, and ω(i,j) is its weight.
The common goal of these methods is to improve image quality without over-smoothing the image, providing more accurate input data for three-dimensional reconstruction.
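For illustration only, a minimal NumPy sketch of the adaptive mean filter and the adaptive weighted filter described above is given below; the window radius, the adjustment factor k and the range parameter sigma_r are placeholder values not fixed by this embodiment, and the adaptive median step is omitted for brevity.

    import numpy as np

    def adaptive_mean_filter(img, k=0.5, radius=2):
        # f(x,y) <- f(x,y) + k * (m_L - f(x,y)), with m_L the mean of the local window.
        img = img.astype(np.float64)
        out = img.copy()
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                m_local = img[y0:y1, x0:x1].mean()
                out[y, x] = img[y, x] + k * (m_local - img[y, x])
        return out

    def adaptive_weighted_filter(img, radius=2, sigma_r=10.0):
        # Weighted average of window pixels; weights fall off with intensity difference,
        # so edges are preserved while flat regions are smoothed.
        img = img.astype(np.float64)
        out = np.empty_like(img)
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                window = img[y0:y1, x0:x1]
                weights = np.exp(-((window - img[y, x]) ** 2) / (2.0 * sigma_r ** 2))
                out[y, x] = (weights * window).sum() / weights.sum()
        return out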
More preferably, in an exemplary embodiment, the identifying of low texture regions of the image by means of local contrast analysis includes:
calculating the gray value difference between each pixel and the other pixels in its neighborhood window to obtain the local contrast;
and performing statistical analysis on the calculated local contrast, a region being identified as a low texture region when a first proportion of its local contrast values is smaller than a first threshold.
Specifically, in the present exemplary embodiment, an approach based on local contrast analysis is proposed to identify low texture regions. It identifies texture information by analyzing the gray value differences between each pixel in the image and the other pixels in its neighborhood.
The implementation steps are as follows:
(1) Selecting a local neighborhood: for each pixel (x, y) in the image, a neighborhood window W centered on the pixel is defined.
(2) Calculating the local contrast: for each pixel (x, y), the local contrast within its neighborhood is computed with the variance formula
σ²(x,y) = (1/|W|)·Σ_{(i,j)∈W} (L(i,j) - μ(x,y))²
where W is the neighborhood window centered on pixel (x, y), L(i,j) is the gray value of a pixel within the window, μ(x,y) is the average gray value within W, and σ²(x,y) is the variance, representing the local contrast.
(3) Analyzing the local contrast values: the calculated local contrast values are analyzed. When the local contrast of a region is generally low (i.e., when a first proportion of the region's local contrast values is smaller than a first threshold), the region is identified as a low texture region.
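A minimal sketch of this local-contrast test follows; the window radius, the contrast threshold (the "first threshold"), the block size and the proportion (the "first proportion") are illustrative placeholders, since their values are not fixed here.

    import numpy as np

    def local_contrast_map(gray, radius=3):
        # sigma^2(x, y): variance of gray values in the window centered on each pixel.
        gray = gray.astype(np.float64)
        h, w = gray.shape
        var = np.empty_like(gray)
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                var[y, x] = gray[y0:y1, x0:x1].var()
        return var

    def low_texture_mask(gray, contrast_threshold=25.0, region_size=16, ratio=0.8):
        # A block is flagged as low texture when at least `ratio` of its pixels
        # have local contrast below `contrast_threshold`.
        var = local_contrast_map(gray)
        h, w = gray.shape
        mask = np.zeros((h, w), dtype=bool)
        for y0 in range(0, h, region_size):
            for x0 in range(0, w, region_size):
                block = var[y0:y0 + region_size, x0:x0 + region_size]
                if (block < contrast_threshold).mean() >= ratio:
                    mask[y0:y0 + region_size, x0:x0 + region_size] = True
        return mask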
More preferably, in an exemplary embodiment, the method further comprises the following step after identifying the low texture regions of the image:
analyzing the neighboring areas of the low texture regions, including color and depth feature information, to further confirm the boundaries of the low texture regions.
Specifically, in the present exemplary embodiment, an approach based on neighboring-data analysis is proposed to further determine the extent of a low texture region. The texture-rich areas surrounding a low texture region generally provide useful information about the plane's orientation and position. This process involves analyzing the color and depth information of the areas adjacent to the low texture regions in the image.
More preferably, in an exemplary embodiment, the analyzing of the neighboring areas of the low texture regions, including color and depth feature information, to further confirm the boundaries of the low texture regions includes:
defining the neighboring area of each pixel in a low texture region;
calculating the color gradient and depth gradient of the pixels in the neighboring area;
and analyzing the calculated color gradients and depth gradients to further confirm the boundary of the low texture region.
The implementation steps are as follows:
(1) Selecting the neighboring area: for each pixel (x, y) of a low texture region, a neighboring area is defined, including its color and depth information.
(2) Calculating the color gradient and depth gradient: the color intensity, depth variation and other relevant features of the pixels within the neighboring area are evaluated. To this end, the color gradient and the depth gradient are calculated as
∇C(x,y) = sqrt(ΔC_x² + ΔC_y²),  ∇D(x,y) = sqrt(ΔD_x² + ΔD_y²)
where ΔC_x and ΔC_y are the changes in color intensity in the horizontal and vertical directions, and ΔD_x and ΔD_y are the changes in depth value in the horizontal and vertical directions.
(3) Analyzing the color and depth information: the calculated color gradients and depth gradients are analyzed, evaluating the color intensity, depth variation and other relevant features of the pixels in the neighboring area, so as to confirm the boundary of the low texture region.
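The gradient computation and the boundary check can be sketched as follows; simple finite differences stand in for ΔC and ΔD, and the two thresholds are illustrative placeholders.

    import numpy as np

    def gradient_magnitudes(color, depth):
        # color: H x W x 3 array, depth: H x W array. Horizontal/vertical finite
        # differences give delta_C and delta_D; their Euclidean norm is the magnitude.
        color = color.astype(np.float64)
        depth = depth.astype(np.float64)
        dCx = np.zeros(color.shape[:2]); dCy = np.zeros(color.shape[:2])
        dCx[:, 1:] = np.linalg.norm(color[:, 1:] - color[:, :-1], axis=2)
        dCy[1:, :] = np.linalg.norm(color[1:, :] - color[:-1, :], axis=2)
        dDx = np.zeros_like(depth); dDy = np.zeros_like(depth)
        dDx[:, 1:] = np.abs(depth[:, 1:] - depth[:, :-1])
        dDy[1:, :] = np.abs(depth[1:, :] - depth[:-1, :])
        color_grad = np.sqrt(dCx ** 2 + dCy ** 2)
        depth_grad = np.sqrt(dDx ** 2 + dDy ** 2)
        return color_grad, depth_grad

    def refine_low_texture_mask(mask, color_grad, depth_grad, c_thr=30.0, d_thr=0.05):
        # Pixels of the low-texture mask that lie on a strong color or depth edge are
        # treated as boundary pixels and excluded from the region interior.
        edge = (color_grad > c_thr) | (depth_grad > d_thr)
        return mask & ~edge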
More preferably, in an exemplary embodiment, the stereo matching of the identified non-low-texture regions to determine the matching relation of corresponding points between images includes:
for corresponding positions in the two images, selecting matching blocks B1 and B2;
for the selected matching blocks B1 and B2, computing the pixel differences;
and by minimizing the pixel differences, determining the best matching point between the two images.
Specifically, in the present exemplary embodiment, the matching method of PatchMatch Stereo is extended: the similarity of corresponding blocks in different images is compared to find the best matching point. The implementation steps are as follows:
(1) Selecting matching blocks: for corresponding positions in the two images, matching blocks B1 and B2 are selected.
(2) Calculating pixel differences: for the selected matching blocks B1 and B2, the pixel differences are calculated.
The similarity between the two blocks is evaluated using the following formula:
S(B1, B2) = Σ_{(x,y)∈B1} ω(x,y)·(I1(x,y) - I2(x',y'))²
where I1 and I2 are the corresponding pixel values in the two images, (x, y) and (x', y') are the corresponding pixel locations in the matching blocks, and ω(x,y) is a weight function that can be adjusted as needed to emphasize particular regions or features.
(3) Determining the best matching point: by minimizing this pixel difference, the best matching point between the two images can be determined; the matching point that minimizes S(B1, B2) marks the most similar position between the two blocks.
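A minimal sketch of this weighted block comparison follows; squared differences are used as one common choice of pixel difference, uniform weights stand in for ω(x, y), and the block size and disparity search range are placeholders. The reference pixel is assumed to lie at least half a block away from the image border.

    import numpy as np

    def block_dissimilarity(block1, block2, weights=None):
        # S(B1, B2) = sum of w(x, y) * (I1(x, y) - I2(x', y'))^2 over the block.
        b1 = block1.astype(np.float64)
        b2 = block2.astype(np.float64)
        if weights is None:
            weights = np.ones_like(b1)
        return float((weights * (b1 - b2) ** 2).sum())

    def best_match_along_row(left, right, x, y, block=5, max_disp=64):
        # Slide the block along the same row of the right image and keep the
        # disparity that minimizes the weighted squared difference.
        r = block // 2
        ref = left[y - r:y + r + 1, x - r:x + r + 1]
        best_d, best_cost = 0, np.inf
        for d in range(max_disp + 1):
            xr = x - d
            if xr - r < 0:
                break
            cand = right[y - r:y + r + 1, xr - r:xr + r + 1]
            cost = block_dissimilarity(ref, cand)
            if cost < best_cost:
                best_cost, best_d = cost, d
        return best_d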
More preferably, in an exemplary embodiment, the geometric plane fitting of the identified low-texture regions to obtain planes that fit the low-texture regions includes:
calculating a plane equation by the least squares method, i.e., solving for the parameters A, B, C and D in Ax + By + Cz + D = 0, where x, y and z denote the three-dimensional coordinates of the collected low texture region data points, thereby obtaining the plane that best fits the low texture region data points.
Specifically, in the present exemplary embodiment, by applying the least squares method, we can accurately estimate the parameters of the plane equation, thereby obtaining a plane that best fits the low texture region data points.
The specific flow of the steps is as follows:
(1) Collecting data points: a series of data points for a low texture region are collected from three-dimensional space. Each data point has an (x, y, z) coordinate.
(2) Constructing the design matrix M and the target vector b: the design matrix M is an N×3 matrix, where N is the number of data points. For each data point (x_i, y_i, z_i), a row of the form (x_i, y_i, 1) is filled into M. The target vector b is an N×1 vector whose elements are the z_i values of the corresponding data points.
(3) Calculating the plane parameters: the parameter vector p = [A, B, D]^T is calculated with the least squares formula
p = (M^T·M)^(-1)·M^T·b
First, the product of the transpose M^T of the design matrix and the design matrix M is computed. Then the inverse of this product is computed. Finally, this inverse is multiplied by the product of M^T and b to obtain the parameter vector p.
(4) Interpreting the result: the solved parameters A, B and D represent the best-fit plane Ax + By + D = z. In the standard form Ax + By + Cz + D = 0, since C = -1 is assumed (the preferred exemplary embodiment), the result can be rewritten as Ax + By - z + D = 0.
(5) Fitting the original data points with the calculated plane equation and checking the fitting effect; the quality of the fit is measured by the root mean square error (RMSE) (preferred exemplary embodiment).
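A compact sketch of steps (1)-(5) is given below; numpy.linalg.lstsq is used here as a numerically safer equivalent of the normal-equation formula p = (M^T·M)^(-1)·M^T·b. For a nearly fronto-parallel surface the fit would yield A ≈ B ≈ 0 and D close to the surface depth.

    import numpy as np

    def fit_plane(points):
        # points: N x 3 array of (x, y, z) samples from one low-texture region.
        # Solve z = A*x + B*y + D in the least-squares sense, i.e. the standard
        # form A*x + B*y + C*z + D = 0 with C fixed to -1.
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        M = np.column_stack([x, y, np.ones_like(x)])   # design matrix, N x 3
        p, *_ = np.linalg.lstsq(M, z, rcond=None)      # p = [A, B, D]
        A, B, D = p
        return A, B, D

    def plane_rmse(points, A, B, D):
        # Root-mean-square error between the fitted plane and the original samples.
        z_hat = A * points[:, 0] + B * points[:, 1] + D
        return float(np.sqrt(np.mean((points[:, 2] - z_hat) ** 2)))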
More preferably, in an exemplary embodiment, the combining of the stereo matching result and the geometric plane fitting result to realize the mapping from pixels to space points and construct a three-dimensional model of the whole scene includes:
After stereo matching, a depth map of the whole scene is obtained. This depth map tends to be accurate in texture-rich regions but may be inaccurate or missing in low texture regions. From the geometric plane fitting result, the three-dimensional coordinates of the low texture region data points and the geometric plane equations are obtained, each plane representing the average depth of all points in its low texture region. In the depth map of the whole scene, for the pixels of a low texture region, the depth information generated by stereo matching is compared with and fused against the geometric plane fitting result: if stereo matching provides depth values in the low texture region but these values deviate significantly from the depth of the geometric plane fit, the plane-fit depth is used to correct or replace them, ensuring depth continuity and consistency; in areas without stereo-matching depth information, the blanks are filled directly with the depth values of the geometric plane fit.
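A minimal sketch of this fusion step follows, assuming for illustration that the fitted plane is expressed directly in image coordinates as depth z = A·x + B·y + D and that missing stereo depths are marked with NaN; the deviation threshold is a placeholder.

    import numpy as np

    def fuse_depth(stereo_depth, low_texture_mask, A, B, D, deviation_thr=0.1):
        # stereo_depth: H x W float depth map from stereo matching (NaN = no estimate).
        # Inside the low-texture mask, depths that deviate strongly from the fitted
        # plane are replaced by the plane depth; missing depths are filled directly.
        h, w = stereo_depth.shape
        xs, ys = np.meshgrid(np.arange(w), np.arange(h))
        plane_depth = A * xs + B * ys + D
        fused = stereo_depth.copy()
        missing = low_texture_mask & np.isnan(stereo_depth)
        deviating = (low_texture_mask & ~np.isnan(stereo_depth)
                     & (np.abs(stereo_depth - plane_depth) > deviation_thr))
        fused[missing | deviating] = plane_depth[missing | deviating]
        return fused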
It is apparent that the above examples are given by way of illustration only and not by way of limitation; on the basis of the above description, those of ordinary skill in the art may make variations or modifications in other forms. It is neither necessary nor possible to exhaust all embodiments here, and obvious variations or modifications derived therefrom still fall within the scope of the present invention.

Claims (9)

1. A three-dimensional reconstruction method based on low texture regions, characterized by comprising the following steps:
performing denoising preprocessing on the image;
identifying low texture regions of the image by means of local contrast analysis;
performing stereo matching on the identified non-low-texture regions to determine the matching relation of corresponding points between images;
performing geometric plane fitting on the identified low-texture regions to obtain planes that fit the low-texture regions;
and combining the result of stereo matching with the result of geometric plane fitting to realize the mapping from pixels to space points and construct a three-dimensional model of the whole scene.
2. The three-dimensional reconstruction method based on low texture regions according to claim 1, wherein the denoising preprocessing of the image comprises:
performing adaptive mean filtering on the image, adjusting pixel values by calculating the average intensity of the local area of the image;
performing adaptive median filtering on the image, adjusting pixel values by calculating the median within a pixel neighborhood;
performing adaptive weighted filtering on the image, combining the weighted average of surrounding pixels and adjusting pixel values based on the similarity or spatial proximity between pixels.
3. The three-dimensional reconstruction method based on low texture regions according to claim 1, wherein the identifying of low texture regions of the image by means of local contrast analysis comprises:
calculating the gray value difference between each pixel and the other pixels in its neighborhood window to obtain the local contrast;
and performing statistical analysis on the calculated local contrast, wherein a region is identified as a low texture region when a first proportion of its local contrast values is smaller than a first threshold.
4. The three-dimensional reconstruction method based on low texture regions according to claim 1, further comprising, after identifying the low texture regions of the image, the step of:
analyzing the neighboring areas of the low texture regions, including color and depth feature information, to further confirm the boundaries of the low texture regions.
5. The three-dimensional reconstruction method based on low texture regions according to claim 4, wherein the analyzing of the neighboring areas of the low texture regions, including color and depth feature information, to further confirm the boundaries of the low texture regions comprises:
defining the neighboring area of each pixel in a low texture region;
calculating the color gradient and depth gradient of the pixels in the neighboring area;
and analyzing the calculated color gradients and depth gradients to further confirm the boundary of the low texture region.
6. The three-dimensional reconstruction method based on low texture regions according to claim 1, wherein the stereo matching of the identified non-low-texture regions to determine the matching relation of corresponding points between images comprises:
for corresponding positions in the two images, selecting matching blocks B1 and B2;
for the selected matching blocks B1 and B2, computing the pixel differences;
and by minimizing the pixel differences, determining the best matching point between the two images.
7. The three-dimensional reconstruction method based on low texture regions according to claim 1, wherein the geometric plane fitting of the identified low-texture regions to obtain planes that fit the low-texture regions comprises:
calculating a plane equation by the least squares method, i.e., solving for the parameters A, B, C and D in Ax + By + Cz + D = 0, where x, y and z denote the three-dimensional coordinates of the collected low texture region data points, thereby obtaining the plane that best fits the low texture region data points.
8. The three-dimensional reconstruction method based on low texture regions according to claim 7, wherein the value of parameter C is preset to -1.
9. The three-dimensional reconstruction method based on low texture regions according to claim 7, wherein the calculated plane equation is used to fit the original data points and the fitting effect is checked; the quality of the fit is measured by the root mean square error (RMSE).
CN202410367637.8A 2024-03-28 2024-03-28 Three-dimensional reconstruction method based on low texture region Pending CN118172488A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410367637.8A CN118172488A (en) 2024-03-28 2024-03-28 Three-dimensional reconstruction method based on low texture region

Publications (1)

Publication Number Publication Date
CN118172488A true CN118172488A (en) 2024-06-11

Family

ID=91359856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410367637.8A Pending CN118172488A (en) 2024-03-28 2024-03-28 Three-dimensional reconstruction method based on low texture region

Country Status (1)

Country Link
CN (1) CN118172488A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination