CN117392319A - Rapid three-dimensional contour model construction method based on monocular vision

Rapid three-dimensional contour model construction method based on monocular vision

Info

Publication number
CN117392319A
Authority
CN
China
Prior art keywords
dimensional
angle
contour model
image
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311402928.8A
Other languages
Chinese (zh)
Inventor
李志勇
周舒腾
梁灏
王咏韬
李星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University
Priority to CN202311402928.8A
Publication of CN117392319A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2219/00 - Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 - Indexing scheme for editing of 3D models
    • G06T2219/2016 - Rotation, translation, scaling
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a rapid three-dimensional contour model construction method based on monocular vision, comprising two parts: rough three-dimensional contour model construction based on local distribution features, and angle mapping model optimization. The rough-model construction stage stitches and fuses a limited number of two-dimensional plane images of a three-dimensional target object, acquired by a monocular camera from a limited set of viewing angles, to generate a rough three-dimensional contour model of the target object that is closed over a 360-degree viewing angle. The angle mapping model optimization stage then optimizes the generated rough model to produce a more coherent three-dimensional contour model of the target object, likewise closed over the 360-degree viewing angle.

Description

Rapid three-dimensional contour model construction method based on monocular vision
Technical Field
The invention belongs to the technical field of image processing, relates to image matching, image plane segmentation, and image frame interpolation techniques, and is suitable for constructing contour models of three-dimensional objects.
Background
With the development of network technology, online shopping has gradually replaced in-store shopping as the mainstream. During a purchase, the consumer needs as much information about the commodity as possible to judge whether it meets their requirements, while the merchant needs to display the commodity fully to stimulate the consumer's desire to buy. The display mode is therefore an important problem in product promotion. To date, two virtual-reality approaches are used to present a commodity to the user: the three-dimensional model and the panorama. A panorama is easy to produce, but it only lets the consumer observe the interior of a large commodity or its surroundings from a first-person viewpoint; the external appearance of the commodity cannot be viewed freely. A three-dimensional model displays the commodity intuitively, but the modelling process is complex and requires advanced software and equipment.
In daily life, people use portable devices such as cameras and mobile phones to capture electronic images or paper photographs of nearby environments and objects. In some situations a single image cannot cover or clearly present the whole target, and image stitching is then required. In most image stitching methods, transforming an image by a transformation matrix is an unavoidable core step, and in nearly all stitching pipelines the transformation used is a homography. The homography matrix, however, has definite usage constraints. It yields a good stitching result under either of the following two conditions:
1. The stitching targets lie in the same plane, in which case the positions and angles of the two cameras are arbitrary.
2. The stitching targets do not lie in the same plane, but the camera position changes little between the two shots relative to the imaging distance, so the camera can be approximated as only rotating its shooting angle without translating.
If the usage conditions of the homography matrix are not satisfied, a large parallax exists between the two images to be stitched, and stitching with a homography produces ghosting, deformation, distortion, and similar defects; so far no perfect solution to this problem exists. Image stitching is widely applied, chiefly to panorama stitching in the field of virtual reality: a series of photographs taken by a camera through a full 360 degrees of the surroundings is stitched seamlessly, and the stitched image is projected onto the interior of a cylinder, cube, or sphere to simulate the real environment and give the viewer a sense of immersion. The present method uses image stitching to realize view interpolation and to generate a coherent three-dimensional contour model of the target object that is closed over a 360-degree viewing angle.
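For readers unfamiliar with homography-based stitching, the following minimal Python sketch (assuming OpenCV; the ORB detector, match count, and canvas size are illustrative choices, not part of the invention) shows the core step of estimating a homography from matched features and warping one image onto the other:

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    """Warp img2 onto img1's plane via a homography estimated from ORB matches."""
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:50]
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img1.shape[:2]
    # Valid only when the scene is planar or the camera purely rotates.
    canvas = cv2.warpPerspective(img2, H, (w * 2, h))
    canvas[0:h, 0:w] = img1
    return canvas
```

With large parallax, the same code produces exactly the ghosting artifacts described above.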
Disclosure of Invention
The invention provides a rapid three-dimensional contour model construction method based on monocular vision, comprising two parts: rough three-dimensional contour model construction based on local distribution features, and angle mapping model optimization. The rough-model construction stage stitches and fuses a limited number of two-dimensional plane images of a three-dimensional target object, acquired by a monocular camera from a limited set of viewing angles, to generate a rough three-dimensional contour model of the target object closed over a 360-degree viewing angle. The angle mapping model optimization stage then optimizes the generated rough model to produce a more coherent three-dimensional contour model of the target object, likewise closed over the 360-degree viewing angle.
The technical scheme adopted by the invention is as follows:
A rapid three-dimensional contour model construction method based on image stitching comprises two parts: rough three-dimensional contour model construction based on local distribution features, and angle mapping model optimization.
The construction of the rough three-dimensional contour model based on the local distribution characteristics comprises the following steps:
Step A: acquire adjacent, connected two-dimensional plane images covering 360 degrees of the three-dimensional target object with a monocular camera, extract and match the distribution features of the two-dimensional plane images, and form a matching distribution feature set;
Step B: segment the images using the obtained matching distribution feature set, estimate the rotation angle between adjacent two-dimensional plane images, and construct a rough three-dimensional contour model of the three-dimensional target object closed over a 360-degree viewing angle.
The angle mapping model optimization comprises the following steps:
step C: generating an interpolated two-dimensional plane image;
Step D: judging whether the fineness of the three-dimensional contour model meets the requirement.
Drawings
FIG. 1 is a flow chart of the three-dimensional contour model construction of the present invention;
FIG. 2 is a schematic diagram of the rough three-dimensional contour model construction;
FIG. 3 is a schematic diagram of the angle mapping model optimization.
Specific implementation steps
The invention is further described below with reference to the drawings and the specific implementation steps.
The implementation process of the invention is divided into four parts: image acquisition with a monocular camera, rough three-dimensional contour model construction based on distribution features, angle mapping model optimization, and output of the optimized three-dimensional contour model; the flow is shown schematically in FIG. 1.
A schematic diagram of the rough three-dimensional contour model construction based on local distribution features is shown in FIG. 2. The specific processing procedure comprises the following steps:
Step A: collect adjacent, connected two-dimensional plane images covering 360 degrees of the three-dimensional target object, extract their distribution features, and match them to form a matching distribution feature set. The specific processing procedure of step A is as follows:
a-1) acquisition of two-dimensional planar images
The invention collects two types of two-dimensional plane images: (A-1-i) two-dimensional plane image collection for small-size objects, and (A-1-ii) two-dimensional plane image collection for large-size objects;
a-1-i) two-dimensional planar image acquisition of small-sized objects
For small-size objects, shoot front views of every included angle of the target object in clockwise (or anticlockwise) order, continuously covering all viewing angles of the target, and ensure that overlapping regions exist between the collected front views of adjacent included angles;
a-1-ii) two-dimensional planar image acquisition of large-size objects
For large-size objects, where several consecutive included-angle front views and plane front views are needed to cover a single included angle or plane of the object, shoot the included-angle front views and plane front views of the object continuously in clockwise (or anticlockwise) order, and ensure an overlapping region between adjacent collected front views;
a-2) extracting distribution characteristics
The invention extracts distribution features in three steps: (i) abstract the image into color blocks and define the color block screening rules; (ii) extract color block edge points and image contour points, with the contour extraction result and the color block edge point extraction result correcting each other; (iii) extract color block distribution features, color block sub-object distribution features, critical points, and contour distribution features, and connect edge points into lines. The specific steps are as follows:
(A-2-i) image color block abstraction and defining color block screening rules
Image color block abstraction: convert the human-vision RGB image into the machine-vision HSV space, and search directionally for color block seed growing points with a quadtree search. The four corners and the center pixel of each search box are selected as color marking pixels for comparing pixel values. When the scale of a search interval is larger than the minimum scale and several identical color marking pixels exist in the search window, the corresponding interval is searched directionally again; when the values of most color marking pixels match the central color marking pixel, the central pixel is used as a seed point for region growing. Growth finishes when no ungrown pixel of the same color remains around the region; each grown region is a color block marked with a single color, and all non-color-block regions are set to black.
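A minimal sketch of this color-block abstraction, assuming OpenCV and numpy; the quadtree seed search is simplified here to a fixed grid of candidate seeds, and floodFill stands in for the region growing (the tolerance, grid pitch, and minimum area are illustrative parameters):

```python
import cv2
import numpy as np

def extract_color_blocks(bgr, tol=10, min_area=500, grid=16):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)       # machine-vision HSV space
    h, w = hsv.shape[:2]
    labels = np.zeros((h, w), np.int32)              # 0 = non-color-block (black)
    mask = np.zeros((h + 2, w + 2), np.uint8)        # floodFill needs a padded mask
    next_label = 1
    for y in range(0, h, grid):                      # grid of candidate seed points
        for x in range(0, w, grid):
            if labels[y, x]:
                continue
            mask[:] = 0
            # Grow a region of near-uniform HSV color around the seed;
            # FLOODFILL_MASK_ONLY leaves the image itself untouched.
            area, _, _, _ = cv2.floodFill(
                hsv, mask, (x, y), 0, (tol,) * 3, (tol,) * 3,
                cv2.FLOODFILL_MASK_ONLY | cv2.FLOODFILL_FIXED_RANGE)
            if area >= min_area:
                grown = mask[1:-1, 1:-1].astype(bool)
                labels[grown & (labels == 0)] = next_label
                next_label += 1
    return labels
```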
Defining the color block screening rules: following the normal shooting habit of placing the target object at the center of the image, the center point of a color block belonging to the target object is required by default to lie within the central region of half the image width and height, i.e.:
image.cols/4 < center.x < 3*image.cols/4
image.rows/4 < center.y < 3*image.rows/4
where image.cols and image.rows denote the width and height of the image, and center.x and center.y denote the abscissa and ordinate of the color block center point.
Furthermore, the pixels of a target color block should not appear at the edge of the image; the edge region defaults to one eighth of the image width and height on each side:
image.cols/8 < pixel.x < 7*image.cols/8
image.rows/8 < pixel.y < 7*image.rows/8
where pixel.x and pixel.y denote the abscissa and ordinate of an image pixel.
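The two screening rules above translate directly into code; a short sketch (function and variable names are illustrative, following the patent's image.cols/image.rows notation):

```python
def block_passes_screening(center, pixels, cols, rows):
    """center: (x, y) of the block; pixels: iterable of (x, y) block pixels."""
    # Rule 1: the block center must lie in the central half of the image.
    if not (cols / 4 < center[0] < 3 * cols / 4 and
            rows / 4 < center[1] < 3 * rows / 4):
        return False
    # Rule 2: no block pixel may fall in the outer one-eighth border band.
    for x, y in pixels:
        if not (cols / 8 < x < 7 * cols / 8 and rows / 8 < y < 7 * rows / 8):
            return False
    return True
```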
(A-2-ii) Color block edge point and image contour point extraction; the contour extraction result and the color block edge point extraction result are corrected against each other
Color block edge point and image contour point extraction: extract edges with the Sobel operator to obtain a black-and-white edge image, and extract the contours from it. Color block pixels whose eight-neighborhood contains pixels of another color are edge points of the color block; traverse all color blocks and store their edge points. The edge point set of each color block is divided into several point sets by closed-loop judgment: the point set with the most elements is the contour point set of the color block, the other closed sub-sets are contour point sets of sub-objects, and the non-closed sets are texture point sets.
Mutual correction of the contour extraction result and the color block edge point extraction result: because the Sobel edge extraction is noisy and the color block edges carry errors from the color block extraction process, the two point sets are combined by a hit-rate AND operation, and only the common closed point set is kept as the final contour point set. The hit-rate AND operation means that a pixel is kept when it appears as true in both point sets; when a pixel is false in the AND operation but other true points exist in its eight-neighborhood, true or false is decided according to the degree of closure of the point set.
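A minimal sketch of these two extraction steps, assuming OpenCV (the Sobel threshold is an illustrative choice, the eight-neighborhood edge test is implemented with a morphological erosion, and the hit-rate AND correction is omitted):

```python
import cv2
import numpy as np

def sobel_edges(gray, thresh=60):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    return (mag > thresh).astype(np.uint8) * 255     # black-and-white edge image

def color_block_edge_points(labels, label):
    """Pixels of `labels == label` that have a differently-labeled eight-neighbor."""
    region = labels == label
    kernel = np.ones((3, 3), np.uint8)
    eroded = cv2.erode(region.astype(np.uint8), kernel)
    edge = region & ~eroded.astype(bool)             # boundary of the block
    ys, xs = np.nonzero(edge)
    return list(zip(xs, ys))
```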
(A-2-iii) Extracting color block distribution features, color block sub-object distribution features, critical points, and contour distribution features, and connecting edge points into lines
Extracting color block distribution features: a dilation convolution traverses the contour point set of each color block; when pixels of another color appear inside the dilation kernel at a contour point, the color block they belong to is a surrounding color block. The colors and directions of the surrounding color blocks form the descriptor of the color block's center point; this yields the color block distribution feature, which is added to the feature set.
Extracting color block sub-object distribution features: using the contour point set of each color block sub-object, calculate the position of its central pixel relative to the color block, and add it to the feature set.
Extracting critical points: the dilation convolution traverses the color block contour point sets. When multi-color pixels exist in the dilation kernel of a contour point, that point is a critical candidate point, and several adjacent candidate points of the same colors are merged into one critical point. The colors and directions of the color blocks around the critical point, together with its direction relative to other critical points and color block center points, form its descriptor; the critical point is added to the feature set.
Extracting contour distribution features: add straight lines and their connection points to the feature point set, taking the attributes and level_lines of the line segments joined at each connection point as the descriptor; add curve inflection points to the feature point set, taking the attributes of the lines at both ends of the curve containing the inflection point as the descriptor. All contour distribution features are added to the feature set in clockwise connection order.
Connecting edge points into lines to form feature straight lines: compute the gradient of the contour map pixels according to the LSD algorithm and generate a level_line for each pixel. Traverse the level_line of each pixel; neighboring pixels whose level_lines are equal within a threshold are considered to belong to the same straight line. When the pixel level_lines change according to a consistent angular transformation rule, the pixels obeying that rule form a curve.
The gradients in the x-direction and the y-direction are respectively:
g_x(x, y) = [i(x+1, y) + i(x+1, y+1) - i(x, y) - i(x, y+1)] / 2
g_y(x, y) = [i(x, y+1) + i(x+1, y+1) - i(x, y) - i(x+1, y)] / 2
and the level_line angle is:
angle(x, y) = arctan( g_x(x, y) / -g_y(x, y) )
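A numpy sketch of this level_line computation, following the standard 2x2 LSD gradient masks assumed in the reconstruction above:

```python
import numpy as np

def level_line_angles(img):
    i = img.astype(np.float64)
    # 2x2 finite differences (defined on all but the last row and column).
    gx = (i[:-1, 1:] + i[1:, 1:] - i[:-1, :-1] - i[1:, :-1]) / 2.0
    gy = (i[1:, :-1] + i[1:, 1:] - i[:-1, :-1] - i[:-1, 1:]) / 2.0
    # The level-line is orthogonal to the gradient direction.
    return np.arctan2(gx, -gy)
```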
a-3) matching distribution characteristics to form a matching distribution characteristic set
Search for and match similar features according to the directions of the distribution feature vectors; the largest common subgraph of the two distribution subgraphs is the matching result, and all matched feature points are added to the matching distribution feature set.
Step B: segment the images using the obtained matching distribution feature set, estimate the rotation angle between adjacent two-dimensional plane images, and construct a rough three-dimensional contour model of the three-dimensional target object closed over a 360-degree viewing angle. The specific processing procedure of step B is as follows:
b-1) segmenting the image
Compare the moduli of the distribution feature vectors of the matched feature points: matched points lying between matched points whose moduli change at different ratios are edge points, and adjacent edge points are linked to form edges; adjacent non-edge points together with adjacent edge points enclose a face. Contour points of each plane are screened according to the relative variation of the unified distribution features of adjacent two-dimensional plane images under different viewing angles, and the plane contour points are connected to segment the two-dimensional plane image;
b-2) estimating the rotation angle between adjacent two-dimensional planar images
The images of the same plane under each viewing angle form an image set of that plane. The variation of the width of the plane imaged in the view plane can be approximated by a cosine function; the horizontal feature straight lines of each plane in the included-angle front views are detected, and the rotation angle θ_{a(i-1)-a(i)} between adjacent two-dimensional plane images is calculated from the angles and lengths of these straight lines.
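A hedged sketch of this cosine width model: the apparent width of a plane shrinks as the cosine of its angle to the view plane, so the rotation between adjacent views can be recovered from two apparent widths. The fronto-parallel width w_true is an assumption here (taken, for instance, as the largest width observed across all views); the patent does not spell out its exact estimator:

```python
import math

def rotation_between_views(w_prev, w_curr, w_true):
    # w = w_true * cos(phi): invert for each view, then take the difference.
    phi_prev = math.acos(min(w_prev / w_true, 1.0))
    phi_curr = math.acos(min(w_curr / w_true, 1.0))
    return abs(phi_curr - phi_prev)   # theta_{a(i-1)-a(i)} in radians
```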
B-3) constructing a rough three-dimensional contour model of the three-dimensional target object closed at a 360-degree viewing angle
Adjacent two-dimensional plane images are stitched according to the matching distribution feature set and the rotation angles between adjacent two-dimensional planes, constructing a rough three-dimensional contour model of the target object closed over a 360-degree viewing angle.
A schematic diagram of the angle mapping model optimization is shown in FIG. 3. The specific processing procedure comprises the following steps:
Step C: generating an interpolated two-dimensional plane image. The specific processing procedure of step C is as follows:
c-1) calculating the transformation angle of the adjacent two-dimensional plane images
From the rotation angle θ_{a(i-1)-a(i)} corresponding to the adjacent two-dimensional plane images I_{a(i-1)} and I_{a(i)} of the three-dimensional contour model, the transformation angles α_{a(i-1)-a(i)} and α_{a(i)-a(i-1)} are calculated, satisfying:
α_{a(i-1)-a(i)} + α_{a(i)-a(i-1)} = θ_{a(i-1)-a(i)}
By default the transformation angles of the two images are α_{a(i-1)-a(i)} = θ_{a(i-1)-a(i)}/2 and α_{a(i)-a(i-1)} = -θ_{a(i-1)-a(i)}/2. The limitation on the transformation angles is that the two angles α_{a(i-1)-a(i)} and α_{a(i)-a(i-1)} together make up θ_{a(i-1)-a(i)}, and a single transformation angle is smaller than θ_{a(i-1)-a(i)}.
C-2) Calculating the depth map of the two-dimensional plane images
The matched point set from the matching distribution feature set replaces the dense matching point set of stereo matching to compute the depth map;
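A minimal sketch of this sparse variant: each matched feature pair yields one depth sample via the classic disparity relation. The focal length f and baseline b are illustrative parameters, not values fixed by the patent:

```python
def sparse_depth(matches, f, b):
    """matches: list of ((x1, y1), (x2, y2)) matched image points."""
    depths = {}
    for (x1, y1), (x2, y2) in matches:
        d = abs(x1 - x2)                   # disparity along the x axis
        if d > 1e-6:
            depths[(x1, y1)] = f * b / d   # classic stereo depth formula
    return depths
```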
C-3) Least-squares fitting of the planes and curved surfaces of I_{a(i-1)} and I_{a(i)} to form fitted surfaces
Starting from the center point of each image color block, region growing traverses the depth distribution rule and segments the surface; planes and curved surfaces are then fitted by least squares, and the fitted surfaces are represented by three-dimensional coordinates in the camera coordinate system corresponding to the image;
the plane fitting formula is:
the surface fitting formula is:
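A numpy sketch of both least-squares fits under the forms assumed above (the design matrix carries the monomials; np.linalg.lstsq solves for the coefficients):

```python
import numpy as np

def fit_plane(pts):
    """pts: (N, 3) camera-frame points; returns (a, b, c) of z = ax + by + c."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

def fit_quadric(pts):
    """Returns (a..f) of z = ax^2 + by^2 + cxy + dx + ey + f."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.c_[x**2, y**2, x * y, x, y, np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs
```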
C-4) Rotating the fitted surfaces corresponding to I_{a(i-1)} and I_{a(i)}
The fitted surfaces corresponding to I_{a(i-1)} and I_{a(i)} are rotated by the corresponding transformation angles α_{a(i-1)-a(i)} and α_{a(i)-a(i-1)} to generate I′_{a(i-1)} and I′_{a(i)}; the transformation matrix expression (a rotation about the model's vertical axis) is:
R(α) =
[  cos α   0   sin α ]
[    0     1     0   ]
[ -sin α   0   cos α ]
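A sketch of applying this rotation, under the same vertical-axis assumption used for the matrix above:

```python
import numpy as np

def rotate_surface(points, alpha):
    """points: (N, 3) fitted-surface points; alpha: transformation angle (rad)."""
    c, s = np.cos(alpha), np.sin(alpha)
    R = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    return points @ R.T
```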
c-5) generating an interpolated two-dimensional planar image
A perspective projection matrix projects I′_{a(i-1)} and I′_{a(i)} to the viewing angle of the interpolated frame; the overlapping regions are fused according to the matching distribution feature set, the interpolated two-dimensional plane image is scaled, and its distance from the model center is adjusted. When the overlap comparison of the projected I′_{a(i-1)} and I′_{a(i)} deviates, a median correction is applied, and the interpolated two-dimensional plane image I_{(a+1)(i-1)} is generated.
C-6) generating a two-dimensional planar image P
Calculate the rotation angle between I_{a(i+1)} and I_{(a+1)(i-1)}, and repeat steps (C-1), (C-2), (C-3), (C-4), and (C-5) to generate a two-dimensional plane image P with the same viewing angle as the two-dimensional plane image I_{a(i)} in the three-dimensional contour model;
C-7) Calculating the error of the interpolated two-dimensional plane image
The similarity between the two-dimensional plane image I_{a(i)} and the two-dimensional plane image P with the same viewing angle in the three-dimensional contour model is calculated according to the SSIM (structural similarity) index:
SSIM(x, y) = (2·μ_x·μ_y + c_1)(2·σ_xy + c_2) / ((μ_x² + μ_y² + c_1)(σ_x² + σ_y² + c_2))
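For reference, a minimal sketch of this similarity computation, assuming scikit-image (channel_axis=-1 applies to color images):

```python
from skimage.metrics import structural_similarity

def interpolation_similarity(view_model, view_regenerated):
    # SSIM between I_a(i) and the regenerated view P at the same angle.
    return structural_similarity(view_model, view_regenerated, channel_axis=-1)
```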
c-8) judging whether the error meets the requirement
SSIM(x, y) ranges over [-1, 1]; the closer SSIM(x, y) is to 1, the higher the similarity of the two images. When SSIM(x, y) > a (it is generally considered that the two images are highly similar when a > 0.7, and that human eyes can hardly distinguish them when a > 0.9), the generated two-dimensional plane image I_{(a+1)(i-1)} is judged to meet the requirement and is added to the three-dimensional contour model of the target object; otherwise, parameters such as the viewing angle of the perspective projection matrix and the aspect ratio of the projection plane are adjusted, and steps (C-5), (C-6), (C-7), and (C-8) are repeated. Here θ is the perspective projection angle, HW is the aspect ratio of the projection plane, n is the distance from the near clipping plane to the origin, and f is the distance from the far clipping plane to the origin.
Perspective projection matrix:
P =
[ cot(θ/2)/HW      0            0                 0            ]
[      0        cot(θ/2)        0                 0            ]
[      0           0      (n+f)/(n-f)      2·n·f/(n-f)         ]
[      0           0           -1                 0            ]
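A sketch of this matrix as a function of the four parameters, under the OpenGL-style convention assumed in the reconstruction above (theta in radians; the patent does not fix a convention):

```python
import numpy as np

def perspective_matrix(theta, hw, n, f):
    t = 1.0 / np.tan(theta / 2.0)          # cot(theta / 2)
    return np.array([
        [t / hw, 0.0,  0.0,               0.0],
        [0.0,    t,    0.0,               0.0],
        [0.0,    0.0,  (n + f) / (n - f), 2 * f * n / (n - f)],
        [0.0,    0.0, -1.0,               0.0]])
```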
Step D: judging whether the fineness of the three-dimensional contour model meets the requirement. The specific processing procedure of step D is as follows:
d-1) continuously generating new interpolated two-dimensional plane images, adding the new interpolated two-dimensional plane images into a three-dimensional contour model, and updating a matching feature set and rotation angle information
New interpolated two-dimensional plane images are continuously generated through step C, and the matching feature set and the rotation angle information θ_{ai} are updated according to steps (A-2), (A-3), (B-1), and (B-2);
D-2) Judging whether the fineness of the three-dimensional contour model of the target object meets the requirement
Let the maximum rotation angle between adjacent two-dimensional plane images in the three-dimensional contour model of the target object be θ_max, and let the set maximum rotation angle be θ_s. When θ_max ≤ θ_s, the fineness of the three-dimensional contour model of the target object meets the requirement; generation of new intermediate-viewing-angle two-dimensional plane images stops, and the three-dimensional contour model of the target object is output.
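Putting step D together, a sketch of the outer refinement loop; rotation_gaps, generate_interpolated_view, and update_model are hypothetical placeholders for the per-pair rotation angles, step C, and steps (A-2) through (B-2), respectively:

```python
def refine_model(model, theta_s):
    gaps = model.rotation_gaps()                     # theta between adjacent views
    while max(gaps) > theta_s:
        i = gaps.index(max(gaps))
        view = generate_interpolated_view(model, i)  # step C
        update_model(model, view)                    # steps (A-2)-(B-2)
        gaps = model.rotation_gaps()
    return model
```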

Claims (7)

1. A rapid three-dimensional contour model construction method based on monocular vision, characterized by comprising two parts: rough three-dimensional contour model construction based on local distribution features, and angle mapping model optimization; the rough three-dimensional contour model construction based on local distribution features stitches and fuses a limited number of two-dimensional plane images of a three-dimensional target object, acquired by a monocular camera from a limited set of viewing angles, to generate a rough three-dimensional contour model of the target object closed over a 360-degree viewing angle, and the angle mapping model optimization optimizes the generated rough model to produce a more coherent three-dimensional contour model of the target object closed over the 360-degree viewing angle.
2. The rough three-dimensional contour model construction based on local distribution features according to claim 1, characterized by comprising the steps of:
step A: acquiring two-dimensional plane images which are adjacent and connected and cover 360 degrees of a three-dimensional target object by adopting a monocular camera, extracting and matching the distribution characteristics of the two-dimensional plane images, and forming a matching distribution characteristic set;
Step B: segmenting the images using the obtained matching distribution feature set, estimating the rotation angle between adjacent two-dimensional plane images, and constructing a rough three-dimensional contour model of the three-dimensional target object closed over a 360-degree viewing angle.
3. The rough three-dimensional contour model construction based on local distribution features according to claim 2, wherein the specific processing procedure of the step a is as follows:
a-1) acquisition of two-dimensional planar images
The invention adopts a monocular camera to collect two types of two-dimensional plane images: (A-1-i) two-dimensional plane image collection for small-size objects and (A-1-ii) two-dimensional plane image collection for large-size objects;
a-1-i) two-dimensional planar image acquisition of small-sized objects
For small-size objects, shoot front views of every included angle of the target object in clockwise (or anticlockwise) order, continuously covering all viewing angles of the target, and ensure that overlapping regions exist between the collected front views of adjacent included angles;
a-1-ii) two-dimensional planar image acquisition of large-size objects
For large-size objects, where several consecutive included-angle front views and plane front views are needed to cover a single included angle or plane of the object, shoot the included-angle front views and plane front views of the object continuously in clockwise (or anticlockwise) order, and ensure an overlapping region between adjacent collected front views;
a-2) extracting distribution characteristics
The invention extracts distribution features in three steps: (A-2-i) abstracting the image into color blocks and defining the color block screening rules; (A-2-ii) extracting color block edge points and image contour points, with the contour extraction result and the color block edge point extraction result correcting each other; (A-2-iii) extracting color block distribution features, color block sub-object distribution features, critical points, and contour distribution features, and connecting edge points into lines to form feature straight lines;
a-3) matching distribution characteristics to form a matching distribution characteristic set
Search for and match similar features according to the directions of the distribution feature vectors; the largest common subgraph of the two distribution subgraphs is the matching result, and all matched feature points are added to the matching distribution feature set.
4. The rough three-dimensional contour model construction based on local distribution features according to claim 2, wherein the specific processing procedure of step B is as follows:
b-1) segmenting the image
Screening contour points of each plane according to the relative variation of the unified distribution features of adjacent two-dimensional plane images under different viewing angles, and connecting the plane contour points to segment the two-dimensional plane image;
b-2) estimating the rotation angle between adjacent two-dimensional planar images
The variation of the width of a planar structure imaged in the view plane can be approximated by a cosine function, and the rotation angle θ_{a(i-1)-a(i)} between adjacent two-dimensional plane images is calculated from the angles and lengths of the feature straight lines extracted in step A-2;
B-3) constructing a rough three-dimensional contour model of the three-dimensional target object closed at a 360-degree viewing angle
Adjacent two-dimensional plane images are stitched according to the matching distribution feature set and the rotation angles between adjacent two-dimensional planes, constructing a rough three-dimensional contour model of the target object closed over a 360-degree viewing angle.
5. The angle mapping model optimization according to claim 1, characterized by comprising the steps of:
step C: generating an interpolated two-dimensional plane image;
Step D: judging whether the fineness of the three-dimensional contour model meets the requirement.
6. The optimization of the angle mapping model according to claim 5, wherein the specific process of step C is as follows:
c-1) calculating the transformation angle of the adjacent two-dimensional plane images
From the rotation angle θ_{a(i-1)-a(i)} corresponding to the adjacent two-dimensional plane images I_{a(i-1)} and I_{a(i)} of the three-dimensional contour model, the corresponding transformation angles α_{a(i-1)-a(i)} and α_{a(i)-a(i-1)} are calculated, satisfying:
α_{a(i-1)-a(i)} + α_{a(i)-a(i-1)} = θ_{a(i-1)-a(i)}
C-2) Calculating the depth map of the adjacent two-dimensional plane images I_{a(i-1)} and I_{a(i)}
C-3) Least-squares fitting of the planes and curved surfaces of I_{a(i-1)} and I_{a(i)} to form fitted surfaces
C-4) Rotating the fitted surfaces corresponding to I_{a(i-1)} and I_{a(i)}
The fitted surfaces corresponding to I_{a(i-1)} and I_{a(i)} are rotated by the corresponding transformation angles α_{a(i-1)-a(i)} and α_{a(i)-a(i-1)} to generate I′_{a(i-1)} and I′_{a(i)};
C-5) generating an interpolated two-dimensional planar image
A perspective projection matrix projects I′_{a(i-1)} and I′_{a(i)} to the viewing angle of the interpolated frame; the overlapping regions are fused according to the matching distribution feature set, the interpolated two-dimensional plane image is scaled, and its distance from the model center is adjusted. When the overlap comparison of the projected I′_{a(i-1)} and I′_{a(i)} deviates, a median correction is applied, and the interpolated two-dimensional plane image I_{(a+1)(i-1)} is generated;
C-6) generating a two-dimensional planar image P
Calculate the rotation angle between I_{a(i+1)} and I_{(a+1)(i-1)}, and repeat steps (C-1), (C-2), (C-3), (C-4), and (C-5) to generate a two-dimensional plane image P with the same viewing angle as the two-dimensional plane image I_{a(i)} in the three-dimensional contour model;
c-7) calculating the error of the interpolated two-dimensional plane image
Calculating, according to the SSIM (structural similarity) index, the similarity between the two-dimensional plane image I_{a(i)} and the two-dimensional plane image P with the same viewing angle in the three-dimensional contour model:
SSIM(x, y) = (2·μ_x·μ_y + c_1)(2·σ_xy + c_2) / ((μ_x² + μ_y² + c_1)(σ_x² + σ_y² + c_2))
c-8) judging whether the error meets the requirement
SSIM(x, y) ranges over [-1, 1]; the closer SSIM(x, y) is to 1, the higher the similarity of the two images. When SSIM(x, y) > a (it is generally considered that the two images are highly similar when a > 0.7 and that human eyes can hardly distinguish them when a > 0.9), the generated two-dimensional plane image I_{(a+1)(i-1)} is judged to meet the requirement and is added to the three-dimensional contour model of the target object; otherwise, parameters such as the viewing angle of the perspective projection matrix and the aspect ratio of the projection plane are adjusted, and steps (C-5), (C-6), (C-7), and (C-8) are repeated.
7. The optimization of the angle mapping model according to claim 5, wherein the specific process of step D is as follows:
d-1) continuously generating new interpolated two-dimensional plane images, adding the new interpolated two-dimensional plane images into a three-dimensional contour model, and updating a matching feature set and rotation angle information
New interpolated two-dimensional plane images are continuously generated through step C, and the matching feature set and the rotation angle information θ_{ai} are updated according to steps (A-2), (A-3), (B-1), and (B-2);
D-2) Judging whether the fineness of the three-dimensional contour model of the target object meets the requirement
Let the maximum rotation angle between adjacent two-dimensional plane images in the three-dimensional contour model of the target object be θ_max, and let the set maximum rotation angle be θ_s. When θ_max ≤ θ_s, the fineness of the three-dimensional contour model of the target object meets the requirement; generation of new intermediate-viewing-angle two-dimensional plane images stops, and the three-dimensional contour model of the target object is output.
CN202311402928.8A 2023-10-27 2023-10-27 Rapid three-dimensional contour model construction method based on monocular vision Pending CN117392319A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311402928.8A CN117392319A (en) 2023-10-27 2023-10-27 Rapid three-dimensional contour model construction method based on monocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311402928.8A CN117392319A (en) 2023-10-27 2023-10-27 Rapid three-dimensional contour model construction method based on monocular vision

Publications (1)

Publication Number Publication Date
CN117392319A 2024-01-12

Family

ID=89462673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311402928.8A Pending CN117392319A (en) 2023-10-27 2023-10-27 Rapid three-dimensional contour model construction method based on monocular vision

Country Status (1)

Country Link
CN (1) CN117392319A (en)


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination