CN110544300A - Method for automatically generating three-dimensional model based on two-dimensional hand-drawn image characteristics - Google Patents

Method for automatically generating three-dimensional model based on two-dimensional hand-drawn image characteristics

Info

Publication number
CN110544300A
Authority
CN
China
Prior art keywords
image
edge
dimensional
plane
triangular
Prior art date
Legal status
Granted
Application number
CN201910839867.9A
Other languages
Chinese (zh)
Other versions
CN110544300B (en)
Inventor
李刚
黄翰
黎一锴
刘子钊
Current Assignee
Foshan Jiuzhang Intelligent Technology Co Ltd
Original Assignee
Foshan Jiuzhang Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Foshan Jiuzhang Intelligent Technology Co Ltd filed Critical Foshan Jiuzhang Intelligent Technology Co Ltd
Priority to CN201910839867.9A
Publication of CN110544300A
Application granted
Publication of CN110544300B
Active legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A method for automatically generating a three-dimensional model based on two-dimensional hand-drawn image features obtains an edge image, a morphological transformation image and a filled image, divides the original image into multiple layers, and combines the images to generate a depth image. Each pixel is then lifted in three-dimensional space to a height determined by its layer; the four pixels of every 2 × 2 lattice are connected into a quadrilateral plane, each quadrilateral plane is split along a diagonal into two triangular planes, and processing all lattices in sequence yields a preliminary three-dimensional model. All triangular planes are then verified: each is removed or kept according to the gray values of its three pixels, giving the final three-dimensional model. A three-dimensional model consistent with the texture features of the two-dimensional hand-drawn image is obtained with high accuracy and without precise positioning, so robustness is greatly improved, computing resources and equipment cost are reduced, and the method is suitable for real-time conversion.

Description

Method for automatically generating three-dimensional model based on two-dimensional hand-drawn image characteristics
Technical Field
The invention relates to the field of image processing, and in particular to a method for automatically generating a three-dimensional model based on two-dimensional hand-drawn image features.
Background
In the prior art, methods for converting two-dimensional images into three-dimensional models mainly include three-dimensional reconstruction with multi-camera calibration, high-precision modeling based on depth information, RGB-D three-dimensional reconstruction, and the like, but all of these require hardware devices, such as a laser range finder for acquiring depth information, a Kinect, or consumer-grade depth cameras. Without such devices, these methods can neither reconstruct a three-dimensional model from a two-dimensional image nor reproduce the contour and texture features of the original image.
The prior art therefore also includes methods for converting a two-dimensional image into a three-dimensional model without additional image-capture equipment, but these generally involve complex processing and computation, and generating the depth image is itself complicated, so the overall conversion is inefficient and time-consuming and cannot meet practical requirements or mass production. Such methods also contain many unstable factors, convert from two dimensions to three with low accuracy, and cannot reproduce the contour and texture features of the original image.
Disclosure of Invention
The invention aims to overcome at least one defect of the prior art by providing a method for automatically generating a three-dimensional model based on two-dimensional hand-drawn image features that identifies and extracts two-dimensional image features with high accuracy, obtains a three-dimensional model consistent with the texture features of the two-dimensional hand-drawn image, and generates the three-dimensional model with a simple and efficient calculation process.
The technical scheme adopted by the invention is a method for automatically generating a three-dimensional model based on two-dimensional hand-drawn image features, comprising the following steps:
S1, performing edge-preserving filtering on the hand-drawn image and then extracting the image edges to obtain an edge image. Preprocessing with a commonly used edge-preserving filter facilitates accurate extraction of the subsequent edges and improves accuracy. After the edge-preserving filtering, the identification features of the two-dimensional image are extracted; these are also the main features that distinguish the subject region of the image, which helps the subsequent combination with other images to form a depth image.
S2, performing an advanced morphological transformation on the edge image obtained in S1 to close the unclosed parts of the edges, obtaining a morphological transformation image. Advanced morphological transformations include the common opening, closing, morphological gradient, top-hat and black-hat operations; each operation produces a different result on the same image, and the operation can be chosen according to the lines and background of the image, the user's requirements, and so on.
S3, filling the background outside the outermost contour of the morphological transformation image of S2 to obtain a filled image. A contour is an outline produced by the edge extraction, and the outermost contour is the outer periphery of the pattern subject in the two-dimensional image; filling the background outside it separates the features of the pattern subject from those of the non-subject region.
S4, combining the edge image, the morphological transformation image and the filled image to obtain a three-layer depth image. Combining the images obtained in S1, S2 and S3 yields a depth image divided into three levels: the outer background, the contour edges, and the inner contour regions. The outer background is the background filled in S3, a contour edge is a contour produced by edge extraction, and an inner contour region is the area enclosed by a contour or lying between contours.
S5, on the basis of the three-layer depth image, lifting each pixel in three-dimensional space to a height determined by its layer, and processing every 2 × 2 lattice in the three-dimensional space to obtain a quadrilateral plane; that is, the four pixels of each 2 × 2 lattice are connected to form a quadrilateral plane, which may be rectangular or rhombic. Each quadrilateral plane is divided into two triangular planes along a diagonal pair of pixels, and processing all 2 × 2 lattices in sequence generates a preliminary three-dimensional model. Lifting the pixels by heights determined by their layer numbers raises the multiple layers of the depth image to their corresponding heights, producing a model with three-dimensional relief. The triangular planes include those formed by pixels of different levels, recorded as connecting surfaces, and those formed by pixels of the same level, recorded as horizontal planes. On a connecting surface between levels the gray values of the three connected pixels differ, while on a horizontal plane they are identical; the connecting surfaces between levels form the three-dimensional relief of each edge of the pattern.
S6, verifying all the triangular planes, removing or keeping each according to the gray values of its three corresponding pixels, and processing all the triangular planes in sequence to obtain the final three-dimensional model. Different three-dimensional models can be generated depending on the rejection criterion. If the gray values of the three connected pixels of a triangular plane are all 0, all three lie on layer 0. When layer 0 represents the non-subject region of the image, it does not belong to the pattern subject, and after rejection only the pattern subject is kept; without rejection, the non-subject region is kept along with the pattern when the gray values of the three connected pixels are all 0. The pattern subject includes the regions enclosed by contours and the contours themselves.
The edge is one of the main features of the image. Performing edge-preserving filtering before edge extraction improves the accuracy of the identified two-dimensional image features and reduces the screening or detection steps after extraction, so the two-dimensional image features are identified and extracted quickly. The invention obtains the other levels using morphological transformation and a flood-fill algorithm, so the depth image is generated by combination; the processing and the underlying algorithms are simple, which improves the efficiency of converting the two-dimensional image into a depth image. After the depth image is formed by combination, a preliminary three-dimensional model with relief is generated merely by raising the pixels of the different levels and dividing them into planes; after that, the triangular planes are removed or kept simply by checking gray values, yielding the final three-dimensional model. The whole conversion from two dimensions to three is therefore easy to operate and computationally simple, and completes for a single two-dimensional image in a short time, so the invention suits real-time scenarios and meets the requirement of instant conversion. Because the edge image is the main identification feature, the edge features are well preserved; and because the model is built from the layered height features and gray values, the pattern's texture is expressed, that is, the texture features of the two-dimensional image are reproduced.
Preferably, in step S1 a Canny edge detection algorithm is used to extract the image edges. There are many edge detection algorithms, including the Sobel, Canny and Laplacian operators. The Sobel operator handles gradual gray changes and heavy noise well but localizes edges less accurately; the Laplacian operator is sensitive to noise; Canny is not easily disturbed by noise and detects both strong and weak edges. Because it uses double thresholds for detection, its edge detection is more accurate, so adopting the Canny edge detection algorithm benefits the construction of the subsequent depth image.
On the premise of using the Canny operator to detect the edge, the step S1 specifically includes:
(1) Performing edge-preserving filtering on the hand-drawn image and saving the processed image;
(2) Setting the high and low thresholds of the Canny edge detection algorithm, whose choice depends on the content of the given image; preferably, the low threshold is set to 35 and the high threshold to 70, which reduces threshold retuning and suits most images.
(3) Calculating the gradient strength and direction of each pixel in the image; an edge pixel whose gradient value is above the high threshold is marked as a strong edge pixel, one whose gradient value is between the low and high thresholds is marked as a weak edge pixel, and one whose gradient value is below the low threshold is suppressed and rejected;
(4) Performing threshold detection on all edge pixels to obtain the edge image.
Using the Canny operator to extract the image edges keeps the calculation simple, reduces computation time, and improves the efficiency of edge detection.
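The double-threshold rule of step (3) can be sketched in plain NumPy. This is an illustration only; the treatment of values exactly equal to a threshold is an assumption, since the text says only "higher" and "smaller".

```python
import numpy as np

def double_threshold(grad, low=35, high=70):
    """Classify gradient magnitudes per the double-threshold rule:
    >= high -> strong edge (2), between -> weak edge (1), below low -> suppressed (0).
    Boundary handling at the exact threshold values is an assumption."""
    labels = np.zeros_like(grad, dtype=np.uint8)
    labels[grad >= high] = 2                    # strong edge pixels
    labels[(grad >= low) & (grad < high)] = 1   # weak edge pixels
    return labels                               # suppressed pixels stay 0

# Gradient magnitudes spanning all three cases.
grad = np.array([[10, 40, 90],
                 [70, 34, 35],
                 [ 0, 100, 69]], dtype=float)
labels = double_threshold(grad)
```

With the preferred thresholds of 35 and 70, only the pixels labelled 2 (and any weak pixels connected to them, in a full Canny pass) survive into the edge image.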
Preferably, the advanced morphological transformation in step S2 is a closing operation; that is, the unclosed parts of the edges are closed using the morphological closing operation, computed with a 5 × 5 kernel whose reference point is at its center, performing dilation followed by erosion on the edges. Closing eliminates small black regions and isolated points lower than their neighbors, achieving denoising; it smooths object contours, closes narrow gaps and long thin gullies, removes small holes, and fills breaks in the contour lines.
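The closing operation of this paragraph, dilation followed by erosion with a 5 × 5 kernel, can be sketched in plain NumPy (a naive sliding-window stand-in for illustration, not OpenCV's optimized implementation). The example shows a one-pixel break in an edge being closed:

```python
import numpy as np

def dilate(img, k=5):
    """Binary dilation: each pixel becomes the max over its k x k neighborhood."""
    pad = k // 2
    p = np.pad(img, pad, mode='constant', constant_values=0)
    return np.array([[p[i:i + k, j:j + k].max() for j in range(img.shape[1])]
                     for i in range(img.shape[0])], dtype=img.dtype)

def erode(img, k=5):
    """Binary erosion: each pixel becomes the min over its k x k neighborhood."""
    pad = k // 2
    p = np.pad(img, pad, mode='constant', constant_values=0)
    return np.array([[p[i:i + k, j:j + k].min() for j in range(img.shape[1])]
                     for i in range(img.shape[0])], dtype=img.dtype)

def close_edges(img, k=5):
    """Morphological closing as in step S2: dilation followed by erosion."""
    return erode(dilate(img, k), k)

# A horizontal edge with a one-pixel break at column 5.
img = np.zeros((12, 12), dtype=np.uint8)
img[5, 2:5] = 255
img[5, 6:8] = 255
closed = close_edges(img)
```

After closing, the break at (5, 5) is filled, which is exactly what the subsequent flood fill of S3 needs: an unclosed contour would let the fill leak into the interior.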
Preferably, in step S3 a flood-fill algorithm (Flood Fill) fills the external background with white, starting from the top-left and bottom-right corners of the image, to generate the filled image; a filled image obtained this way may be called a flood-filled image. The flood operation defines a single-channel mask image two pixels longer than the original in both length and width, then starts flood filling from the pixel at the top-left corner (1, 1) and the pixel at the bottom-right corner (-1, -1). The pixels of row 0, column 0 and the last row and column are preprocessed to stay consistent with the corresponding positions of the morphological transformation image, and the background outside the outermost contour of the image is filled white.
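The flood fill described here can be sketched as a simple breadth-first fill seeded at the two corners. The mask image and border preprocessing of the patent are omitted for brevity; the function name and these simplifications are illustrative assumptions, not the patent's code:

```python
from collections import deque
import numpy as np

def flood_fill_background(img, fill=255):
    """Fill the black background reachable from the two corner seeds with
    `fill`, without crossing white contour pixels (4-connectivity BFS)."""
    h, w = img.shape
    out = img.copy()
    for seed in [(0, 0), (h - 1, w - 1)]:   # top-left and bottom-right seeds
        if out[seed] != 0:
            continue                        # already filled or on a contour
        q = deque([seed])
        out[seed] = fill
        while q:
            i, j = q.popleft()
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < h and 0 <= nj < w and out[ni, nj] == 0:
                    out[ni, nj] = fill
                    q.append((ni, nj))
    return out

# Closed square contour: the exterior turns white, the interior stays black.
img = np.zeros((9, 9), dtype=np.uint8)
img[2, 2:7] = img[6, 2:7] = 255
img[2:7, 2] = img[2:7, 6] = 255
filled = flood_fill_background(img)
```

Because the contour was closed in S2, the fill cannot leak inside, which is what makes the later three-level separation of background, edge and interior possible.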
Preferably, the specific step of generating the depth image in step S4 includes:
(1) Inverting the filled image so that the background outside the outermost contour changes from white to black and the region enclosed by each inner contour changes from black to white;
(2) Marking the edge positions on the depth image, represented in gray, according to the edge image and the morphological transformation image;
(3) Denote the edge image by edge, the morphological transformation image by shape, the filled image by flood, and the depth image by depth. The i-th pixel of the depth image is then given by depth[i] = 255 - flood[i] - int(edge[i]/2) + shape[i], where int is the rounding operation, and edge[i], shape[i], flood[i] and depth[i] are the gray values of the i-th pixel in the edge image, morphological transformation image, filled image and depth image respectively; i is a positive integer whose maximum is the total number of pixels in the image.
(4) In the generated depth image, the gray value of the background outside the outermost contour is 0, the gray value of each contour edge is G, and the gray value of each inner contour region is 255, where 0 < G < 255; preferably G = 127, for ease of calculation and to highlight the difference between the contour edges and the other regions.
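Assuming the combination formula of step (3) reads depth[i] = 255 - flood[i] - int(edge[i]/2) + shape[i] (a reconstruction of the garbled OCR), the three levels of step (4) fall out of the arithmetic directly; note the contour edge evaluates to 128, immediately adjacent to the preferred mid-gray G = 127:

```python
import numpy as np

# One representative pixel per level:
#   column 0: outer background, column 1: contour edge, column 2: inner region.
edge  = np.array([  0, 255,   0], dtype=np.int32)  # Canny edges are white
shape = np.array([  0, 255,   0], dtype=np.int32)  # closed edges are white
flood = np.array([255, 255,   0], dtype=np.int32)  # outer background filled white

# Reconstructed per-pixel combination from step (3).
depth = 255 - flood - edge // 2 + shape
```

The result is [0, 128, 255]: background 0, edge near mid-gray, interior 255, matching the three-level division described in step (4).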
Preferably, after the triangular planes are obtained in step S5, since some triangular planes are connecting surfaces between different levels of the depth image while others are horizontal planes within a single level, the edges between connecting surfaces and horizontal planes are preprocessed with Gaussian blur to smooth them; the Gaussian convolution kernel is a 5 × 5 matrix, and sigmaX, the standard deviation of the Gaussian kernel function in the X direction, is set to 0.
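When sigmaX is 0, OpenCV derives the standard deviation from the kernel size, sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8 (the convention of OpenCV's getGaussianKernel). Under that assumption, the 5 × 5 blur kernel can be built as the outer product of a 1-D Gaussian:

```python
import numpy as np

def gaussian_kernel_2d(ksize=5, sigma=0.0):
    """Build a ksize x ksize Gaussian kernel. For sigma <= 0, derive sigma
    from the kernel size the way OpenCV does when sigmaX = 0."""
    if sigma <= 0:
        sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8   # 1.1 for ksize = 5
    x = np.arange(ksize) - (ksize - 1) / 2.0
    g = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()                                      # normalize the 1-D kernel
    return np.outer(g, g)   # separable: the 2-D kernel is the outer product

k = gaussian_kernel_2d(5, 0.0)
```

Convolving the heights near the connecting-surface/horizontal-plane boundaries with this kernel softens the step between levels, which is the smoothing effect this paragraph describes.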
Preferably, step S6 specifically includes:
(1) Passing the rejection-layer value to the interface of the depth-image-to-model function, which turns the depth image into a three-dimensional model; the function provides several parameter interfaces so that a final model can be produced under each parameter setting. The default rejection layer is -1: a triangular plane whose corresponding pixels have gray value 0 is not rejected, and the generated model keeps the bottom-plane background. With a rejection layer of 0, a triangular plane whose corresponding pixels have gray value 0 is rejected, and the generated model has its bottom plane removed, retaining only model information including height and surface features, the surface features including surface texture. That is, layer 0 represents the external background layer of the depth image; when the gray values of a triangular plane's pixels are all 0, the plane is a horizontal plane in the external background layer, and removing all such triangles removes the external background layer, i.e. the bottom plane of the image.
(2) Checking all triangular planes against the passed-in rejection layer and, after rejection and retention, outputting the final three-dimensional model.
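Steps S5 and S6 together can be sketched as follows. The layer-to-height mapping and the culling of only all-zero triangles are illustrative assumptions consistent with the rejection-layer-0 behavior described above, not the patent's actual implementation:

```python
import numpy as np

LAYER_HEIGHT = {0: 0.0, 127: 1.0, 255: 2.0}  # assumed layer-to-height mapping

def build_triangles(depth):
    """Lift each pixel to its layer height, split every 2 x 2 lattice into
    two triangles along the diagonal (S5), and drop triangles whose three
    vertices all have gray value 0 (S6 with rejection layer 0)."""
    h, w = depth.shape
    tris = []
    for i in range(h - 1):
        for j in range(w - 1):
            corners = [(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)]
            verts = [(x, y, LAYER_HEIGHT[depth[x, y]]) for x, y in corners]
            # Diagonal split: (tl, tr, bl) and (tr, bl, br).
            for tri in ([verts[0], verts[1], verts[2]],
                        [verts[1], verts[2], verts[3]]):
                grays = [depth[int(v[0]), int(v[1])] for v in tri]
                if all(g == 0 for g in grays):   # bottom-plane triangle: cull
                    continue
                tris.append(tri)
    return tris

# 3 x 3 depth map: 4 lattices -> 8 triangles; the all-zero top-left lattice
# contributes 2 bottom-plane triangles, which are culled.
depth = np.array([[0, 0, 255],
                  [0, 0, 255],
                  [255, 255, 255]], dtype=np.int32)
tris = build_triangles(depth)
```

Of the 8 candidate triangles, 6 survive: only the two lying entirely on layer 0 are removed, mirroring the bottom-plane removal the function interface exposes.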
Preferably, the interface of the depth-image-to-model function includes settings for the height and width of the generated model. These user-friendly settings, such as controllable model height and width, help constrain the range and approximate size of the model output, allow the corresponding model to be output directly according to the user's needs, and provide a high degree of customization.
Compared with the prior art, the invention has the following beneficial effects. The invention processes a static picture directly and obtains the edge image through edge-preserving filtering and edge extraction, which improves the accuracy of the two-dimensional image identification features. The edge extraction captures not just the outer contour but all edges of the whole image, so the texture features of the two-dimensional hand-drawn image are preserved. On the basis of the edge image, the morphological transformation image and the filled image are obtained by simple operations, and the three layers of images are combined to generate the depth image; this needs no image-capture equipment and is easy to operate and computationally simple, ensuring the accuracy of feature identification and extraction while improving conversion efficiency. After obtaining the depth image with its three levels, i.e. the background, the contour edges, and each contour inside the image other than the outermost one, the preliminary three-dimensional model is generated by lifting the pixels to their corresponding levels; only pixel positions change, which simplifies calculation and reduces the amount of computation. Even the subsequent verification needs only gray-value detection, so time consumption drops markedly and the overall efficiency of generating the three-dimensional model improves; the method therefore suits instant generation of three-dimensional models, better meets practical application requirements, and facilitates mass production.
Compared with the prior art, the method can obtain the three-dimensional model consistent with the texture characteristics of the two-dimensional hand-drawn image on the premise of keeping high accuracy without accurate positioning, greatly improves robustness, reduces computing resources and equipment cost, and is suitable for real-time conversion.
Drawings
Fig. 1 is a flow chart of the present invention.
Fig. 2 is a sample image of Example 1.
Fig. 3 is an edge image of Example 1.
Fig. 4 is a morphological transformation image of Example 1.
Fig. 5 is a flood-filled image of Example 1.
Fig. 6 is the depth image generated by combination in Example 1.
Fig. 7 is the final three-dimensional model of Example 1.
Fig. 8 is a sample image of Example 2.
Fig. 9 is the final three-dimensional model of Example 2.
Fig. 10 is a schematic diagram of the partitioning of three-dimensional modeling in the prior art.
Fig. 11 is a schematic diagram of a three-dimensional model constructed according to the prior art.
Detailed Description
The drawings are for purposes of illustration only and are not to be construed as limiting the invention. For a better understanding of the following embodiments, certain features in the drawings may be omitted, enlarged or reduced and do not represent the size of the actual product; it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
Example 1
As shown in fig. 1, the method for automatically generating a three-dimensional model based on two-dimensional hand-drawn image features includes the following steps:
S1, performing edge extraction on the input image.
As shown in Fig. 2, a sample hand-drawn artwork image is input and edge-preserving filtering is performed on it. The high and low thresholds are then set to 75 and 30 respectively, and the Canny operator is used to extract the image edges: the gradient strength and direction of each pixel are calculated, edge pixels with gradient values above the high threshold are marked as strong edge pixels, those with gradient values between the low and high thresholds are marked as weak edge pixels, and those below the low threshold are suppressed and removed. After all suppressed edge pixels are removed, the edge image is obtained, as shown in Fig. 3.
S2, performing morphological transformation on the edge image.
A closing operation is performed on the edge image by morphological transformation to close the extracted unclosed parts of the edges, using a 5 × 5 kernel whose reference point is at its center and performing dilation followed by erosion on the edges; finally the morphologically transformed image, i.e. the morphological transformation image, is obtained, as shown in Fig. 4.
S3, flood filling from the two corners.
From the obtained morphological transformation image, a single-channel mask image two pixels longer than the original in both length and width is defined, and flood filling is started from the pixel at the top-left corner (1, 1) and the pixel at the bottom-right corner (-1, -1). The pixels of row 0, column 0 and the last row and column are preprocessed to stay consistent with the corresponding positions of the morphological transformation image, and finally the background outside the outermost contour of the image is filled white, giving the flood-filled image shown in Fig. 5.
S4, forming a depth image by combining the edge image, the morphological transformation image and the flood-filled image.
(1) Inverting the filled image so that the background outside the outermost contour changes from white to black and the region enclosed by each inner contour changes from black to white;
(2) Marking the edge positions on the depth image, represented in gray, according to the edge image and the morphological transformation image;
(3) Denote the edge image by edge, the morphological transformation image by shape, the filled image by flood, and the depth image by depth. The i-th pixel of the depth image is then given by depth[i] = 255 - flood[i] - int(edge[i]/2) + shape[i], where int is the rounding operation, and edge[i], shape[i], flood[i] and depth[i] are the gray values of the i-th pixel in the edge image, morphological transformation image, filled image and depth image respectively; i is a positive integer whose maximum is the total number of pixels in the image.
(4) In the generated depth image, the gray value of the outer background of the outermost contour is 0, the gray value of the edge of each contour is 127, and the gray value of the inner contour-containing region is 255.
Finally, a depth image divided into the three levels of external background, contour edges and inner contour regions is obtained, as shown in Fig. 6.
S5, preliminarily constructing the three-dimensional model from the pixel information of the depth map.
Each pixel is lifted to the corresponding height in three-dimensional space according to its layer; the pixels contained in each 2 × 2 lattice of the depth image are then connected to form a quadrilateral plane, which is split into two triangular planes along the diagonal pixels of the lattice; all 2 × 2 lattices are processed in sequence, finally generating the preliminary three-dimensional model. After the quadrilateral planes are split into triangular planes, preprocessing is needed before the preliminary model is generated: the triangular planes divide into triangular connecting surfaces composed of pixels of different levels of the depth image and triangular horizontal planes composed of pixels of the same level, and to smooth the edges between connecting surfaces and horizontal planes, they are preprocessed with Gaussian blur, using a 5 × 5 Gaussian convolution kernel with sigmaX, the standard deviation of the Gaussian kernel function in the X direction, set to 0.
S6, further checking the preliminary three-dimensional model.
The rejection-layer value -1 is passed to the interface of the depth-image-to-model function, so triangular planes whose corresponding pixels have gray value 0 are kept; that is, the bottom-plane background and the pattern features of the image are both retained, and the final three-dimensional model is output, as shown in Fig. 7.
Example 2
A new sample hand-drawn artwork image is input; the sample two-dimensional image is shown in Fig. 8.
Steps S1-S6 of Example 1 are repeated with the same processing parameters, processing this image exactly as the sample in Example 1 and giving the corresponding final three-dimensional model shown in Fig. 9.
As an example of converting a hand-drawn flower into a three-dimensional model in the prior art, the document "modeling a sketch-type sketched three-dimensional flower" discloses a process for converting a sketched flower into a three-dimensional model in which, as shown in Fig. 10, not only the contour curve of the sketched image but also the side projection curve and the shape and position of the petals must be extracted. The actual calculation requires cluster analysis of some of the curve endpoints, area analysis and construction of the curved surfaces, and spatial position calculation from the rotation and translation components, so the process involves considerable computation and is complex. The method is also limited to a single case: it can only build three-dimensional models of curved petals, cannot be applied to other images, handles at most sketches with a few simple features, and cannot keep the surface texture features, as shown in Fig. 11.
In this embodiment, the conversion to a three-dimensional model is achieved using only edge detection, filling, the depth image and pixel gray-level processing. Compared with the prior art, the processing is simple and easy to compute, ensuring the accuracy of edge detection and texture-feature restoration while achieving efficient conversion. The method models two-dimensional hand-drawn images with complex features, saves a large amount of complicated calculation, is efficient, and can be applied to instant conversion and industrial mass production.
It should be understood that the above-mentioned embodiments of the present invention are only examples for clearly illustrating the technical solutions of the present invention and are not intended to limit the specific embodiments of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the claims of the present invention shall be included in the protection scope of the claims of the present invention.

Claims (10)

1. A method for automatically generating a three-dimensional model based on two-dimensional hand-drawn image features is characterized by comprising the following steps:
S1, performing edge-preserving filtering on the hand-drawn image and then extracting the image edges to obtain an edge image;
S2, performing an advanced morphological transformation on the edge image to close partially unclosed edges, obtaining a morphological transformation image;
S3, filling the background outside the outermost contour of the morphological transformation image to obtain a filled image;
S4, combining the edge image, the morphological transformation image and the filled image to obtain a three-layer distributed depth image;
S5, on the basis of the three-layer distributed depth image, lifting each pixel point to the corresponding height in three-dimensional space according to its layer number, processing every 2×2 lattice in the three-dimensional space to obtain a quadrilateral plane, dividing each quadrilateral plane into two triangular planes along its diagonal pixel points, and generating a preliminary three-dimensional model after all 2×2 lattices have been processed;
And S6, verifying all the triangular planes, removing or retaining each triangular plane according to the gray values of its three corresponding pixel points, and obtaining the final three-dimensional model after all triangular planes have been verified.
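The lifting and triangulation of step S5 can be sketched as follows. This is an illustrative reading, not part of the claims: the mapping from gray value to layer index (0 → background, mid-gray → edge, 255 → interior) and the unit layer height are assumptions.

```python
def depth_to_mesh(depth, layer_height=1.0):
    """Lift each depth-image pixel to a height proportional to its layer,
    then split every 2x2 lattice into two triangles along a diagonal."""
    h, w = len(depth), len(depth[0])

    def layer(g):
        # assumed coding: 0 = outer background, 255 = inner region, else edge
        if g == 0:
            return 0
        return 1 if g < 255 else 2

    vertices = [[(x, y, layer(depth[y][x]) * layer_height)
                 for x in range(w)] for y in range(h)]
    triangles = []
    for y in range(h - 1):
        for x in range(w - 1):
            a = vertices[y][x]          # top-left
            b = vertices[y][x + 1]      # top-right
            c = vertices[y + 1][x]      # bottom-left
            d = vertices[y + 1][x + 1]  # bottom-right
            # split the quadrilateral plane along the a-d diagonal
            triangles.append((a, b, d))
            triangles.append((a, d, c))
    return triangles
```

A 2×2 depth image produces a single quadrilateral and hence two triangular planes.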
2. The method according to claim 1, wherein in step S1 a Canny edge detection algorithm is used to extract the image edges.
3. The method of claim 1, wherein the morphological transformation is performed by a closing operation.
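A morphological closing is a dilation followed by an erosion, which seals gaps smaller than the structuring element. A minimal pure-Python sketch on a binary grid (the 4-neighbour cross structuring element is an assumption; a real implementation would typically use a library routine such as OpenCV's `morphologyEx`):

```python
def close_binary(img):
    """Morphological closing (dilate then erode) with a 4-neighbour cross."""
    h, w = len(img), len(img[0])

    def neighbours(y, x):
        yield y, x
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                yield ny, nx

    def dilate(a):
        return [[1 if any(a[ny][nx] for ny, nx in neighbours(y, x)) else 0
                 for x in range(w)] for y in range(h)]

    def erode(a):
        return [[1 if all(a[ny][nx] for ny, nx in neighbours(y, x)) else 0
                 for x in range(w)] for y in range(h)]

    return erode(dilate(img))
```

A one-pixel break in an edge is closed, while a wide gap is left open, which is exactly the "close part of the unclosed edges" behaviour step S2 relies on.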
4. The method according to claim 1, wherein in step S3 a flood-fill algorithm is used: starting from the top-left corner and the bottom-right corner of the image, the external background is flood-filled with white.
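The two-seed flood fill of step S3 can be sketched as a breadth-first search. This is an illustrative sketch, not the claimed implementation; treating gray value 0 as fillable background and any non-zero value as a blocking edge is our assumption.

```python
from collections import deque

def flood_fill_background(img, fill=255):
    """Flood-fill the exterior background white, seeded at the top-left
    and bottom-right corners; non-zero (edge) pixels block the fill."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    queue = deque([(0, 0), (h - 1, w - 1)])
    while queue:
        y, x = queue.popleft()
        if 0 <= y < h and 0 <= x < w and out[y][x] == 0:
            out[y][x] = fill
            queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return out
```

Regions enclosed by a closed contour are unreachable from the corner seeds and therefore remain black, which is what later allows the filled image to separate exterior background from interior area.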
5. The method according to claim 4, wherein the step S4 specifically includes:
(1) Inverting the filled image, so that the background outside the outermost contour is converted from white to black and the area contained by each inner contour is converted from black to white;
(2) Marking the edge positions on the depth image according to the edge image and the morphological transformation image information, representing the edges in gray;
(3) Denoting the edge image as edge, the morphological transformation image as shape, the filled image as flood and the depth image as depth, computing the i-th pixel of the depth image as depth[i] = 255 − flood[i] − int(edge[i]/2) + shape[i], where int is a rounding operation and edge[i], shape[i], flood[i] and depth[i] are the gray values of the i-th pixel in the edge image, the morphological transformation image, the filled image and the depth image, respectively;
(4) In the generated depth image, the gray value of the background outside the outermost contour is 0, the gray value of each contour edge is G, and the gray value of the area contained by the inner contours is 255, where 0 < G < 255.
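A literal per-pixel reading of the claimed combination formula can be sketched as below. This is an interpretation, not the claimed implementation: the clamp to the valid gray range [0, 255] is our assumption, as are the example input values.

```python
def depth_pixel(edge_i, shape_i, flood_i):
    """depth[i] = 255 - flood[i] - int(edge[i]/2) + shape[i],
    clamped to [0, 255] (the clamp is an assumption)."""
    d = 255 - flood_i - int(edge_i / 2) + shape_i
    return max(0, min(255, d))
```

With a white (255) exterior in the filled image the background maps to 0, an untouched interior pixel maps to 255, and an edge pixel lands on an intermediate gray G, matching the three-layer coding described in (4).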
6. The method for automatically generating a three-dimensional model based on two-dimensional hand-drawn image features according to claim 2, wherein the step S1 specifically includes:
(1) Performing edge-preserving filtering on the hand-drawn image and saving the processed image;
(2) Setting the high and low thresholds of the Canny edge detection algorithm;
(3) Calculating the gradient strength and direction of each pixel point in the image; marking an edge pixel as a strong edge pixel if its gradient value is higher than the high threshold, marking it as a weak edge pixel if its gradient value is below the high threshold but above the low threshold, and suppressing it if its gradient value is below the low threshold;
(4) Performing threshold detection on all edge pixel points to obtain the edge image.
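The double-threshold classification of step (3) can be sketched directly; the default thresholds below are the values claimed in claim 7 (35 and 70), and operating on a flat list of gradient magnitudes rather than a 2-D image is a simplification.

```python
def classify_edges(gradients, low=35, high=70):
    """Double-threshold step of Canny edge detection:
    strong if gradient > high, weak if low < gradient <= high,
    suppressed otherwise."""
    labels = []
    for g in gradients:
        if g > high:
            labels.append("strong")
        elif g > low:
            labels.append("weak")
        else:
            labels.append("suppressed")
    return labels
```

In the full algorithm, weak pixels are then kept only if connected to a strong pixel (hysteresis), which is the "threshold detection" of step (4).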
7. The method of claim 6, wherein the low threshold is set to 35 and the high threshold is set to 70.
8. The method according to claim 1, wherein, after the triangular planes are obtained in step S5, they are divided into triangular connecting planes, formed by pixel points lying on different layers, and triangular horizontal planes, formed by pixel points lying on the same layer, and the junctions between the connecting planes and the horizontal planes are preprocessed with Gaussian blur so as to smooth them.
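The smoothing effect of the claimed Gaussian-blur preprocessing can be illustrated on a one-dimensional height profile crossing the step between a horizontal plane and a connecting plane. The tiny 3-tap kernel and the replicated-border handling are assumptions for illustration only.

```python
def gaussian_blur_1d(heights, kernel=(0.25, 0.5, 0.25)):
    """Smooth a height profile with a 3-tap Gaussian kernel,
    replicating border samples at the ends."""
    n = len(heights)
    out = []
    for i in range(n):
        left = heights[max(i - 1, 0)]
        right = heights[min(i + 1, n - 1)]
        out.append(kernel[0] * left + kernel[1] * heights[i] + kernel[2] * right)
    return out
```

A hard step between layers becomes a gradual ramp, softening the visible crease where a connecting plane meets a horizontal plane.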
9. The method for automatically generating a three-dimensional model based on two-dimensional hand-drawn image features according to claim 1, wherein the step S6 specifically includes:
(1) Passing a rejection-layer value to the interface of the depth-image-to-model conversion function; with the default rejection-layer value of -1, a triangular plane whose corresponding pixel points have a gray value of 0 is not removed, and the generated model retains the bottom-plane background; with a rejection-layer value of 0, a triangular plane whose corresponding pixel points have a gray value of 0 is removed, so that the bottom plane is stripped from the generated model and only the model information comprising height and surface features is retained;
(2) Checking all triangular planes according to the rejection-layer value and outputting the final three-dimensional model.
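The rejection-layer check of step S6 can be sketched as a filter over the triangle list. This is an illustrative reading, not the claimed implementation; in particular, requiring all three pixel gray values to be 0 before removing a triangle is our interpretation of the claim wording.

```python
def cull_triangles(triangles, grays, reject_layer=-1):
    """Keep or remove each triangular plane based on its pixels' gray values:
    reject_layer == -1 keeps the bottom plane (gray 0) as a background;
    reject_layer == 0 removes triangles whose pixels are all gray 0,
    leaving only height and surface-feature geometry."""
    kept = []
    for tri, g in zip(triangles, grays):
        if reject_layer == 0 and all(v == 0 for v in g):
            continue  # bottom-plane triangle removed
        kept.append(tri)
    return kept
```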
10. The method of claim 9, wherein the interface of the depth-image-to-model conversion function comprises setting entries for the height and width of the generated model.
CN201910839867.9A 2019-09-05 2019-09-05 Method for automatically generating three-dimensional model based on two-dimensional hand-drawn image characteristics Active CN110544300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910839867.9A CN110544300B (en) 2019-09-05 2019-09-05 Method for automatically generating three-dimensional model based on two-dimensional hand-drawn image characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910839867.9A CN110544300B (en) 2019-09-05 2019-09-05 Method for automatically generating three-dimensional model based on two-dimensional hand-drawn image characteristics

Publications (2)

Publication Number Publication Date
CN110544300A true CN110544300A (en) 2019-12-06
CN110544300B CN110544300B (en) 2021-06-29

Family

ID=68712800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910839867.9A Active CN110544300B (en) 2019-09-05 2019-09-05 Method for automatically generating three-dimensional model based on two-dimensional hand-drawn image characteristics

Country Status (1)

Country Link
CN (1) CN110544300B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179423A (en) * 2020-01-02 2020-05-19 国网福建省电力有限公司检修分公司 Three-dimensional infrared image generation method based on two-dimensional infrared image
CN112102155A (en) * 2020-09-09 2020-12-18 青岛黄海学院 System and method for converting planar design into non-planar design
CN112348103A (en) * 2020-11-16 2021-02-09 南开大学 Image block classification method and device and super-resolution reconstruction method and device thereof
CN113267502A (en) * 2021-05-11 2021-08-17 江苏大学 Micro-motor friction plate defect detection system and detection method based on machine vision
CN116168214A (en) * 2023-04-25 2023-05-26 浙江一山智慧医疗研究有限公司 Medical image texture feature extraction method, device and application

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100194872A1 (en) * 2009-01-30 2010-08-05 Microsoft Corporation Body scan
CN104850633A (en) * 2015-05-22 2015-08-19 中山大学 Three-dimensional model retrieval system and method based on parts division of hand-drawn draft
CN105518743A (en) * 2013-03-13 2016-04-20 微软技术许可有限责任公司 Depth image processing
CN106327451A (en) * 2016-11-28 2017-01-11 玉溪师范学院 Image restorative method of ancient animal fossils
CN107784626A (en) * 2017-11-21 2018-03-09 西北农林科技大学 A kind of 3-dimensional digital intaglio rilevato generation method based on single image
CN107909646A (en) * 2017-11-17 2018-04-13 浙江工业大学 A kind of three-dimensional modeling method based on flat image
US10339714B2 (en) * 2017-05-09 2019-07-02 A9.Com, Inc. Markerless image analysis for augmented reality


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HUANG, PEIZHENG et al.: "Stereo Vision and Fourier Transform Profilometry for 3D Measurement", Applications of Digital Image Processing XLI *
LIU, XIAOPING et al.: "Research on automatically generating 3D models from hand-drawn 2D curves", Journal of Engineering Graphics *
ZENG, QIONG: "Perception-oriented layered depth computation for a single image", China Doctoral Dissertations Full-text Database, Information Science and Technology *
MA, LING et al.: "Rapid tree modeling based on simple hand-drawn sketches", Journal of *** Simulation *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179423A (en) * 2020-01-02 2020-05-19 国网福建省电力有限公司检修分公司 Three-dimensional infrared image generation method based on two-dimensional infrared image
CN111179423B (en) * 2020-01-02 2023-04-07 国网福建省电力有限公司检修分公司 Three-dimensional infrared image generation method based on two-dimensional infrared image
CN112102155A (en) * 2020-09-09 2020-12-18 青岛黄海学院 System and method for converting planar design into non-planar design
CN112348103A (en) * 2020-11-16 2021-02-09 南开大学 Image block classification method and device and super-resolution reconstruction method and device thereof
CN112348103B (en) * 2020-11-16 2022-11-11 南开大学 Image block classification method and device and super-resolution reconstruction method and device thereof
CN113267502A (en) * 2021-05-11 2021-08-17 江苏大学 Micro-motor friction plate defect detection system and detection method based on machine vision
CN113267502B (en) * 2021-05-11 2022-07-22 江苏大学 Micro-motor friction plate defect detection system and detection method based on machine vision
CN116168214A (en) * 2023-04-25 2023-05-26 浙江一山智慧医疗研究有限公司 Medical image texture feature extraction method, device and application

Also Published As

Publication number Publication date
CN110544300B (en) 2021-06-29

Similar Documents

Publication Publication Date Title
CN110544300B (en) Method for automatically generating three-dimensional model based on two-dimensional hand-drawn image characteristics
CN107578418B (en) Indoor scene contour detection method fusing color and depth information
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN104835164B (en) A kind of processing method and processing device of binocular camera depth image
CN108352083B (en) 2D image processing for stretching into 3D objects
CN111275696B (en) Medical image processing method, image processing method and device
CN103745468B (en) Significant object detecting method based on graph structure and boundary apriority
CN103699900B (en) Building horizontal vector profile automatic batch extracting method in satellite image
CN107622480B (en) Kinect depth image enhancement method
CN111563908B (en) Image processing method and related device
US10204447B2 (en) 2D image processing for extrusion into 3D objects
CN110268442B (en) Computer-implemented method of detecting a foreign object on a background object in an image, device for detecting a foreign object on a background object in an image, and computer program product
CN111681198A (en) Morphological attribute filtering multimode fusion imaging method, system and medium
CN108399644A (en) A kind of wall images recognition methods and its device
Mücke et al. Surface Reconstruction from Multi-resolution Sample Points.
Telea Feature preserving smoothing of shapes using saliency skeletons
CN109448093B (en) Method and device for generating style image
CN112365516B (en) Virtual and real occlusion processing method in augmented reality
JP2014106713A (en) Program, method, and information processor
CN111179289B (en) Image segmentation method suitable for webpage length graph and width graph
CN111127622A (en) Three-dimensional point cloud outlier rejection method based on image segmentation
CN116012393A (en) Carton point cloud segmentation method, device and processing equipment
CN116309284A (en) Slope top/bottom line extraction system and method
CN113077504B (en) Large scene depth map generation method based on multi-granularity feature matching
CN112749713B (en) Big data image recognition system and method based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant