CN113742810B - Scale identification method and three-dimensional model building system based on copy - Google Patents

Scale identification method and three-dimensional model building system based on copy

Info

Publication number
CN113742810B
CN113742810B (application number CN202010468928.8A)
Authority
CN
China
Prior art keywords
scale
module
dimensional model
window
door
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010468928.8A
Other languages
Chinese (zh)
Other versions
CN113742810A (en)
Inventor
宋璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Qunhe Information Technology Co Ltd
Original Assignee
Hangzhou Qunhe Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Qunhe Information Technology Co Ltd filed Critical Hangzhou Qunhe Information Technology Co Ltd
Priority to CN202010468928.8A priority Critical patent/CN113742810B/en
Publication of CN113742810A publication Critical patent/CN113742810A/en
Application granted granted Critical
Publication of CN113742810B publication Critical patent/CN113742810B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/13Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Structural Engineering (AREA)
  • Civil Engineering (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Optimization (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a scale identification method in which a scale region determination network obtained by deep learning frame-selects digital regions and scale regions, dimension marking pairs are determined by a bidirectional overlap degree, and an optimal unit scale representing the actual size corresponding to a unit pixel is determined from the actual sizes and on-graph sizes of all dimension marking pairs, so that the scale is determined; the method therefore has stronger adaptability, a wider application range and higher identification accuracy. A three-dimensional model building system based on a copy (a photographed reproduction of a floor plan) is also disclosed: driven by the entry of the copy and user interaction, the image is uploaded through the image uploading module, an accurate scale is determined through the scale determining module, and a house-type 3D model usable for home decoration design is finally produced through the wall door and window generating module and the three-dimensional model generating module.

Description

Scale identification method and three-dimensional model building system based on copy
Technical Field
The invention relates to the field of image processing, in particular to a scale identification method and a three-dimensional model building system based on a copy.
Background
A scale is an essential graphic technical language in the drawing field, used to indicate the correspondence between an object on a drawing and its actual physical size. A typical scale annotation consists of two parts, a number and a scale mark. The number is an Arabic numeral giving the actual physical dimension of the corresponding scale mark, usually in millimeters. The scale mark is generally an I-shaped or straight line segment: the segment parallel to the number indicates the direction of the scale and the corresponding on-drawing size, and short terminating segments perpendicular to it at both ends indicate the start and end positions of the scale. A drawing usually carries multiple scale annotations to identify the dimensions of different locations.
At present, digitizing the objects described by drawings mainly depends on manually aligning the scales and entering the corresponding dimensions, which on the one hand introduces manual alignment deviations and on the other hand does not help improve production efficiency.
The patent application with application publication number CN 110414477A discloses an image scale detection method and device, comprising the following steps: (1) acquiring an image to be detected, wherein the image to be detected comprises a target size labeling diagram, and the target size labeling diagram comprises a size text sub-diagram and a corresponding size boundary sub-diagram; (2) recognizing the size text in the target size labeling diagram to obtain the actual size; (3) detecting the position of a size boundary line in the target size labeling diagram, and determining the on-graph size according to that position; (4) determining the scale of the image to be detected from the actual size and the on-graph size. That application alleviates manual labeling deviation to a certain extent and improves generation efficiency, but it still has several obvious deficiencies, specifically:
Deficiency one: when the target size labeling diagram of the image is determined in step (1), only labeling diagrams with a standard scale consisting of numbers, dimension lines and dimension boundary lines, marked on the outside of the whole image, can be determined. For labeling diagrams without dimension lines and dimension boundary lines that are marked on the inside of the image, as shown in fig. 3, the image scale detection method cannot process them, i.e. it has strong application limitations.
Deficiency two: in step (3), when the boundary line position is determined, the pixel sums of the sequences perpendicular to the labeling direction are detected outward from both ends of the size text sub-image, and when the difference between the pixel sums of two adjacent sequences exceeds a preset threshold, the coordinate of the corresponding sequence is taken as the dimension boundary line position. This approach can only handle the case where the digital label lies inside the dimension boundary lines; when the digital label is large and lies outside them, detecting pixels sequence by sequence from both ends of the size text sub-image either fails to find the dimension boundary line at all or takes the dimension boundary line of another scale as that of the current scale. In other words, the image scale detection method misses detections or produces wrong results.
Deficiency three: when the drawing is slightly tilted or rotated, the dimension boundary line cannot be determined by the approach of step (3), and the deviation of the scale together with photo noise interferes with recognition of a reproduced (photographed) drawing, so the scale cannot be identified accurately, i.e. the method again has strong application limitations.
Deficiency four: in step (4), when the scale is determined from the actual size and the on-graph size, edge texts are simply discarded and the middle text is selected for output, so a correctly recognized edge result is easily discarded while an incorrectly recognized middle text is presented as the final result, leading to a wrong recognition result.
Disclosure of Invention
In view of the above, one object of the present invention is to provide a scale identification method, so as to solve the problems of inaccurate and non-universal scale identification in the prior art.
Another object of the present invention is to provide a three-dimensional model building system based on a copy, so as to solve the problem that labeled dimensions do not correspond to real physical dimensions when a three-dimensional model is built directly from the copy.
In order to achieve the above object, a first aspect of the present invention provides a scale identification method, including the following steps:
acquiring an image to be detected, wherein the image to be detected comprises dimension marking diagrams, and a dimension marking diagram comprises a digital diagram or a scale diagram;
using a pre-trained scale region determination network to identify the digital diagrams and scale diagrams in the image to be detected and frame-select digital regions and scale regions;
calculating a first overlap degree of a digital region relative to a scale region and a second overlap degree of the scale region relative to the digital region, and screening the group consisting of the digital region and scale region with the largest sum of the first and second overlap degrees as a dimension marking pair;
performing text recognition on the digital region of the dimension marking pair with a text recognition network to obtain the actual size, and taking the extent of the frame of the scale region of the dimension marking pair along the digit writing direction as the on-graph size;
determining, from the actual sizes and on-graph sizes of all dimension marking pairs, an optimal unit scale representing the actual size corresponding to a unit pixel, and determining the scale according to the optimal unit scale.
In one embodiment, determining, from the actual sizes and on-graph sizes of all dimension marking pairs, the optimal unit scale representing the actual size corresponding to a unit pixel comprises:
and fitting the actual sizes of all the size marking pairs and the sizes on the graph by adopting a least square method, wherein the slope of a fitting straight line is an optimal unit scale.
In another embodiment, determining, from the actual sizes and on-graph sizes of all dimension marking pairs, the optimal unit scale representing the actual size corresponding to a unit pixel comprises:
calculating the unit scale of each dimension marking pair from its actual size and on-graph size;
taking statistics of the unit scales of all dimension marking pairs and using the unit scale corresponding to the median as the optimal unit scale.
In another embodiment, screening the group consisting of the digital region and scale region with the largest sum of the first and second overlap degrees as the dimension marking pair comprises:
first, screening the first and second overlap degrees with an overlap threshold and retaining the first and second overlap degrees greater than the threshold;
then, screening the group consisting of the digital region and scale region with the largest sum of the first and second overlap degrees as the dimension marking pair.
In order to achieve the other purpose, the embodiment of the invention provides a three-dimensional model building system based on a copy graph, which comprises an image uploading module, a scale determining module, a wall door and window generating module and a three-dimensional model generating module;
the image uploading module is used for receiving the copy image uploaded by the user, storing the copy image in the cloud and displaying the image;
the scale determining module is used for performing scale identification on the copy, including automatic identification with the above scale identification method and scale setting by the user, so as to determine the final scale;
the wall door and window generating module is used for automatically generating walls and doors and windows according to the determined final scale or manually drawing the walls and the doors and windows so as to determine the final walls and the doors and windows;
the three-dimensional model generation module is used for generating a three-dimensional model according to the determined final wall and door and window.
In one embodiment, the scale determining module includes a scale automatic generating module, a scale confirming module, a scale editing module and a scale recording module;
the automatic scale generating module is arranged at the cloud end and is used for carrying out scale identification on the copy image by adopting the scale identification method when a user selects the automatic scale identification, and outputting the identification scale to the scale recording module;
the scale confirming module is arranged at the client and is used to confirm whether the user is satisfied with the received identification scale and, when satisfied, to output the confirmed identification scale to the scale recording module;
the scale editing module is arranged at the client and is used, when the user chooses to input the scale manually, to receive an editing scale edited by the user and output it to the scale recording module, and, when the user is not satisfied with the identification scale, to receive a modification scale edited by the user and output it to the scale recording module;
the scale recording module is arranged at the cloud end and used for recording a final scale, and the final scale comprises an identification scale, an editing scale or a modification scale.
In one embodiment, the wall door and window generating module comprises an automatic drawing module, a manual drawing module and a wall recording module,
the automatic drawing module is arranged at the cloud end, and is used for automatically drawing the wall body and the door and window according to the final scale and the image when a user selects automatic drawing, and sending the automatically drawn wall body and the door and window to the wall body recording module;
the manual drawing module is arranged at the client, and is used for manually drawing the wall body and the door and window according to the final scale and the image when a user selects manual drawing, and transmitting the manually drawn wall body and the manually drawn door and window to the wall body recording module;
the wall recording module is arranged at the cloud and used for recording the final wall and door and window drawing results, wherein the final wall and door and window drawing results comprise manual drawing of wall and door and window results and automatic drawing of wall and door and window results.
In one embodiment, the three-dimensional model generation module comprises an automatic generation module, a model confirmation module and a model editing module,
the automatic generation module is arranged at the cloud end and is used for generating a three-dimensional model according to the final wall and door and window drawing result and transmitting the three-dimensional model to the model confirmation module;
the model confirmation module is arranged at the client and is used to confirm whether the user is satisfied with the received three-dimensional model and to output the three-dimensional model when the user is satisfied;
the model editing module is arranged at the client, and when the user is not satisfied with the three-dimensional model, the wall body and/or the door and window in the three-dimensional model are redrawn and modified, and the modification result is transmitted to the automatic generation module or the wall body recording module.
In another embodiment, when the user is not satisfied with the three-dimensional model, the two-dimensional drawing corresponding to the three-dimensional model is provided to the manual drawing module of the wall door and window generating module so that the walls, doors and windows can be modified again.
Compared with the prior art, the invention has the following beneficial effects:
according to the scale identification method provided by the embodiment of the invention, the digital area and the scale area are selected by adopting the scale area determination network frame determined by deep learning, the size marking pairs are determined by the bidirectional overlapping degree, and the optimal unit scale representing the corresponding actual size of the unit pixel is determined by integrating the actual size of all the size marking pairs and the size on the graph, so that the scale is determined, the applicability of the scale identification method is higher, the application range is wider, and the identification accuracy is higher.
According to the three-dimensional model building system based on the copy, through the entering and user interaction of the copy, the picture is uploaded through the picture uploading module, the accurate scale is determined through the scale determining module, and then the house type 3D model which can be used for home decoration design is finally produced through the wall door and window generating module and the three-dimensional model generating module.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an embodiment of a scale identification method provided by the present invention;
FIG. 2 is a diagram of an example of an image to be measured according to the present invention;
FIG. 3 is another exemplary illustration of an image to be measured provided by the present invention;
FIG. 4 is a schematic diagram of a numeric area and a scale area in a training sample provided by the present invention;
FIG. 5 is a scale visual presentation diagram determined by a scale identification method provided by the invention;
FIG. 6 is a schematic diagram of a three-dimensional modeling system based on a copy provided by the present invention;
FIG. 7 is a schematic diagram of an embodiment of a scale determination module according to the present invention;
FIG. 8 is a schematic structural diagram of an embodiment of a wall door and window generating module according to the present invention;
FIG. 9 is a schematic structural diagram of an embodiment of a three-dimensional model generating module according to the present invention;
FIG. 10 is a flow chart of the three-dimensional model building system based on the copy graph provided by the invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the detailed description is presented by way of example only and is not intended to limit the scope of the invention.
Example 1
In order to solve the problems of inaccurate and non-universal scale identification, an embodiment of the invention provides a scale identification method. Fig. 1 is a flowchart of an embodiment of the scale identification method provided by the present invention. As shown in fig. 1, the scale identification method comprises the following steps:
s101, acquiring an image to be detected, wherein the image to be detected comprises a dimension marking diagram, and the dimension marking diagram comprises a digital diagram or a scale diagram.
The image to be detected is an image of a scale to be identified, and the image to be detected comprises a target image and a size marking diagram surrounding the inside and the outside of the target image. As shown in fig. 2, the size-labeled graphic is labeled on the periphery of the target image, and specifically includes a digital graphic SI and a scale graphic SII. And as shown in fig. 3, the size-marked image is marked in the target image, and specifically only includes the digital image SIII. There are of course also images to be measured, including the digital representation SI and the scale representation SII as shown in fig. 2, and also including the digital representation SIII as shown in fig. 3.
S102, using a pre-trained scale region determination network to identify the digital diagrams and scale diagrams in the image to be detected and frame-select digital regions and scale regions.
In this embodiment, Faster R-CNN, Mask R-CNN, SPP-Net, YOLOv3, SSD or M2Det is used as the base network; the base network is initialized with the weights of a classification network converged on the ImageNet dataset, and its parameters are then fine-tuned with the constructed training samples to obtain the pre-trained scale region determination network. Of course, networks such as R-CNN, Fast R-CNN, Light-Head R-CNN, Cascade R-CNN, YOLO, YOLOv2, YOLT, DSSD, FSSD, ESSD, MDSSD, Pelee, Fire SSD, R-FCN, FPN, DSOD, RetinaNet, MegDet, RefineNet, DetNet, SSOD, CornerNet, ZSD (Zero-Shot Object Detection) and OSD (One-Shot Object Detection) can also be used as the base network.
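A minimal sketch of this initialization-and-fine-tuning setup, assuming torchvision's Faster R-CNN as the base network (one of the candidates listed above); the three classes, optimizer settings and builder arguments are illustrative assumptions, not the patent's actual training code:

```python
# Sketch: torchvision Faster R-CNN with an ImageNet-initialized backbone,
# fine-tuned for three classes (background, digital region, scale region).
import torch
import torchvision

def build_scale_region_network(num_classes: int = 3):
    # Backbone weights come from an ImageNet classification network;
    # the detection head is trained from scratch during fine-tuning.
    return torchvision.models.detection.fasterrcnn_resnet50_fpn(
        weights=None,
        weights_backbone="IMAGENET1K_V1",
        num_classes=num_classes,
    )

model = build_scale_region_network()
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.005, momentum=0.9, weight_decay=0.0005,
)
# Fine-tuning then iterates over the constructed training samples, whose
# targets are the frame-selected digital regions and scale regions.
```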
When training samples are constructed, each dimension marking diagram is labeled with a vector [label, x, y, w, h], where label denotes the diagram category, taking the values 0 and 1 for the digital diagram and the scale diagram respectively; x, y, w and h give the position of the rectangular box corresponding to the labeled diagram in normalized form, specifically the coordinates (x, y) of the box center and the box width w and height h.
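For illustration, a small helper (hypothetical, not part of the patent) that converts a pixel-space rectangle into the normalized [label, x, y, w, h] vector described above:

```python
# Hypothetical helper: pixel rectangle -> normalized [label, x, y, w, h] vector.
def encode_annotation(label, x_min, y_min, x_max, y_max, img_w, img_h):
    x_c = (x_min + x_max) / 2.0 / img_w   # normalized center x
    y_c = (y_min + y_max) / 2.0 / img_h   # normalized center y
    w = (x_max - x_min) / img_w           # normalized width
    h = (y_max - y_min) / img_h           # normalized height
    return [label, x_c, y_c, w, h]

# A digital region (label 0) occupying pixels (120, 40)-(220, 70) of a 1000x800 image:
print(encode_annotation(0, 120, 40, 220, 70, 1000, 800))
# [0, 0.17, 0.06875, 0.1, 0.0375]
```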
As shown in fig. 4, the digital regions and scale regions of each training sample are generally selected with rectangular boxes; for example, a red rectangular box can be used to select the digital text as the digital region and a blue rectangular box as the scale region. Note that the box selecting the digits is the minimum circumscribed rectangle containing the digital text, and the two ends of the scale-region box must be strictly aligned with the two ends of the scale line.
For dimension marking diagrams labeled inside the target image, although no scale diagram is included, a rectangular box can still be used to select a scale region when constructing the training sample; the two ends of that box must be aligned, along the writing direction of the digits, with the two ends of the physical entity the digits refer to. In a house-type (floor-plan) drawing, that entity may be a wall, a door, a window and the like.
S103, calculating the first overlapping degree of the digital region relative to the scale region and the second overlapping degree of the scale region relative to the digital region, and screening a group of digital region and scale region with the largest sum of the first overlapping degree and the second overlapping degree as a dimension marking pair.
Multiple digital regions and scale regions are frame-selected by the scale region determination network. When the dimension marking diagrams are densely packed, the matching relation between digital regions and scale regions is unclear, and calculating the scale directly from mismatched regions would give wrong results. To solve this problem, the embodiment of the present invention defines the concept of a dimension marking pair: a digital region and a scale region belonging to the same dimension marking diagram form a dimension marking pair.
In an embodiment, the degree of overlap is used to determine the matching relation between digital regions and scale regions. Specifically, the overlap degree IOU of a digital region and a scale region can be calculated and the dimension marking pair determined by overlap-threshold screening. Although computing the overlap in a single direction can establish a matching relation, such matching still makes occasional errors; therefore the present invention screens dimension marking pairs with a bidirectional matching overlap.
In one embodiment, first, a first overlap degree IOU1 of the digital region relative to the scale region and a second overlap degree IOU2 of the scale region relative to the digital region are calculated; then, the digital region and scale region with the largest sum of IOU1 and IOU2 are screened as the dimension marking pair. Bidirectional overlap screening improves matching precision and yields more accurate dimension marking pairs.
In another embodiment, to further improve matching speed and screening accuracy, IOU1 and IOU2 are pre-screened before the pairs are selected, filtering out candidates that deviate too much. That is, screening the digital region and scale region with the largest sum of the first and second overlap degrees as the dimension marking pair comprises:
first, screening IOU1 and IOU2 with an overlap threshold and retaining only the values greater than the threshold;
then, screening the group consisting of the digital region and scale region with the largest sum of the first and second overlap degrees as the dimension marking pair.
In an embodiment, the overlap threshold can be set according to how dense the dimension marking diagrams are, and is generally not lower than 0.9. Because the overlap threshold is set above 0.9 and a bidirectional overlap judgment is used, dimension marking pairs belonging to the same dimension marking diagram can be determined accurately.
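A minimal sketch of this bidirectional matching with threshold pre-screening; the box format and the choice of normalizing each directional overlap by the first box's own area are assumptions about how IOU1 and IOU2 are defined:

```python
# Sketch of bidirectional overlap matching with threshold pre-screening.
def directional_overlap(box_a, box_b):
    """Intersection area divided by the area of box_a; boxes are (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    return inter / area_a if area_a > 0 else 0.0

def match_dimension_pairs(digit_boxes, scale_boxes, threshold=0.9):
    pairs = []
    for d in digit_boxes:
        best_scale, best_sum = None, 0.0
        for s in scale_boxes:
            iou1 = directional_overlap(d, s)  # digital region relative to scale region
            iou2 = directional_overlap(s, d)  # scale region relative to digital region
            if iou1 <= threshold or iou2 <= threshold:
                continue                      # pre-screening by the overlap threshold
            if iou1 + iou2 > best_sum:
                best_scale, best_sum = s, iou1 + iou2
        if best_scale is not None:
            pairs.append((d, best_scale))     # one dimension marking pair
    return pairs
```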
S104, performing text recognition on the digital region of the dimension marking pair with a text recognition network to obtain the actual size, and taking the extent of the frame of the scale region of the pair along the digit writing direction as the on-graph size.
After the dimension marking pairs are determined, any text recognition network can be used to recognize the text in the digital region of a pair and obtain the actual size. Meanwhile, the on-graph size corresponding to that actual size must also be calculated: specifically, the pixel width of the scale-region frame along the digit writing direction is computed and used as the on-graph size.
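A small illustrative helper (hypothetical names) for the on-graph size: the pixel extent of the scale-region box along the digit writing direction, i.e. the box width for horizontal text and the box height for vertical text:

```python
# Illustrative helper: on-graph size of a scale region (x1, y1, x2, y2),
# measured along the digit writing direction.
def on_graph_size(scale_box, horizontal_text=True):
    x1, y1, x2, y2 = scale_box
    return (x2 - x1) if horizontal_text else (y2 - y1)
```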
S105, determining an optimal unit scale representing the actual size corresponding to the unit pixel by combining the actual size of all the dimension marking pairs and the dimension on the graph, and determining a scale according to the optimal unit scale.
For a dimension marking pair, dividing its actual size by its on-graph size gives the actual size represented by a unit pixel on the drawing, which is called the unit scale.
Normally, all unit scales from the same graph should be consistent, but there are often some subtle differences in unit scales calculated for each dimension-marked graph due to misidentification of text numbers, identification deviation of scale positions, inherent drawing errors, image distortion errors caused by shooting, and the like. To solve this problem, the present invention determines an optimal unit scale by counting a plurality of unit scales.
In one embodiment, determining, from the actual sizes and on-graph sizes of all dimension marking pairs, the optimal unit scale representing the actual size corresponding to a unit pixel comprises:
fitting the actual sizes and on-graph sizes of all dimension marking pairs by the least squares method, the slope of the fitted line being the optimal unit scale.
Least-squares fitting effectively suppresses outliers and takes the actual sizes and on-graph sizes of all dimension marking pairs fully into account, so the optimal unit scale it determines, and the scale derived from it, are more accurate.
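A sketch of this least-squares variant using NumPy; fitting the line through the origin is an assumption, since the text only states that the slope of the fitted line is taken as the optimal unit scale:

```python
# Least-squares sketch: slope of the fitted line (actual size vs. on-graph size)
# as the optimal unit scale, here constrained through the origin.
import numpy as np

def optimal_unit_scale_lsq(on_graph_sizes, actual_sizes):
    p = np.asarray(on_graph_sizes, dtype=float).reshape(-1, 1)  # pixels
    a = np.asarray(actual_sizes, dtype=float)                   # millimeters
    slope, *_ = np.linalg.lstsq(p, a, rcond=None)               # fit a ~ slope * p
    return float(slope[0])

# Three pairs whose frame widths 150, 300 and 452 px annotate 1500, 3000 and 4500 mm:
print(optimal_unit_scale_lsq([150, 300, 452], [1500, 3000, 4500]))  # about 9.97 mm per pixel
```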
In another embodiment, determining, from the actual sizes and on-graph sizes of all dimension marking pairs, the optimal unit scale representing the actual size corresponding to a unit pixel comprises:
calculating the unit scale of each dimension marking pair from its actual size and on-graph size;
taking statistics of the unit scales of all dimension marking pairs and using the unit scale corresponding to the median as the optimal unit scale.
Determining the optimal unit scale as the median of the unit scales of all dimension marking pairs likewise takes the actual sizes and on-graph sizes of all pairs fully into account, so the optimal unit scale, and the scale derived from it, are more accurate.
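A sketch of the median variant, shown on the same illustrative data as above:

```python
# Median sketch: one unit scale per dimension marking pair, median as the optimum.
from statistics import median

def optimal_unit_scale_median(on_graph_sizes, actual_sizes):
    unit_scales = [a / p for a, p in zip(actual_sizes, on_graph_sizes) if p > 0]
    return median(unit_scales)

# Same illustrative data as above; the slightly off third pair (4500 mm / 452 px)
# no longer pulls the result away from 10 mm per pixel:
print(optimal_unit_scale_median([150, 300, 452], [1500, 3000, 4500]))  # 10.0
```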
After the optimal unit scale is determined, it is taken as the final scale and displayed visually on the dimension marking diagram whose unit scale is closest to the optimal one, as shown in fig. 5.
The scale identification method is applicable not only to house-type (floor-plan) drawings as shown in figs. 2-4 but also to CAD part drawings. Dimension marking diagrams inside and outside a part can likewise be correctly identified and the scale determined, so that accurate 3D modeling can be carried out with all the dimension information of the part's three views, relieving the heavy labor of manual modeling.
In addition, owners and decoration designers usually only have the paper copy of a house-type drawing, and a photograph taken directly of the paper drawing is inevitably deformed, as shown in fig. 6, which affects subsequent digitization. The scale identification method of the invention trains on a variety of scales by deep learning and determines the scale from all unit scales together, so it adapts well to this situation and can still identify the scale accurately.
In the scale identification method provided by the embodiment of the invention, digital regions and scale regions are frame-selected by a scale region determination network obtained by deep learning, dimension marking pairs are determined by a bidirectional overlap degree, and an optimal unit scale representing the actual size corresponding to a unit pixel is determined from the actual sizes and on-graph sizes of all dimension marking pairs, so that the scale is determined; the method therefore has stronger adaptability, a wider application range and higher identification accuracy.
Example 2
Fig. 6 is a schematic structural diagram of an embodiment of a three-dimensional modeling system based on a copy provided by the present invention. As shown in fig. 6, the three-dimensional model building system 600 based on the copy graph includes an image uploading module 601, a scale determining module 602, a wall door and window generating module 603 and a three-dimensional model generating module 604.
The image uploading module 601 is configured to receive a copy uploaded by a user, store the copy in a cloud and display an image.
The scale determining module 602 is configured to perform scale recognition on the copy, and includes performing automatic recognition and setting the scale by the user by using the scale recognition method provided in embodiment 1, so as to determine a final scale.
The wall door and window generating module 603 is used for automatically generating the wall and the door and window according to the determined final scale or manually drawing the wall and the door and window to determine the final wall and the door and window.
The three-dimensional model generation module 604 is configured to generate a three-dimensional model according to the determined final wall and door and window.
The image uploading module 601 is deployed at the client; the house-type copy is uploaded to the cloud through the image uploading module of each client, and the process is described below taking one client as an example. The client sends the binary data of the copy image file to the cloud; after receiving it, the cloud generates an image ID, stores the binary stream as an image file on a cloud storage medium, and then returns to the client the URL (Uniform Resource Locator) address for accessing the image. The client fetches the picture resource from the URL and displays it to the user.
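A hypothetical sketch (Flask, purely illustrative; the patent does not specify the cloud service's implementation) of this exchange: the client posts the raw image bytes, the cloud assigns an image ID, persists the file and returns the URL from which the client fetches and displays the picture. Endpoint paths, storage location and file format are assumptions:

```python
# Purely illustrative upload/fetch endpoints for the copy image.
import uuid
from pathlib import Path
from flask import Flask, jsonify, request

app = Flask(__name__)
STORAGE = Path("/tmp/copy-images")
STORAGE.mkdir(parents=True, exist_ok=True)

@app.post("/images")
def upload_copy_image():
    image_id = uuid.uuid4().hex                        # cloud-generated image ID
    (STORAGE / f"{image_id}.jpg").write_bytes(request.get_data())
    return jsonify({"id": image_id, "url": f"/images/{image_id}"})

@app.get("/images/<image_id>")
def fetch_copy_image(image_id):
    data = (STORAGE / f"{image_id}.jpg").read_bytes()
    return data, 200, {"Content-Type": "image/jpeg"}
```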
In one embodiment, as shown in fig. 7, the scale determination module 602 includes a scale automatic generation module 701, a scale confirmation module 702, and a scale editing module 703, and a scale recording module 704.
The scale automatic generation module 701 is located at the cloud end, and is configured to perform scale identification on the copy by using the scale identification method provided in embodiment 1 when the user selects scale automatic identification, and output the identified scale to the scale recording module.
The scale confirmation module 702 is disposed at the client and is used to confirm whether the user is satisfied with the received identification scale; when satisfied, the confirmed identification scale is output to the scale recording module.
The scale editing module 703 is disposed at the client and is used, when the user chooses to input the scale manually, to receive an editing scale edited by the user and output it to the scale recording module; when the user is not satisfied with the identification scale, a modification scale edited by the user is received and output to the scale recording module.
The scale recording module 704 is disposed at the cloud end, and is configured to record a final scale, where the final scale includes an identification scale, an editing scale, or a modification scale.
The steps and effects of the scale identification method adopted in the scale automatic generation module 701 are the same as those of the scale identification method provided in embodiment 1, and will not be described here again.
The scale determining module 602 automatically generates an identification scale through the automatic scale generation module 701, lets the user confirm it through the scale confirmation module 702, and obtains the editing scale and modification scale entered by the user through the scale editing module 703. In this way the user participates in scale confirmation through human-computer interaction, which improves the accuracy of the scale, i.e. provides a true ratio between on-graph size and real size and a stable dimensional basis for generating the three-dimensional model.
In one embodiment, as shown in fig. 8, the wall door and window generation module 603 includes an automatic drawing module 801, a manual drawing module 802, and a wall recording module 803.
The automatic drawing module 801 is arranged at the cloud end, and is used for automatically drawing the wall body and the door and window according to the final scale and the image when a user selects automatic drawing, and sending the automatically drawn wall body and the door and window to the wall body recording module;
the manual drawing module 802 is disposed at the client, and is configured to manually draw the wall and the door and window according to the final scale and the image when the user selects manual drawing, and send the manually drawn wall and door and window to the wall recording module.
The wall recording module 803 is arranged at the cloud end and used for recording the final wall and door and window drawing results, and the final wall and door and window drawing results comprise manual drawing of wall and door and window results and automatic drawing of wall and door and window results.
The wall door and window generating module 603 automatically draws the wall door and window through the automatic drawing module 801 and manually draws the wall door and window through the manual drawing module 802. Therefore, the user can participate in the generation process of the wall door and window in a man-machine interaction mode, and the generated wall door and window meets the user's wish.
In one embodiment, as shown in FIG. 9, the three-dimensional model generation module 604 includes an automatic generation module 901, a model validation module 902, and a model editing module 903.
The automatic generation module 901 is arranged at the cloud end and is used for generating a three-dimensional model according to the final wall and door and window drawing result and transmitting the three-dimensional model to the model confirmation module;
the model confirmation module 902 is arranged at the client and is used to confirm whether the user is satisfied with the received three-dimensional model and to output the three-dimensional model when the user is satisfied;
the model editing module 903 is arranged at the client, and when the user is not satisfied with the three-dimensional model, redraws and modifies the wall and/or door and window in the three-dimensional model, and transmits the modification result to the automatic generation module or the wall recording module.
The three-dimensional model generation module 604 generates a three-dimensional model from the wall, door and window recognition results through the automatic generation module 901, lets the user confirm satisfaction through the model confirmation module 902, and lets the user edit unsatisfactory parts through the model editing module 903. In this way the user participates in three-dimensional model generation through human-computer interaction, and the generated three-dimensional model matches the user's intent.
In another embodiment, when the user is not satisfied with the three-dimensional model, the two-dimensional drawing corresponding to the three-dimensional model is provided to the manual drawing module of the wall door and window generating module so that the walls, doors and windows can be modified again.
FIG. 10 is a flow chart of the three-dimensional model building system based on the copy graph provided by the invention. As shown in fig. 10, the process of creating a three-dimensional model by using the three-dimensional model creation system based on the copy is as follows:
uploading the copy to the cloud through the picture uploading module, and storing the copy by the cloud and then sending the copy to the client for display.
After the client successfully displays the house-type copy, the user is asked whether to identify the scale automatically. If the user chooses automatic identification, this choice is sent to the cloud, which intelligently assists with automatic scale recognition using the scale identification method provided in embodiment 1; once the recognition result is obtained, the scale and the corresponding size are displayed on the copy for the user to check, confirm or modify. If the user does not use automatic identification, a default scale and an input box are displayed, and the user aligns the scale and enters the corresponding actual size. After the user confirms the scale information, it is recorded in the cloud and scale confirmation is complete.
Modifying the automatically identified scale is similar to setting it manually: the user can drag the scale with the mouse to align it with a scale mark or a wall on the copy, adjust the scale's length with the hot zones at its two ends, zoom the copy with the mouse wheel, and fill in, below the scale, the actual size corresponding to the scale's distance on the copy. After the scale information is confirmed, the copy is adaptively scaled for display.
Whether the scale was identified automatically or set manually, once the scale information is determined the user is asked whether to identify the walls, doors and windows automatically or to draw them manually. If the user chooses automatic identification, this choice is sent to the cloud, which intelligently assists with automatic recognition of walls, doors, windows and rooms by the automatic wall, door and window drawing method; the result is returned to the front end and displayed on top of the copy for the user to check, confirm or modify. If the user chooses manual drawing, the copy base map is displayed in plan view, the user selects the wall-drawing function to draw all the walls of the rooms on the 2D plane, then adds components such as doors and windows on the corresponding walls and sets the room types.
Modifying the wall, door and window recognition result is similar to manual drawing: the user can choose to display the copy to check whether the positions of walls, doors and windows are correct, or hide the copy to check whether the floor plan is complete and the rooms are closed. Parameters of a selected wall, door or window element can also be modified, such as wall thickness and door or window length, width, height and the like.
The automatic drawing method for the wall door and window by the cloud comprises the following steps:
firstly, the wall body and the room on the copy are divided by semantic division, then the size and the position of the door and window are identified by a target detection method and are placed on the corresponding wall body, so that the automatic drawing of the wall body and the door and window is realized.
In an embodiment, the main furniture types and positions in the copy can also be identified by the object detection method and displayed as plan-view layout legends. Further, a default floor material can be set for each room according to its type, for example beige tiles for the living and dining room, white tiles for the kitchen and bathroom, and natural wood flooring for the bedrooms. If the user chooses the manual drawing mode, all floors are uniformly initialized to natural wood flooring, which the user can modify.
After the walls, doors, windows and rooms have been drawn, 3D modeling is carried out according to the recognized parameters of the doors, windows and walls; parameters that cannot be recognized are initialized with a set of conventional experience values, for example the following (collected in the configuration sketch after the list):
the thickness of the outer wall is 200mm, and the thickness of the inner wall is 120mm
Common window is 900mm from the ground, the height is 1200mm, and the thickness is 60mm
The floating window is 450mm away from the ground and 1200mm in height
The length of the common door is 800mm, the height is 2000mm, the thickness is 220mm, and the distance from the ground is 0mm
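The same experience values expressed as a configuration mapping (names are illustrative), convenient for initializing any parameter the recognizer could not extract; all values are in millimeters:

```python
# Illustrative configuration of the default (experience) values above, in mm.
DEFAULT_DIMENSIONS = {
    "exterior_wall": {"thickness": 200},
    "interior_wall": {"thickness": 120},
    "window":        {"sill_height": 900, "height": 1200, "thickness": 60},
    "bay_window":    {"sill_height": 450, "height": 1200},
    "door":          {"length": 800, "height": 2000, "thickness": 220, "sill_height": 0},
}

def fill_missing(recognized: dict, kind: str) -> dict:
    """Overlay recognized parameters on the defaults for the given element kind."""
    return {**DEFAULT_DIMENSIONS.get(kind, {}), **recognized}
```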
The terminal or the cloud uses ThreeJS, a 3D rendering graphics framework implemented on WebGL, to draw and display the 3D model of the walls, doors and windows. The user can rotate and zoom with 3 degrees of freedom, view the 3D view of a specific room and so on to confirm that the 3D model is correct; if there is a deviation, the user can directly select the corresponding element and change its parameters, or switch to 2D mode to modify it.
Once the user has obtained a satisfactory 3D model by continuously and interactively adjusting the hard-decoration parameters, the conversion from the copy to the 3D model is complete, and on this basis the user can design a decoration scheme with the house-type 3D model and 3D models of furniture and ornaments.
The three-dimensional model building system based on the copy is based on the entering and user interaction behavior of the copy, the picture is uploaded through the picture uploading module, the accurate scale is determined through the scale determining module, and finally the house type 3D model which can be used for house decoration design is produced through the wall door and window generating module and the three-dimensional model generating module.
The foregoing describes preferred embodiments of the invention and their advantages in detail. It should be understood that the foregoing is merely illustrative of the presently preferred embodiments of the invention, and that all changes, additions, substitutions and equivalents made within its spirit and scope are intended to be included within the scope of the invention.

Claims (8)

1. The scale identification method is characterized by comprising the following steps of:
acquiring an image to be detected, wherein the image to be detected comprises a dimension marking diagram, and the dimension marking diagram comprises a digital diagram and a scale diagram;
using a pre-trained scale region determination network to identify the digital diagrams and scale diagrams in the image to be detected and frame-select digital regions and scale regions;
calculating a first overlap degree of a digital region relative to a scale region and a second overlap degree of the scale region relative to the digital region;
screening the group consisting of the digital region and scale region with the largest sum of the first overlap degree and the second overlap degree as a dimension marking pair, comprising: first, screening the first overlap degree and the second overlap degree with an overlap threshold and retaining the first and second overlap degrees greater than the threshold; then, screening the group consisting of the digital region and scale region with the largest sum of the first and second overlap degrees as the dimension marking pair;
performing text recognition on the digital region of the dimension marking pair with a text recognition network to obtain the actual size, and taking the extent of the frame of the scale region of the dimension marking pair along the digit writing direction as the on-graph size;
and determining an optimal unit scale representing the actual size corresponding to the unit pixel by combining the actual size of all the dimension marking pairs and the dimension on the graph, and determining a scale according to the optimal unit scale.
2. The scale identification method according to claim 1, wherein determining, from the actual sizes and on-graph sizes of all the dimension marking pairs, the optimal unit scale representing the actual size corresponding to a unit pixel comprises:
and fitting the actual sizes of all the size marking pairs and the sizes on the graph by adopting a least square method, wherein the slope of a fitting straight line is an optimal unit scale.
3. The scale identification method according to claim 1, wherein determining, from the actual sizes and on-graph sizes of all the dimension marking pairs, the optimal unit scale representing the actual size corresponding to a unit pixel comprises:
calculating a unit scale belonging to each dimension marking pair according to the actual dimension of each dimension marking pair and the dimension on the drawing;
and taking statistics of the unit scales of all the dimension marking pairs and using the unit scale corresponding to the median as the optimal unit scale.
4. The three-dimensional model building system based on the copy is characterized by comprising an image uploading module, a scale determining module, a wall door and window generating module and a three-dimensional model generating module;
the image uploading module is used for receiving the copy image uploaded by the user, storing the copy image in the cloud and displaying the image;
the scale determining module is used for performing scale identification on the copy, including automatic identification using the scale identification method according to any one of claims 1-3 and scale setting by the user, so as to determine a final scale;
the wall door and window generating module is used for automatically generating walls and doors and windows according to the determined final scale or manually drawing the walls and the doors and windows so as to determine the final walls and the doors and windows;
the three-dimensional model generation module is used for generating a three-dimensional model according to the determined final wall and door and window.
5. The three-dimensional model building system based on the copy of claim 4, wherein the scale determining module comprises a scale automatic generating module, a scale confirming module, a scale editing module and a scale recording module;
the automatic scale generating module is arranged at the cloud end and is used, when the user selects automatic scale identification, to perform scale identification on the copy with the scale identification method according to any one of claims 1-3 and to output the identification scale to the scale recording module;
the scale confirming module is arranged at the client and is used to confirm whether the user is satisfied with the received identification scale and, when satisfied, to output the confirmed identification scale to the scale recording module;
the scale editing module is arranged at the client and is used, when the user chooses to input the scale manually, to receive an editing scale edited by the user and output it to the scale recording module, and, when the user is not satisfied with the identification scale, to receive a modification scale edited by the user and output it to the scale recording module;
the scale recording module is arranged at the cloud end and used for recording a final scale, and the final scale comprises an identification scale, an editing scale or a modification scale.
6. The three-dimensional model building system based on the copy of claim 4, wherein the wall door and window generating module comprises an automatic drawing module, a manual drawing module and a wall recording module,
the automatic drawing module is arranged at the cloud end, and is used for automatically drawing the wall body and the door and window according to the final scale and the image when a user selects automatic drawing, and sending the automatically drawn wall body and the door and window to the wall body recording module;
the manual drawing module is arranged at the client, and is used for manually drawing the wall body and the door and window according to the final scale and the image when a user selects manual drawing, and transmitting the manually drawn wall body and the manually drawn door and window to the wall body recording module;
the wall recording module is arranged at the cloud and used for recording the final wall and door and window drawing results, wherein the final wall and door and window drawing results comprise manual drawing of wall and door and window results and automatic drawing of wall and door and window results.
7. The three-dimensional model creation system based on a copy of claim 4, wherein the three-dimensional model creation module comprises an automatic creation module, a model confirmation module, and a model editing module,
the automatic generation module is arranged at the cloud end and is used for generating a three-dimensional model according to the final wall and door and window drawing result and transmitting the three-dimensional model to the model confirmation module;
the model confirmation module is arranged at the client and is used to confirm whether the user is satisfied with the received three-dimensional model and to output the three-dimensional model when the user is satisfied;
the model editing module is arranged at the client, and when the user is not satisfied with the three-dimensional model, the wall body and/or the door and window in the three-dimensional model are redrawn and modified, and the modification result is transmitted to the automatic generation module or the wall body recording module.
8. The three-dimensional model building system based on a copy according to claim 4, wherein when the user is not satisfied with the three-dimensional model, the two-dimensional drawing corresponding to the three-dimensional model is provided to the manual drawing module of the wall door and window generating module so that the walls, doors and windows can be modified again.
CN202010468928.8A 2020-05-28 2020-05-28 Scale identification method and three-dimensional model building system based on copy Active CN113742810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010468928.8A CN113742810B (en) 2020-05-28 2020-05-28 Scale identification method and three-dimensional model building system based on copy

Publications (2)

Publication Number Publication Date
CN113742810A CN113742810A (en) 2021-12-03
CN113742810B (en) 2023-08-15

Family

ID=78724162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010468928.8A Active CN113742810B (en) 2020-05-28 2020-05-28 Scale identification method and three-dimensional model building system based on copy

Country Status (1)

Country Link
CN (1) CN113742810B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882306B (en) * 2022-04-06 2023-08-18 国家基础地理信息中心 Topography scale identification method and device, storage medium and electronic equipment
CN114742881A (en) * 2022-05-16 2022-07-12 佛山欧神诺云商科技有限公司 2D house type graph actual proportion calculation method, device, system and storage medium
CN115238368B (en) * 2022-09-21 2022-12-16 中南大学 Automatic modeling method and medium for pier drawing identification based on computer vision
CN116343253A (en) * 2023-03-13 2023-06-27 苏州威视通智能科技有限公司 CAD drawing length unit and pixel value proportion identification, acquisition and calculation method

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5553209A (en) * 1994-01-28 1996-09-03 Hughes Aircraft Company Method for automatically displaying map symbols
JPH0868645A (en) * 1994-08-29 1996-03-12 Nissan Motor Co Ltd Navigation system
CN102013205A (en) * 2010-11-30 2011-04-13 百度在线网络技术(北京)有限公司 Electronic map marker rendering method and device
US10445569B1 (en) * 2016-08-30 2019-10-15 A9.Com, Inc. Combination of heterogeneous recognizer for image-based character recognition
CN107958064A (en) * 2017-12-04 2018-04-24 携程旅游网络技术(上海)有限公司 The method, apparatus of map displaying Flight Information, electronic equipment, storage medium
CN108763606A (en) * 2018-03-12 2018-11-06 江苏艾佳家居用品有限公司 A kind of floor plan element extraction method and system based on machine vision
CN108804815A (en) * 2018-06-08 2018-11-13 杭州群核信息技术有限公司 A kind of method and apparatus assisting in identifying wall in CAD based on deep learning
CN109145171A (en) * 2018-07-23 2019-01-04 广州市城市规划勘测设计研究院 A kind of multiple dimensioned map data updating method
CN110032938A (en) * 2019-03-12 2019-07-19 北京汉王数字科技有限公司 A kind of Tibetan language recognition method, device and electronic equipment
CN110414477A (en) * 2019-08-06 2019-11-05 广东三维家信息科技有限公司 Image scale detection method and device

Also Published As

Publication number Publication date
CN113742810A (en) 2021-12-03

Similar Documents

Publication Publication Date Title
CN113742810B (en) Scale identification method and three-dimensional model building system based on copy
Deng et al. Automatic indoor construction process monitoring for tiles based on BIM and computer vision
CN111091538B (en) Automatic identification and defect detection method and device for pipeline welding seams
CN111080804B (en) Three-dimensional image generation method and device
EP3226211B1 (en) Method and program of automatic three-dimensional solid modeling based on two-dimensional drawing
CN105830122B (en) Automation kerf for carrying out 3D rock core digitization modeling according to computed tomographic scanner (CTS) image corrects
Rashidi et al. Generating absolute-scale point cloud data of built infrastructure scenes using a monocular camera setting
CN110956138B (en) Auxiliary learning method based on home education equipment and home education equipment
CN107978017B (en) Indoor structure rapid modeling method based on frame line extraction
JP6781432B2 (en) Radio wave propagation simulation model creation method, creation system, creation device and creation program
US20080111815A1 (en) Modeling System
Joris et al. HemoVision: An automated and virtual approach to bloodstain pattern analysis
US20230035477A1 (en) Method and device for depth map completion
Tarsha Kurdi et al. Comparison of LiDAR building point cloud with reference model for deep comprehension of cloud structure
US10970833B2 (en) Pipe image feature analysis using calibration data
CN117132564A (en) YOLOv 3-based sapphire substrate surface defect detection method and system
CN114332741B (en) Video detection method and system for building digital twins
Sahin Planar segmentation of indoor terrestrial laser scanning point clouds via distance function from a point to a plane
US20230290090A1 (en) Searchable object location information
JPS628688A (en) Image recognizing method
Abdelhafiz et al. Automatic texture mapping mega-projects
AU2018102253A4 (en) Improved insurance system
NL2032442B1 (en) Method for designing and producing a renovation element for a windowsi ll
JP3645404B2 (en) Method and apparatus for recognizing door from architectural drawing
JP3679241B2 (en) Construction drawing recognition method and recognition apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant