CN113742810A - Scale identification method and three-dimensional model building system based on copy graph - Google Patents

Scale identification method and three-dimensional model building system based on copy graph

Info

Publication number
CN113742810A
CN113742810A (application number CN202010468928.8A; granted as CN113742810B)
Authority
CN
China
Prior art keywords
scale
module
size
dimensional model
window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010468928.8A
Other languages
Chinese (zh)
Other versions
CN113742810B (en)
Inventor
宋璐 (Song Lu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Qunhe Information Technology Co Ltd
Original Assignee
Hangzhou Qunhe Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Qunhe Information Technology Co Ltd filed Critical Hangzhou Qunhe Information Technology Co Ltd
Priority to CN202010468928.8A
Publication of CN113742810A
Application granted
Publication of CN113742810B
Active legal-status: Current
Anticipated expiration legal-status

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/13Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing


Abstract

The invention discloses a scale recognition method in which a digital area and a scale area are box-selected by a scale-region determination network obtained through deep learning, size labeling pairs are determined by a bidirectional degree of overlap, and an optimal unit scale representing the actual size corresponding to a unit pixel is then determined by integrating the actual sizes and on-graph sizes of all the size labeling pairs, so as to determine the scale. The three-dimensional model building system based on the copy diagram uploads the image through the image uploading module, determines an accurate scale through the scale determining module, and finally, with the copy diagram as input and through user interaction, generates a house-type 3D model for home decoration design through the wall door and window generating module and the three-dimensional model generating module.

Description

Scale identification method and three-dimensional model building system based on copy graph
Technical Field
The invention relates to the field of image processing, in particular to a scale identification method and a three-dimensional model building system based on a copy diagram.
Background
The scale is a necessary graphical technical language in the drawing field and is used for indicating the correspondence between an object on a drawing and its actual physical size. A scale consists of numbers and scale marks: the numbers are Arabic numerals referring to the actual physical dimension of the corresponding scale, with the unit typically being millimeters; the scale mark is generally I-shaped or 冂-shaped ("Jiong"-shaped), where the line segment parallel to the number direction indicates the direction of the scale and the corresponding on-drawing size, and the two ends of the scale mark carry short cut-off line segments perpendicular to it to indicate the start and end positions of the scale. Typically, a drawing sheet has multiple scales to identify the dimensions of various locations.
At present, informatization of an object described by a drawing mainly depends on manual alignment of a scale and input of a corresponding size, so that on one hand, deviation of manual alignment exists, and on the other hand, improvement of production efficiency is not facilitated.
The patent application with application publication number CN 110414477A discloses an image scale detection method and device, which comprises the following steps: (1) acquiring an image to be detected, wherein the image to be detected comprises a target size labeling graphic, and the target size labeling graphic comprises a size text sub-graphic and a size boundary line sub-graphic corresponding to the size text sub-graphic; (2) identifying the size text in the target size labeling graphic to obtain an actual size; (3) detecting the position of a size boundary line in the target size labeling graphic, and determining the on-graph size according to the position of the size boundary line; (4) determining the scale of the image to be detected according to the actual size and the on-graph size. This patent application alleviates the problem of manual labeling deviation and improves generation efficiency to a certain extent, but it still has several obvious deficiencies, specifically including:
the method comprises the following steps: when the target size label graph of the image is determined in the step (1), only the label graph which is provided with a standard ruler consisting of numbers, size lines and size boundary lines and is labeled on the outer side of the whole image can be determined, and aiming at the label graph which is not provided with the size lines and the size boundary lines, is provided with only the numbers and is labeled in the image as shown in fig. 3, the image scale detection method cannot process the label graph, namely the image scale detection method has strong application limitation.
Deficiency two: when the boundary position is determined in step (3), the pixel sums of each column perpendicular to the labeling direction are detected from the two ends of the size text sub-graphic toward both sides, and when the difference between the pixel sums of two adjacent columns is greater than a preset threshold, the coordinate of the column farther from the size text sub-graphic is taken as the position of the size boundary line. This method can only detect the case where the number label is contained within the size boundaries; when the number label is large and lies outside the size boundaries, detecting the pixels of each column perpendicular to the labeling direction from both ends of the size text graphic toward both sides obviously cannot identify the size boundary lines, or the size boundary of another scale is directly taken as the size boundary of the current scale. That is, the image scale detection method can miss detections or produce wrong detection results when detecting scales.
Deficiency three: when the drawing is slightly inclined or rotated, the size boundaries cannot be determined in step (3), and the deviation of the scale and the noise of the picture disturb the recognition of the photographed copy, so the scale cannot be accurately identified, i.e., the image scale detection method has strong application limitations.
Deficiency four: when the scale is determined from the actual size and the on-graph size in step (4), only the edge texts are removed and the middle texts are selected for output; results that were correctly recognized at the edges are easily discarded, while texts misrecognized in the middle are presented as the final result, causing erroneous recognition results.
Disclosure of Invention
In view of the above, the present invention provides a scale recognition method to solve the problems of inaccurate scale recognition and lack of universality.
The invention also aims to provide a three-dimensional model building system based on the copy diagram, so as to solve the problem that the marked size does not correspond to the real physical size due to the fact that the three-dimensional model is directly built according to the copy diagram.
In a first aspect, to achieve the above object, an embodiment of the present invention provides a scale identifying method, including:
acquiring an image to be detected, wherein the image to be detected comprises a size marking graphic representation, and the size marking graphic representation comprises a digital graphic representation or a scale graphic representation;
identifying the digital graphic representation and the scale graphic representation in the image to be detected by adopting a pre-trained scale-region determination network, and box-selecting a digital area and a scale area;
calculating a first overlapping degree of the digital area relative to the scale area and a second overlapping degree of the scale area relative to the digital area, and screening a group of digital areas and scale areas with the largest sum of the first overlapping degree and the second overlapping degree as a size labeling pair;
performing text recognition on the digital area in the size labeling pair by adopting a text recognition network to obtain an actual size, and calculating the extent of the border of the scale area in the size labeling pair along the digital writing direction as the size on the graph;
determining an optimal unit scale representing the actual size corresponding to a unit pixel by integrating the actual sizes of all the size labeling pairs and the sizes on the graph, and determining a scale according to the optimal unit scale.
In one embodiment, the integrating the actual dimensions of all the dimensioning pairs and the on-graph dimensions to determine a unit scale representing the corresponding actual dimensions of the unit pixel comprises:
and fitting the actual sizes of all size labeling pairs and the sizes on the graph by adopting a least square method, wherein the slope of a fitting straight line is an optimal unit scale.
In another embodiment, the integrating the actual dimensions of all the dimensioning pairs and the on-graph dimensions to determine a unit scale representing the corresponding actual dimensions of the unit pixel comprises:
calculating a unit scale belonging to each size labeling pair according to the actual size of each size labeling pair and the size on the graph;
counting the unit scales corresponding to all the size labeling pairs, and taking the unit scale corresponding to the median as the optimal unit scale.
In another embodiment, the screening the set of the number area and the scale area with the largest sum of the first overlapping degree and the second overlapping degree as the size marking pair includes:
firstly, screening the first overlapping degree and the second overlapping degree by using an overlapping degree threshold value, and reserving the first overlapping degree and the second overlapping degree which are greater than the overlapping degree threshold value;
then, a group of digital areas and scale areas with the largest sum of the first overlapping degree and the second overlapping degree are screened to be size marking pairs.
In a second aspect, to achieve the above another object, an embodiment of the present invention provides a system for building a three-dimensional model based on a copy diagram, including an image uploading module, a scale determining module, a wall door and window generating module, and a three-dimensional model generating module;
the image uploading module is used for receiving the copying image uploaded by the user, storing the copying image in a cloud and displaying the image;
the scale determining module is used for carrying out scale identification on the copy picture, and comprises the steps of carrying out automatic identification by adopting the scale identification method and setting a scale by a user to determine a final scale;
the wall door and window generation module is used for automatically generating walls and doors and windows or manually drawing the walls and the doors and windows according to the determined final scale so as to determine the final walls and the doors and windows;
and the three-dimensional model generation module is used for generating a three-dimensional model according to the determined final wall body and the doors and windows.
In one embodiment, the scale determining module comprises a scale automatic generation module, a scale confirmation module, a scale editing module and a scale recording module;
the automatic scale generation module is arranged at the cloud end and is used for carrying out scale identification on the copy picture by adopting the scale identification method and outputting the identification scale to the scale recording module when a user selects the automatic scale identification;
the scale confirmation module is arranged at the client and used for confirming the satisfaction degree of the received identification scale, and when the identification scale is satisfied, the confirmed identification scale is output to the scale recording module after the identification scale is confirmed;
the scale editing module is arranged at the client and used for receiving the editing scale edited by the user and outputting the editing scale to the scale recording module when the user selects the scale to manually input; when the identified scale is not satisfactory, the scale modification module is used for receiving a modified scale edited by a user and outputting the modified scale to the scale recording module;
the scale recording module is arranged at the cloud end and used for recording a final scale, and the final scale comprises an identification scale, an editing scale or a modification scale.
In one embodiment, the wall door and window generation module comprises an automatic drawing module, a manual drawing module and a wall recording module,
the automatic drawing module is arranged at the cloud end and used for automatically drawing the wall body and the doors and windows according to the final scale and the images when the user selects automatic drawing, and sending the automatically drawn wall body and the doors and windows to the wall body recording module;
the manual drawing module is arranged at the client and used for manually drawing the wall body, the door and the window according to the final scale and the image when the user selects manual drawing, and sending the manually drawn wall body, the door and the window to the wall body recording module;
the wall body record module is arranged at the high in the clouds for record final wall body and door and window drawing results, final wall body and door and window drawing results include manual wall body and door and window drawing results and automatic wall body and door and window drawing results.
In one embodiment, the three-dimensional model generation module comprises an automatic generation module, a model validation module, and a model editing module,
the automatic generation module is arranged at the cloud end and used for generating a three-dimensional model according to the final wall body and door and window drawing result and transmitting the three-dimensional model to the model confirmation module;
the model confirmation module is arranged at the client and is used for confirming satisfaction with the received three-dimensional model and outputting the three-dimensional model when the user is satisfied;
the model editing module is arranged at the client, redraws and modifies the wall and/or the door and window in the three-dimensional model when the user is not satisfied with the three-dimensional model, and transmits the modification result to the automatic generation module or the wall recording module.
In another embodiment, when the user is not satisfied with the three-dimensional model, the two-dimensional drawing corresponding to the three-dimensional model is provided to the manual drawing module of the wall door and window generation module for re-modifying the walls, doors and windows.
Compared with the prior art, the beneficial effects of the invention at least include:
according to the scale identification method provided by the embodiment of the invention, the digital area and the scale area are selected by adopting the scale area determined by deep learning to determine the network frame, the size marking pairs are determined by the bidirectional overlapping degree, and then the optimal unit scale representing the corresponding actual size of the unit pixel is determined by integrating the actual sizes of all the size marking pairs and the sizes on the graph, so that the scale is determined, and the scale identification method has the advantages of stronger adaptability, wider application range and higher identification accuracy.
According to the three-dimensional model building system based on the copy diagram, the copy diagram is uploaded through the image uploading module through the input and user interaction behaviors of the copy diagram, the accurate scale is determined through the scale determining module, and finally the house type 3D model which can be used for house decoration design is produced through the wall body door and window generating module and the three-dimensional model generating module.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of one embodiment of a scale recognition method provided by the present invention;
FIG. 2 is a diagram of an exemplary image to be measured according to the present invention;
FIG. 3 is another exemplary diagram of an image under test provided by the present invention;
FIG. 4 is a schematic diagram of the number and scale regions in a training sample provided by the present invention;
FIG. 5 is a scale visualization representation determined by the scale recognition method provided by the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of a three-dimensional modeling system based on a copy diagram according to the present invention;
FIG. 7 is a schematic structural diagram of an embodiment of a scale determining module provided in the present invention;
FIG. 8 is a schematic structural view of an embodiment of a wall door/window generation module according to the present invention;
FIG. 9 is a schematic structural diagram of an embodiment of a three-dimensional model generation module provided by the present invention;
FIG. 10 is a flow chart of the three-dimensional model building system based on the copy map for building the three-dimensional model according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples, while indicating the scope of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
Example 1
In order to solve the problems that the existing scale is inaccurate in identification and does not have universality, the embodiment of the invention provides a scale identification method which comprises the following steps. Fig. 1 is a flowchart of a scale recognition method according to an embodiment of the present invention. As shown in fig. 1, the scale recognition method includes the steps of:
s101, obtaining an image to be detected, wherein the image to be detected comprises a size marking graphic representation, and the size marking graphic representation comprises a digital graphic representation or a ruler graphic representation.
The image to be detected is an image whose scale is to be identified, and it comprises a target image and size labeling graphics surrounding the inside and outside of the target image. As shown in fig. 2, the image to be detected has size labeling graphics labeled on the periphery of the target image, specifically including a digital graphic SI and a scale graphic SII. Alternatively, as shown in fig. 3, the size labeling graphics are labeled inside the target image and include only the digital graphic SIII. Of course, some images to be detected include both the digital graphic SI and scale graphic SII shown in fig. 2 and the digital graphic SIII shown in fig. 3.
S102, determining a digital graphic representation and a scale graphic representation in the image to be detected by the network identification through the pre-trained scale area, and selecting the digital area and the scale area.
In the embodiment, Faster R-CNN, Mask R-CNN, SPP-Net, YOLOv3, SSD and M2Det are used as base networks; the base network is initialized with the weights of a classification network converged on the ImageNet data set, and the network parameters of the initialized base network are then fine-tuned with the constructed training samples to obtain the pre-trained scale-region determination network. Of course, networks such as R-CNN, Fast R-CNN, Light-Head R-CNN, Cascade R-CNN, YOLO, YOLOv2, YOLT, DSSD, FSSD, ESSD, MDSSD, Pelee, Fire SSD, R-FCN, FPN, DSOD, RetinaNet, MegDet, RefineNet, DetNet, SSOD, CornerNet, ZSD (Zero-Shot Object Detection), OSD (One-Shot Object Detection), etc. can also be used as the base network.
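The patent names the base networks but gives no code; the following is a minimal sketch of the fine-tuning step, assuming PyTorch/torchvision's Faster R-CNN implementation, pretrained weights, and two foreground classes (number graphic, scale graphic). The function names, class indices and hyperparameters are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch (not from the patent): building and fine-tuning a torchvision
# Faster R-CNN as the scale-region determination network.
# Class index convention here: 0 = background, 1 = number graphic, 2 = scale graphic.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_scale_region_network(num_classes: int = 3):
    # Start from a converged pretrained model and replace the box predictor so that
    # it outputs the two foreground classes used in this task.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def finetune(model, data_loader, num_epochs=10, lr=1e-3):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device).train()
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9, weight_decay=5e-4)
    for _ in range(num_epochs):
        for images, targets in data_loader:   # targets: dicts with "boxes" and "labels"
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss_dict = model(images, targets)  # detection losses in training mode
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```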
When constructing a training sample, each size labeling graphic is labeled with a vector [label, x, y, w, h], where label denotes the graphic category, taking the value 0 or 1 to represent a digital graphic or a scale graphic respectively; x, y, w and h represent the position of the rectangular frame corresponding to the labeled graphic in normalized form, specifically the center coordinates (x, y) of the rectangular frame and its width w and height h.
As shown in fig. 4, the number area and the scale area of each training sample are generally box-selected with rectangular frames. The digital text can be box-selected as the digital area with a red rectangular frame and the scale box-selected as the scale area with a blue rectangular frame; the rectangular frame that box-selects a number is the minimum bounding rectangle containing the digital text, and the two ends of the rectangular frame of the scale area are strictly aligned with the two ends of the scale line.
For size labeling graphics that do not include a scale graphic, a rectangular frame can still be used to box-select a scale area when constructing the training sample, and the two ends of the corresponding rectangular frame need to be aligned, along the writing direction, with the two ends of the entity that the number labels. The entity is the object corresponding to the number label; in a house-type drawing, the entity can be a wall, a door, a window and the like.
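For concreteness, the label vector described above can be produced from a drawn rectangular frame as in the minimal sketch below; the helper name is an illustrative assumption, while the 0/1 class convention follows the description.

```python
# Minimal sketch (assumed helper): converting a rectangular frame drawn on a training
# image into the normalized label vector [label, x, y, w, h] described above.
def box_to_label(label_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """label_id: 0 for a digital graphic, 1 for a scale graphic.
    (x_min, y_min, x_max, y_max): pixel corners of the rectangular frame."""
    cx = (x_min + x_max) / 2.0 / img_w   # normalized center x
    cy = (y_min + y_max) / 2.0 / img_h   # normalized center y
    w = (x_max - x_min) / img_w          # normalized width
    h = (y_max - y_min) / img_h          # normalized height
    return [label_id, cx, cy, w, h]

# Example: a 120 x 40 px number box in a 2000 x 1500 px floor-plan photo
print(box_to_label(0, 500, 300, 620, 340, 2000, 1500))
```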
S103, calculating a first overlapping degree of the digital area relative to the scale area and a second overlapping degree of the scale area relative to the digital area, and screening a group of digital areas and scale areas with the largest sum of the first overlapping degree and the second overlapping degree as a size labeling pair.
There are multiple digital areas and scale areas box-selected by the scale-region determination network. When the size labeling graphics are densely distributed, the matching relationship between the digital areas and the scale areas is unclear, and directly using them for scale calculation would produce wrong results. To solve this problem, the embodiment of the present invention defines the concept of a size labeling pair, that is, a group consisting of a digital area and a scale area that belong to the same size labeling graphic forms a size labeling pair.
In an embodiment, the degree of overlap is used to determine the matching relationship between the digital area and the scale area. Specifically, the degree of overlap IOU between the digital area and the scale area can be calculated, and the size labeling pair is then determined by screening with an overlap threshold. Although the matching relationship between the digital area and the scale area can be determined by directly calculating the degree of overlap IOU in a single direction, such matching still contains some errors. Therefore, the invention screens the size labeling pairs with a bidirectional matching degree of overlap.
In one embodiment, first, a first degree of overlap IOU1 of the digital area with respect to the scale area and a second degree of overlap IOU2 of the scale area with respect to the digital area are calculated; then the group of digital area and scale area with the largest sum of the first degree of overlap IOU1 and the second degree of overlap IOU2 is screened as the size labeling pair. Bidirectional overlap screening improves the matching precision and yields more accurate size labeling pairs.
In another embodiment, to further improve the matching speed and the screening accuracy of the size labeling pairs, the first degree of overlap IOU1 and the second degree of overlap IOU2 need to be pre-screened before the size labeling pairs are screened, so as to filter out candidates with too large a deviation. That is, screening the group of digital area and scale area with the largest sum of the first degree of overlap and the second degree of overlap as the size labeling pair includes:
firstly, screening the first degree of overlap IOU1 and the second degree of overlap IOU2 with an overlap threshold, and retaining only the first degree of overlap IOU1 and the second degree of overlap IOU2 that are greater than the overlap threshold;
Then, a group of digital areas and scale areas with the largest sum of the first overlapping degree and the second overlapping degree are screened to be size marking pairs.
In an embodiment, the overlap threshold may be set according to the density of the labeling graphics, and is generally not lower than 0.9. Because the overlap threshold is set above 0.9 and bidirectional overlap judgment is adopted, the size labeling pairs belonging to the same size labeling graphic can be accurately determined.
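A minimal sketch of the bidirectional matching described in S103 follows. The exact overlap definition is not spelled out in the text; here "overlap of A relative to B" is read as the intersection area divided by the area of A, which is an assumption, and the threshold is exposed as a parameter (defaulting to the 0.9 mentioned above). The function names are illustrative.

```python
# Minimal sketch: bidirectional overlap screening of number boxes and scale boxes.
# Boxes are (x_min, y_min, x_max, y_max) in pixels.
def overlap_of(a, b):
    """Overlap of box a relative to box b: area(a intersect b) / area(a)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    area_a = max(1e-9, (a[2] - a[0]) * (a[3] - a[1]))
    return (ix * iy) / area_a

def match_pairs(number_boxes, scale_boxes, threshold=0.9):
    """Greedy per-number matching: keep candidates whose IOU1 and IOU2 both exceed the
    threshold, then take the scale box with the largest IOU1 + IOU2 as the pair."""
    pairs = []
    for i, n in enumerate(number_boxes):
        best, best_score = None, -1.0
        for j, s in enumerate(scale_boxes):
            iou1 = overlap_of(n, s)   # number area relative to scale area
            iou2 = overlap_of(s, n)   # scale area relative to number area
            if iou1 <= threshold or iou2 <= threshold:
                continue              # pre-screening with the overlap threshold
            if iou1 + iou2 > best_score:
                best_score, best = iou1 + iou2, j
        if best is not None:
            pairs.append((i, best))   # (number index, scale index) size labeling pair
    return pairs
```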
And S104, performing text recognition on the digital area in the size labeling pair by adopting a text recognition network to obtain the actual size, and calculating the extent of the scale-area frame along the digital writing direction as the size on the graph.
After the size labeling pair is determined, any text recognition network can be used to perform text recognition on the digital area in the size labeling pair to obtain the actual size. Meanwhile, the on-graph size corresponding to the actual size needs to be calculated, specifically the pixel width of the scale-area frame in the size labeling pair along the digital writing direction, which is taken as the on-graph size.
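The per-pair computation of S104 can be sketched as follows. The recognize_text argument stands in for whichever text recognition network is used and is a hypothetical placeholder; the writing direction is assumed to be supplied as "horizontal" or "vertical".

```python
# Minimal sketch: actual size, on-graph size and unit scale for one size labeling pair.
def on_graph_size(scale_box, writing_direction="horizontal"):
    x_min, y_min, x_max, y_max = scale_box
    # Pixel extent of the scale-area frame along the number writing direction.
    return (x_max - x_min) if writing_direction == "horizontal" else (y_max - y_min)

def unit_scale_of_pair(number_crop, scale_box, recognize_text,
                       writing_direction="horizontal"):
    actual_size = float(recognize_text(number_crop))        # e.g. "3600" -> 3600.0 mm
    pixels = on_graph_size(scale_box, writing_direction)    # on-graph size in pixels
    return actual_size / pixels                             # actual size per pixel
```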
And S105, determining an optimal unit scale representing the actual size corresponding to the unit pixel by integrating the actual sizes of all the size labeling pairs and the sizes on the graph, and determining a scale according to the optimal unit scale.
After the actual size and the on-graph size of a size labeling pair are obtained, dividing the actual size by the on-graph size gives the actual size corresponding to one pixel on the graph, which is called a unit scale.
Normally, all unit scales from the same figure should be identical, but there are some slight differences in the unit scales calculated for each size label figure due to misrecognition of text numbers, recognition deviation of scale positions, inherent drawing errors, image distortion errors caused by photographing, and the like. To solve this problem, the present invention determines an optimal unit scale by counting a plurality of unit scales.
In one embodiment, the integrating the actual dimensions of all the dimensioning pairs and the on-graph dimensions to determine a unit scale representing the corresponding actual dimensions of the unit pixel comprises:
and fitting the actual sizes of all size labeling pairs and the sizes on the graph by adopting a least square method, wherein the slope of a fitting straight line is an optimal unit scale.
By adopting a least square fitting mode, abnormal values can be effectively eliminated, the actual sizes of all size labeling pairs and the sizes on the graph are fully and comprehensively considered, the optimal unit scale is determined, the scale is determined by utilizing the optimal unit scale, and the accuracy of the scale is improved.
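A minimal numerical sketch of this fitting step follows, assuming the fitted line is constrained through the origin (0 pixels corresponds to 0 mm), so the least-squares slope is sum(x*y)/sum(x*x); an unconstrained linear fit would work similarly.

```python
# Minimal sketch: least-squares estimate of the optimal unit scale over all pairs.
import numpy as np

def optimal_unit_scale_lsq(on_graph_sizes, actual_sizes):
    x = np.asarray(on_graph_sizes, dtype=float)   # on-graph sizes in pixels
    y = np.asarray(actual_sizes, dtype=float)     # actual sizes, e.g. in millimetres
    return float(np.dot(x, y) / np.dot(x, x))     # slope of the line fitted through 0

# Example with three size labeling pairs whose ratios are roughly consistent
print(optimal_unit_scale_lsq([180, 240, 75], [3600, 4795, 1500]))  # ~20 mm per pixel
```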
In another embodiment, the integrating the actual dimensions of all the dimensioning pairs and the on-graph dimensions to determine a unit scale representing the corresponding actual dimensions of the unit pixel comprises:
calculating a unit scale belonging to each size labeling pair according to the actual size of each size labeling pair and the size on the graph;
counting the unit scales corresponding to all the size labeling pairs, and taking the unit scale corresponding to the median as the optimal unit scale.
Determining the optimal unit scale as the median of the unit scales of all size labeling pairs fully considers the actual sizes and on-graph sizes of all size labeling pairs; determining the scale with this optimal unit scale improves the accuracy of the scale.
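The median variant is equally short; the example values below are illustrative only.

```python
# Minimal sketch: the median of the per-pair unit scales is taken as the optimal
# unit scale, which is robust to a few misrecognized numbers or misaligned boxes.
from statistics import median

def optimal_unit_scale_median(unit_scales):
    return median(unit_scales)   # unit scale = actual size / on-graph size, per pair

print(optimal_unit_scale_median([20.0, 19.98, 20.05, 57.3, 20.01]))  # -> 20.01
```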
After the optimal unit scale is determined, it is taken as the final scale, and the final scale is visually displayed on the size labeling graphic whose unit scale is closest to the optimal unit scale, as shown in fig. 5.
The scale recognition method is suitable not only for the house-type drawings shown in figures 2-4 but also for CAD part drawings. Similarly, the size labeling graphics inside and outside a part drawing can be correctly identified to determine a scale, and accurate 3D modeling can be carried out using all the size information of the three views of the part, thereby reducing the heavy labor of manual modeling.
In addition, owners and decoration designers can usually obtain only paper copies of house-type drawings, and pictures photographed directly from the paper drawings inevitably have some deformation, as shown in fig. 6, which affects the subsequent digitization. The scale recognition method of the invention trains on scales of various forms by deep learning and synthesizes all the unit scales to determine the scale, so it has strong adaptability to this situation and can still identify the scale accurately.
According to the scale identification method provided by the embodiment of the invention, the digital area and the scale area are box-selected by a scale-region determination network obtained through deep learning, the size labeling pairs are determined by the bidirectional degree of overlap, and the optimal unit scale representing the actual size corresponding to a unit pixel is then determined by integrating the actual sizes and on-graph sizes of all the size labeling pairs, so that the scale is determined; the method therefore has stronger adaptability, a wider application range and higher identification accuracy.
Example 2
FIG. 6 is a schematic structural diagram of an embodiment of a three-dimensional model building system based on a copy diagram according to the present invention. As shown in fig. 6, the three-dimensional model building system 600 based on a copy diagram includes an image uploading module 601, a scale determining module 602, a wall door and window generating module 603, and a three-dimensional model generating module 604.
The image uploading module 601 is configured to receive a copy image uploaded by a user, store the copy image in a cloud, and display the image.
The scale determining module 602 is configured to perform scale recognition on the copy image, where the scale recognition includes performing automatic recognition by using the scale recognition method provided in embodiment 1 and setting a scale by a user to determine a final scale.
The wall door and window generating module 603 is configured to automatically generate a wall and doors and windows according to the determined final scale or manually draw the wall and doors and windows to determine a final wall and doors and windows.
The three-dimensional model generation module 604 is configured to generate a three-dimensional model according to the determined final wall and the doors and windows.
The image upload module 601 is disposed at the client; the house-type copy picture is uploaded to the cloud through the image upload module of each client, and the following description takes one client as an example. The client sends the binary data of the copy picture file to the cloud; after receiving the binary picture data, the cloud generates a picture ID, stores the binary data stream as a picture file on a cloud storage medium, and then returns to the client a URL (Uniform Resource Locator) address for accessing the picture. The client obtains the picture resource from the URL and displays it to the user.
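The patent does not specify a server framework for this upload flow; the following is a minimal sketch assuming a Flask endpoint at the cloud end, with the endpoint path, storage location and URL format all being hypothetical illustrations.

```python
# Minimal sketch of the described flow: receive the binary picture data, generate a
# picture ID, persist the bytes, and return a URL for accessing the picture.
import uuid, pathlib
from flask import Flask, request, jsonify

app = Flask(__name__)
STORAGE = pathlib.Path("/data/copy_images")   # assumed cloud storage mount point

@app.post("/images")
def upload_copy_image():
    picture_id = uuid.uuid4().hex                                     # picture ID
    STORAGE.mkdir(parents=True, exist_ok=True)
    (STORAGE / f"{picture_id}.png").write_bytes(request.get_data())   # store binary stream
    url = f"https://example.com/images/{picture_id}.png"              # URL returned to client
    return jsonify({"id": picture_id, "url": url})
```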
In one embodiment, as shown in fig. 7, the scale determination module 602 includes a scale automatic generation module 701, a scale confirmation module 702, and a scale editing module 703 and a scale recording module 704.
The automatic scale generation module 701 is arranged at the cloud end, and when the user selects automatic scale identification, the automatic scale generation module is used for performing scale identification on the copy image by using the scale identification method provided in embodiment 1 and outputting the identification scale to the scale recording module.
The scale confirmation module 702 is disposed at the client, and configured to perform satisfaction confirmation on the received identification scale, and when the identification scale is satisfied, output the confirmed identification scale to the scale recording module after confirming the identification scale.
The scale editing module 703 is arranged at the client, and is used for receiving the editing scale edited by the user and outputting the editing scale to the scale recording module when the user selects the scale for manual input; and when the identified scale is not satisfied, the system is used for receiving the modified scale edited by the user and outputting the modified scale to the scale recording module.
The scale recording module 704 is located in the cloud for recording the final scale, which includes identifying the scale, editing the scale, or modifying the scale.
The method steps and the achieved effect of the scale recognition method adopted in the scale automatic generation module 701 are the same as those of the scale recognition method provided in embodiment 1, and are not described herein again.
The scale determining module 602 automatically generates the identification scale through the scale automatic generation module 701, enables the user to confirm the identification scale through the scale confirmation module 702, and obtains the edited scale or modified scale input by the user through the scale editing module 703, so that the user can participate in the scale confirmation process through human-computer interaction. This improves the accuracy of the scale, that is, the ratio between the on-graph size and the real size, and provides a stable size basis for generating the three-dimensional model.
In one embodiment, as shown in fig. 8, the wall door and window generation module 603 includes an automatic drawing module 801, a manual drawing module 802, and a wall recording module 803.
The automatic drawing module 801 is arranged at the cloud end and used for automatically drawing the wall body and the doors and windows according to the final scale and the image when the user selects automatic drawing, and sending the automatically drawn wall body and the doors and windows to the wall body recording module;
the manual drawing module 802 is arranged at the client side and used for manually drawing the wall body, the door and the window according to the final scale and the image and sending the manually drawn wall body, the door and the window to the wall body recording module when the user selects manual drawing.
The wall recording module 803 is arranged at the cloud end and is used for recording the final wall and door/window drawing results, which include the manually drawn wall and door/window results and the automatically drawn wall and door/window results.
The wall door and window generation module 603 automatically draws walls, doors and windows through the automatic drawing module 801, and allows walls, doors and windows to be drawn manually through the manual drawing module 802. Therefore, the user can participate in the generation of the walls, doors and windows through human-computer interaction, and the generated walls, doors and windows conform to the user's intent.
In one embodiment, as shown in FIG. 9, the three-dimensional model generation module 604 includes an auto generation module 901, a model validation module 902, and a model editing module 903.
The automatic generation module 901 is arranged at the cloud end and used for generating a three-dimensional model according to the final wall body and door and window drawing result and transmitting the three-dimensional model to the model confirmation module;
the model confirmation module 902 is arranged at the client and is used for confirming satisfaction with the received three-dimensional model and outputting the three-dimensional model when the user is satisfied;
the model editing module 903 is arranged at the client, redraws and modifies the wall and/or the door and window in the three-dimensional model when the user is not satisfied with the three-dimensional model, and transmits the modification result to the automatic generation module or the wall recording module.
The three-dimensional model generation module 604 generates a three-dimensional model from the wall and door/window recognition result through the automatic generation module 901, confirms satisfaction with the three-dimensional model through the model confirmation module 902, and edits the unsatisfactory parts through the model editing module 903. Therefore, the user can participate in the three-dimensional model generation process through human-computer interaction, and the generated three-dimensional model conforms to the user's intent.
In another embodiment, when the user is not satisfied with the three-dimensional model, the two-dimensional drawing corresponding to the three-dimensional model is provided to the manual drawing module of the wall door and window generation module for re-modifying the walls, doors and windows.
FIG. 10 is a flow chart of the three-dimensional model building system based on the copy map for building the three-dimensional model according to the present invention. As shown in fig. 10, the process of building a three-dimensional model using the three-dimensional model building system based on a copy map is as follows:
the copying picture is uploaded to the cloud end through the picture uploading module, and the cloud end stores the copying picture and then sends the copying picture to the client end for displaying.
After the client successfully displays the house-type copy picture, it asks the user whether to perform automatic scale identification. If the user selects yes, this information is sent to the cloud, the cloud intelligently assists in automatic scale identification by means of the scale identification method provided in embodiment 1, and after the scale identification result is obtained, the scale and the corresponding size are displayed on the copy diagram for the user to check, confirm or modify. If the user does not use automatic identification, a default scale and an input box are displayed, and the user aligns the scale by himself and inputs the corresponding actual size. After the user confirms the scale information, it is recorded in the cloud to complete the scale confirmation.
Modifying the automatic scale identification result is similar to setting the scale manually: the user can move the scale with the mouse to align it with a scale bar or a wall on the copy picture, adjust the length of the scale through the hot zones at its two ends, zoom the copy picture with the mouse wheel, and fill in, below the scale, the actual size corresponding to the on-picture distance spanned by the scale or wall. After the scale information is confirmed, the copy picture is adaptively scaled for display.
Whether the scale is automatically identified or manually set, after the determined scale information is obtained, the user is asked whether to automatically identify the walls, doors and windows or to draw them manually. If the user selects automatic identification, this information is sent to the cloud, the cloud intelligently assists in automatically identifying the walls, doors, windows and rooms by means of the automatic wall, door and window drawing method, and the identification result is returned to the front end and displayed on the copy base map for the user to check, confirm or modify. If the user selects manual drawing, the copy base picture is displayed in plan view, the user selects the wall drawing function to draw all the walls of the rooms on the 2D plane, then adds components such as doors and windows on the corresponding walls, and sets the room types.
Modifying the wall, door and window recognition result is similar to manual drawing: the user can choose to display the copy picture to conveniently check whether the positions of the walls, doors and windows are correct, or hide the copy picture to check whether the house-type drawing is complete and whether the rooms are closed. The parameters of a wall, door or window can be modified by selecting the element, for example the thickness of a wall or the length, width, height and the like of a door or window.
The method for automatically drawing the wall doors and windows by the cloud comprises the following steps:
Firstly, the segmentation of the walls and rooms on the copy picture is obtained through semantic segmentation; then the sizes and positions of the doors and windows are identified with a target detection method, and the doors and windows are placed on the corresponding walls, thereby realizing automatic drawing of the walls, doors and windows.
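A minimal orchestration sketch of that two-stage procedure is given below; segment_walls_and_rooms, detect_doors_and_windows and the geometry helpers are hypothetical placeholders for the semantic segmentation model, the target detection model and the plan geometry used by the system.

```python
# Minimal sketch: segment walls/rooms, detect door and window boxes, snap each opening
# to its nearest wall, then convert pixel geometry to physical sizes with the scale.
def auto_draw(copy_image, unit_scale, segment_walls_and_rooms, detect_doors_and_windows):
    walls, rooms = segment_walls_and_rooms(copy_image)      # pixel-space wall polygons
    openings = detect_doors_and_windows(copy_image)         # boxes classed door / window
    placed = []
    for opening in openings:
        wall = min(walls, key=lambda w: w.distance_to(opening.center))  # nearest wall
        placed.append(opening.snap_to(wall))                # place the opening on it
    # Scale pixel coordinates to physical dimensions using the confirmed unit scale.
    return ([w.scaled(unit_scale) for w in walls],
            [o.scaled(unit_scale) for o in placed],
            rooms)
```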
In the embodiment, the main furniture types and positions in the copy picture can also be identified with a target detection method, and the copy picture is displayed with legends arranged on the plan. Furthermore, a default floor material can be set for each room according to its type for display, for example beige tiles for the living and dining room, white tiles for the kitchen and bathroom, and wood flooring for the bedroom. If the user selects the manual drawing mode, the floor materials are uniformly initialized to wood flooring, and the user can modify them.
After the drawing of the walls, doors, windows and rooms is finished, 3D modeling still needs to be carried out according to the identified parameters of the doors, windows and walls; parameters that fail to be identified are initialized with a set of conventional empirical values, for example (collected into the configuration sketch after the list):
the thickness of the outer wall is 200mm, and the thickness of the inner wall is 120mm
The common window is 900mm above the ground, the height is 1200mm, and the thickness is 60mm
The floating window is 450mm above the ground and 1200mm in height
The length of a common door is 800mm, the height is 2000mm, the thickness is 220mm, and the ground clearance is 0mm
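These defaults can be kept in a small configuration table; the key names below are assumptions, while the values are those listed above (in millimetres).

```python
# Configuration sketch of the empirical defaults listed above (values in mm).
DEFAULT_PARAMS = {
    "outer_wall": {"thickness": 200},
    "inner_wall": {"thickness": 120},
    "window":     {"sill_height": 900, "height": 1200, "thickness": 60},
    "bay_window": {"sill_height": 450, "height": 1200},
    "door":       {"length": 800, "height": 2000, "thickness": 220, "ground_clearance": 0},
}

def fill_missing(recognized: dict, kind: str) -> dict:
    """Initialize any parameter the recognizer failed to identify with its default."""
    return {**DEFAULT_PARAMS.get(kind, {}), **recognized}
```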
The terminal or the cloud uses ThreeJS, a 3D rendering graphics framework based on WebGL, to draw and display the 3D model of the walls, doors and windows. The user can confirm the correctness of the 3D model through operations such as 3-degree-of-freedom rotation and zooming and viewing the 3D view of a specific room; if a deviation exists, the corresponding element can be selected directly to change its parameters, or the view can be switched to 2D mode for modification.
The user continuously adjusts the hard-decoration parameters interactively until a satisfactory 3D model is obtained. On the basis of the completed conversion from the copy diagram to the 3D model, the user can then use the house-type 3D model and 3D models of furniture and ornaments to design a decoration scheme.
Based on the copy diagram as input and on user interaction behaviors, the three-dimensional model building system based on the copy diagram uploads the image through the image uploading module, determines an accurate scale through the scale determining module, and finally generates a house-type 3D model that can be used for home decoration design through the wall door and window generating module and the three-dimensional model generating module.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only the most preferred embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions, equivalents, etc. made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (9)

1. A scale recognition method is characterized by comprising the following steps:
acquiring an image to be detected, wherein the image to be detected comprises a size marking graphic representation, and the size marking graphic representation comprises a digital graphic representation or a scale graphic representation;
identifying the digital graphic representation and the scale graphic representation in the image to be detected by adopting a pre-trained scale-region determination network, and box-selecting a digital area and a scale area;
calculating a first overlapping degree of the digital area relative to the scale area and a second overlapping degree of the scale area relative to the digital area, and screening a group of digital areas and scale areas with the largest sum of the first overlapping degree and the second overlapping degree as a size labeling pair;
performing text recognition on the digital area in the size labeling pair by adopting a text recognition network to obtain an actual size, and calculating the extent of the border of the scale area in the size labeling pair along the digital writing direction as the size on the graph;
determining an optimal unit scale representing the actual size corresponding to a unit pixel by integrating the actual sizes of all the size labeling pairs and the sizes on the graph, and determining a scale according to the optimal unit scale.
2. The scale recognition method of claim 1, wherein said integrating the actual dimensions of all the dimensioning pairs with the on-graph dimensions to determine a unit scale representing the corresponding actual dimensions of a unit pixel comprises:
and fitting the actual sizes of all size labeling pairs and the sizes on the graph by adopting a least square method, wherein the slope of a fitting straight line is an optimal unit scale.
3. The scale recognition method of claim 1, wherein said integrating the actual dimensions of all the dimensioning pairs with the on-graph dimensions to determine a unit scale representing the corresponding actual dimensions of a unit pixel comprises:
calculating a unit scale belonging to each size labeling pair according to the actual size of each size labeling pair and the size on the graph;
counting the unit scales corresponding to all the size labeling pairs, and taking the unit scale corresponding to the median as the optimal unit scale.
4. The scale recognition method of claim 1, wherein the screening a set of the number area and the scale area having the largest sum of the first degree of overlap and the second degree of overlap as the pair of the size labels comprises:
firstly, screening the first overlapping degree and the second overlapping degree by using an overlapping degree threshold value, and reserving the first overlapping degree and the second overlapping degree which are greater than the overlapping degree threshold value;
then, a group of digital areas and scale areas with the largest sum of the first overlapping degree and the second overlapping degree are screened to be size marking pairs.
5. A three-dimensional model building system based on a copy picture is characterized by comprising an image uploading module, a scale determining module, a wall door and window generating module and a three-dimensional model generating module;
the image uploading module is used for receiving the copying image uploaded by the user, storing the copying image in a cloud and displaying the image;
the scale determining module is used for carrying out scale recognition on the copy picture, which comprises carrying out automatic recognition by adopting the scale recognition method according to any one of claims 1 to 4 and setting a scale by the user, so as to determine a final scale;
the wall door and window generation module is used for automatically generating walls and doors and windows or manually drawing the walls and the doors and windows according to the determined final scale so as to determine the final walls and the doors and windows;
and the three-dimensional model generation module is used for generating a three-dimensional model according to the determined final wall body and the doors and windows.
6. The copy diagram-based three-dimensional model building system according to claim 5, wherein the scale determining module includes a scale automatic generation module, a scale confirmation module, a scale editing module, and a scale recording module;
the automatic scale generation module is arranged at the cloud end and is used, when the user selects automatic scale identification, for carrying out scale identification on the copy image by adopting the scale identification method according to any one of claims 1 to 4 and outputting the identification scale to the scale recording module;
the scale confirmation module is arranged at the client and used for confirming the satisfaction degree of the received identification scale, and when the identification scale is satisfied, the confirmed identification scale is output to the scale recording module after the identification scale is confirmed;
the scale editing module is arranged at the client and used for receiving the editing scale edited by the user and outputting the editing scale to the scale recording module when the user selects the scale to manually input; when the identified scale is not satisfactory, the scale modification module is used for receiving a modified scale edited by a user and outputting the modified scale to the scale recording module;
the scale recording module is arranged at the cloud end and used for recording a final scale, and the final scale comprises an identification scale, an editing scale or a modification scale.
7. The copy map-based three-dimensional model building system of claim 5, wherein the wall door/window generation module comprises an automatic drawing module, a manual drawing module, and a wall recording module,
the automatic drawing module is arranged at the cloud end and used for automatically drawing the wall body and the doors and windows according to the final scale and the images when the user selects automatic drawing, and sending the automatically drawn wall body and the doors and windows to the wall body recording module;
the manual drawing module is arranged at the client and used for manually drawing the wall body, the door and the window according to the final scale and the image when the user selects manual drawing, and sending the manually drawn wall body, the door and the window to the wall body recording module;
the wall recording module is arranged at the cloud end and is used for recording the final wall and door/window drawing results, and the final wall and door/window drawing results include the manually drawn wall and door/window results and the automatically drawn wall and door/window results.
8. The copy diagram-based three-dimensional model building system according to claim 5, wherein the three-dimensional model generation module includes an automatic generation module, a model confirmation module, and a model editing module,
the automatic generation module is arranged at the cloud end and used for generating a three-dimensional model according to the final wall body and door and window drawing result and transmitting the three-dimensional model to the model confirmation module;
the model confirmation module is arranged at the client and is used for confirming satisfaction with the received three-dimensional model and outputting the three-dimensional model when the user is satisfied;
the model editing module is arranged at the client, redraws and modifies the wall and/or the door and window in the three-dimensional model when the user is not satisfied with the three-dimensional model, and transmits the modification result to the automatic generation module or the wall recording module.
9. The copy-image-based three-dimensional model building system according to claim 5, wherein, when the user is not satisfied with the three-dimensional model, the two-dimensional plan corresponding to the three-dimensional model is provided to the manual drawing module of the wall and door/window generation module for redrawing the walls, doors and windows.
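The confirmation loop of claims 8 and 9 could be sketched as follows. This is not the patented implementation: generate_model, user_satisfied, to_two_dimensional and manual_redraw are illustrative placeholders, and the max_rounds bound is an assumption added only so the sketch terminates.

```python
# Hypothetical sketch of the claims 8-9 confirmation loop; all callables are placeholders.
from typing import Callable


def build_and_confirm_model(
    wall_record,                                    # final wall/door/window record (claim 7)
    generate_model: Callable[[object], dict],       # cloud-side automatic generation
    user_satisfied: Callable[[dict], bool],         # client-side model confirmation
    to_two_dimensional: Callable[[dict], object],   # 2-D plan corresponding to the model (claim 9)
    manual_redraw: Callable[[object], object],      # manual drawing module of claim 7
    max_rounds: int = 3,                            # assumed bound so the sketch terminates
) -> dict:
    model = generate_model(wall_record)
    for _ in range(max_rounds):
        if user_satisfied(model):                   # model confirmation module
            return model
        # Claim 9: hand the 2-D plan of the rejected model back to the manual drawing
        # module so walls, doors and windows can be redrawn, then regenerate the model.
        two_d = to_two_dimensional(model)
        wall_record = manual_redraw(two_d)
        model = generate_model(wall_record)
    return model
```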
CN202010468928.8A 2020-05-28 2020-05-28 Scale identification method and three-dimensional model building system based on copy Active CN113742810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010468928.8A CN113742810B (en) 2020-05-28 2020-05-28 Scale identification method and three-dimensional model building system based on copy

Publications (2)

Publication Number Publication Date
CN113742810A (en) 2021-12-03
CN113742810B (en) 2023-08-15

Family

ID=78724162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010468928.8A Active CN113742810B (en) 2020-05-28 2020-05-28 Scale identification method and three-dimensional model building system based on copy

Country Status (1)

Country Link
CN (1) CN113742810B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5553209A (en) * 1994-01-28 1996-09-03 Hughes Aircraft Company Method for automatically displaying map symbols
JPH0868645A (en) * 1994-08-29 1996-03-12 Nissan Motor Co Ltd Navigation system
CN102013205A (en) * 2010-11-30 2011-04-13 百度在线网络技术(北京)有限公司 Electronic map marker rendering method and device
US10445569B1 (en) * 2016-08-30 2019-10-15 A9.Com, Inc. Combination of heterogeneous recognizer for image-based character recognition
CN107958064A (en) * 2017-12-04 2018-04-24 携程旅游网络技术(上海)有限公司 The method, apparatus of map displaying Flight Information, electronic equipment, storage medium
CN108763606A (en) * 2018-03-12 2018-11-06 江苏艾佳家居用品有限公司 A kind of floor plan element extraction method and system based on machine vision
CN108804815A (en) * 2018-06-08 2018-11-13 杭州群核信息技术有限公司 A kind of method and apparatus assisting in identifying wall in CAD based on deep learning
CN109145171A (en) * 2018-07-23 2019-01-04 广州市城市规划勘测设计研究院 A kind of multiple dimensioned map data updating method
CN110032938A (en) * 2019-03-12 2019-07-19 北京汉王数字科技有限公司 A kind of Tibetan language recognition method, device and electronic equipment
CN110414477A (en) * 2019-08-06 2019-11-05 广东三维家信息科技有限公司 Image scale detection method and device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882306A (en) * 2022-04-06 2022-08-09 国家基础地理信息中心 Topographic map scale identification method and device, storage medium and electronic equipment
CN114882306B (en) * 2022-04-06 2023-08-18 国家基础地理信息中心 Topography scale identification method and device, storage medium and electronic equipment
CN114742881A (en) * 2022-05-16 2022-07-12 佛山欧神诺云商科技有限公司 2D house type graph actual proportion calculation method, device, system and storage medium
CN115238368A (en) * 2022-09-21 2022-10-25 中南大学 Pier drawing identification automatic modeling method and medium based on computer vision
CN116343253A (en) * 2023-03-13 2023-06-27 苏州威视通智能科技有限公司 CAD drawing length unit and pixel value proportion identification, acquisition and calculation method

Similar Documents

Publication Publication Date Title
CN113742810B (en) Scale identification method and three-dimensional model building system based on copy
Chang et al. Matterport3d: Learning from rgb-d data in indoor environments
Macher et al. From point clouds to building information models: 3D semi-automatic reconstruction of indoors of existing buildings
CN111091538B (en) Automatic identification and defect detection method and device for pipeline welding seams
Ham et al. An automated vision-based method for rapid 3D energy performance modeling of existing buildings using thermal and digital imagery
CA3157926A1 (en) Systems and methods for building a virtual representation of a location
Rashidi et al. Generating absolute-scale point cloud data of built infrastructure scenes using a monocular camera setting
Fichtner et al. Semantic enrichment of octree structured point clouds for multi‐story 3D pathfinding
US11989848B2 (en) Browser optimized interactive electronic model based determination of attributes of a structure
JP6781432B2 (en) Radio wave propagation simulation model creation method, creation system, creation device and creation program
CN102439605A (en) Apparatus and method for identifying creator of work of art
WO2022247823A1 (en) Image detection method, and device and storage medium
US20230035477A1 (en) Method and device for depth map completion
Tarsha Kurdi et al. Comparison of LiDAR building point cloud with reference model for deep comprehension of cloud structure
CN113744350B (en) Cabinet structure identification method, device, equipment and medium based on single image
CN117132564A (en) YOLOv 3-based sapphire substrate surface defect detection method and system
CN114332741B (en) Video detection method and system for building digital twins
Bacharidis et al. Fusing georeferenced and stereoscopic image data for 3D building Facade reconstruction
US20230290090A1 (en) Searchable object location information
CN114114457B (en) Fracture characterization method, device and equipment based on multi-modal logging data
Obrock et al. First steps to automated interior reconstruction from semantically enriched point clouds and imagery
JP3645404B2 (en) Method and apparatus for recognizing door from architectural drawing
CN111914717B (en) Data entry method and device based on meter reading data intelligent identification
JP3679241B2 (en) Construction drawing recognition method and recognition apparatus
JP3961595B2 (en) Architectural drawing recognition method and recognition device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant