CN113781507A - Graph reconstruction method, graph reconstruction device, computing equipment and computer storage medium - Google Patents

Info

Publication number
CN113781507A
CN113781507A (application CN202111040318.9A)
Authority
CN
China
Prior art keywords
graph
target
line segment
line segments
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111040318.9A
Other languages
Chinese (zh)
Other versions
CN113781507B (en)
Inventor
吴宏和
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruijie Networks Co Ltd
Original Assignee
Ruijie Networks Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ruijie Networks Co Ltd filed Critical Ruijie Networks Co Ltd
Priority to CN202111040318.9A priority Critical patent/CN113781507B/en
Publication of CN113781507A publication Critical patent/CN113781507A/en
Application granted granted Critical
Publication of CN113781507B publication Critical patent/CN113781507B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06F 18/23 Clustering techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present application provides a graph reconstruction method, a graph reconstruction device, a computing device and a computer storage medium. A target area where a target graph is located is extracted from an original image; line segment detection is performed in the target area to determine the edge line segments and inner line segments belonging to the target graph; the edge line segments and inner line segments are stitched to generate a graph to be recognized; the graph category of the graph to be recognized is determined; and the target graph is reconstructed based on the graph to be recognized and the graph category. The embodiment of the application thereby improves the reconstruction precision of the target graph.

Description

Graph reconstruction method, graph reconstruction device, computing equipment and computer storage medium
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a graph reconstruction method, a graph reconstruction device, a computing device and a computer storage medium.
Background
Geometric figure reconstruction is used in many fields. In teaching, for example, teachers need to reconstruct the geometric figures printed on a test paper inside a teaching tool by means of geometric figure reconstruction technology.
However, current geometric reconstruction techniques can only identify the category of a geometric figure and then reconstruct a generic figure of that category; they cannot reproduce the specific shape of the original figure, so the angles of the reconstructed figure may differ from the original. For example, a right triangle may be recognized merely as a "triangle" and reconstructed as an isosceles triangle: both belong to the triangle category, but their angles differ.
That is, existing geometric figure recognition and reconstruction techniques have one main defect: they cannot simultaneously detect the category of the geometric figure in an image and reconstruct its specific shape.
Disclosure of Invention
The embodiment of the application provides a graph reconstruction method, a graph reconstruction device, a computing device and a computer storage medium, which can not only identify the graph type of a target graph, but also reconstruct the specific shape of the target graph, and improve the reconstruction precision of the target graph.
In a first aspect, an embodiment of the present application provides a graph reconstruction method, including:
extracting a target area where a target graph is located from an original image;
performing line segment detection in the target area, and determining an edge line segment and an inner line segment which belong to the target graph;
splicing the edge line segments and the internal line segments to generate a graph to be identified;
determining the graph category of the graph to be recognized;
and reconstructing the target graph based on the graph to be recognized and the graph category.
Optionally, the extracting, from the original image, a target region where the target graph is located includes:
determining a region containing the target graph, selected by a user from the original image, as the target region; or,
fitting a plurality of candidate regions in the original image through an active contour algorithm, and taking the candidate region with the largest area as the target region.
Optionally, performing line segment detection in the target area, and determining an edge line segment and an inner line segment belonging to the target graph, includes:
performing line segment detection in the target area to obtain a plurality of candidate line segments;
calculating the intersection area formed after the extension line of each candidate line segment is intersected with the target area;
and if the intersection area is smaller than a preset area value, determining the candidate line segment as an edge line segment, and determining the remaining candidate line segments as internal line segments.
Optionally, the generating a graph to be recognized by splicing the edge line segments and the inner line segments includes:
determining the intersection point between every two edge line segments by extending the edge line segments, connecting the end point of each edge line segment with the nearest intersection point in sequence, and taking a plurality of intersection points generated among a plurality of edge line segments as the vertexes of the edge polygon to generate the edge polygon;
and connecting the internal line segment with the edge polygon to generate a graph to be identified.
Optionally, the connecting the inner line segment with the edge polygon to generate a graph to be recognized includes:
sequentially connecting, for each internal line segment, the endpoint closest to a vertex of the edge polygon with that nearest vertex;
determining the intersection point between every two internal line segments by extending the internal line segments, and clustering the determined intersection points to determine a convergence point of the internal line segments;
and sequentially connecting the other end point of each internal line segment with the corresponding convergence point to generate the graph to be identified.
Optionally, the determining the graphic category of the graphic to be recognized includes:
determining a topological structure of the graph to be identified, wherein the topological structure comprises the number of vertexes contained in the graph to be identified, the number of edges of each vertex and the adjacent relation between the vertexes;
matching the topological structure of the graph to be recognized with the topological structures of a plurality of geometric figures in a pre-established graph topology library; and if the topological structure of the graph to be recognized is the same as the topological structure of a geometric figure, determining the graph category of that geometric figure as the graph category of the graph to be recognized.
Optionally, before the matching the topological structure corresponding to the to-be-identified graph with the topological structures of a plurality of geometric graphs in a pre-established graph topological structure library and determining the graph category of the to-be-identified graph, the method further includes:
and determining the topological structure and the graph type corresponding to each geometric figure, and establishing a graph topological structure library based on the plurality of geometric figures and the corresponding relation between the graph type and the topological structure corresponding to each geometric figure.
Optionally, the reconstructing the target graph based on the graph to be recognized and the graph category includes:
if the graph category of the graph to be identified comprises a two-dimensional geometric graph, reconstructing the target graph according to the obtained relative length of each side of the graph to be identified; or,
and if the graph type of the graph to be identified comprises a three-dimensional geometric graph, taking the geometric graph corresponding to the graph type as a target graph.
Optionally, the method further comprises:
and identifying marking information near each vertex of the target graph, and determining the relative position relation between the marking information and each vertex.
Optionally, after reconstructing the target graph based on the graph to be recognized and the graph category, the method further includes:
and adding corresponding labeling information at each vertex of the reconstructed target graph based on the relative position relation between the labeling information of the target graph before reconstruction and each vertex.
In a second aspect, an embodiment of the present application provides a graphics reconstruction apparatus, including:
the extraction module is used for extracting a target area where a target graph is located from an original image;
the determining module is used for detecting line segments in the target area and determining edge line segments and internal line segments belonging to the target graph;
the splicing module is used for splicing the edge line segments and the internal line segments to generate a graph to be identified;
the determining module is further used for determining the graph category of the graph to be identified;
and the establishing module is used for reconstructing the target graph based on the graph to be recognized and the graph category.
In a third aspect, an embodiment of the present application provides a computing device, comprising a processing component and a storage component; the storage component stores one or more computer instructions; and the one or more computer instructions, when invoked and executed by the processing component, implement the graph reconstruction method described in the first aspect above.
In a fourth aspect, an embodiment of the present application provides a computer storage medium storing a computer program, where the computer program is executed by a computer to implement the graph reconstructing method according to the first aspect.
These functions may be implemented in hardware, or in hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In the embodiment of the application, a target area where a target graph is located is extracted from an original image; line segment detection is performed in the target area to determine the edge line segments and inner line segments belonging to the target graph; the edge line segments and inner line segments are stitched to generate a graph to be recognized; the graph category of the graph to be recognized is determined; and the target graph is reconstructed based on the graph to be recognized and the graph category. In this way the graph category of the target graph can be recognized and its specific shape can be reconstructed, which improves the reconstruction precision of the target graph.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1a and fig. 1b are schematic diagrams illustrating a graph reconstruction effect according to an embodiment of the present application;
fig. 2 is a flowchart of an embodiment of a graph reconstruction method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of extracting a target region according to an embodiment of the present disclosure;
fig. 4 is a flowchart of stitching a pattern to be recognized according to an embodiment of the present disclosure;
fig. 5 is a flowchart of another embodiment of a graph reconstruction method according to an embodiment of the present application;
fig. 6 is a schematic diagram of extracting a target region according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of another extraction target area provided in the embodiment of the present application;
FIG. 8 is a diagram illustrating an embodiment of obtaining a plurality of candidate segments;
FIG. 9 is a schematic diagram of a calculation of intersection area according to an embodiment of the present application;
FIG. 10 is a schematic diagram of another calculation of intersection area provided by embodiments of the present application;
FIG. 11 is a schematic diagram of an embodiment of the present application for generating edge polygons;
fig. 12 is a schematic diagram of stitching a pattern to be recognized according to an embodiment of the present application;
fig. 13 is a schematic diagram of another splicing of patterns to be recognized according to an embodiment of the present application;
fig. 14 is a schematic diagram illustrating adding the annotation information according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a graph reconstruction apparatus according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Some of the flows described in the specification, claims and drawings of this application include operations that appear in a particular order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein, or in parallel. The operation numbers, such as 101 and 102, are merely used to distinguish different operations and do not themselves represent any order of execution. In addition, the flows may include more or fewer operations, and these operations may be performed sequentially or in parallel. It should also be noted that the terms "first", "second" and the like herein are used to distinguish different messages, devices, modules and so on; they do not represent a sequential order, nor do they require that "first" and "second" be of different types.
Before describing the graph reconstruction method of the present application, the current graph reconstruction technology and the existing defects are briefly described:
the existing image reconstruction technology generally adopts a mode of carrying out edge detection on an original image and then carrying out polygon approximation on an extracted outline so as to determine a target image; or a geometric figure classification mode based on deep learning is adopted to determine the figure class of the target figure. However, in the first method, it is not possible to identify whether the target graphic in the original image is a planar graphic or a stereoscopic graphic, and it is not possible to process interference of labeling information of a dotted line, a letter, and the like in the original image, which easily causes a problem that the dotted line and the letter are identified as a line segment, so that detection of the contour where the target graphic is located is erroneous. In the second method, the specific shape of the target pattern cannot be reconstructed. For example, the target pattern is identified as a trapezoid, but the degrees of each angle of the trapezoid cannot be reconstructed. That is, current graphics reconstruction techniques do not achieve two goals simultaneously or well: detecting the graph type of the target graph in the original image and reconstructing the specific shape of the target graph.
The method and the device can solve the problem of identification and reconstruction of the target graph (which can comprise a plane graph or a three-dimensional graph) in the original image, and not only can detect the graph type of the target graph in the original image, but also can reconstruct the specific shape of the target graph.
The graph reconstruction technology can be applied to various fields (such as teaching fields and the like), and convenience and accuracy of reconstructing the target graph can be improved through the graph reconstruction technology.
For example, as shown in fig. 1a and 1b, fig. 1a is an original image, the original image 1 includes a target graph (pyramid), and fig. 1b is an output result of the graph reconstruction method according to the present application. By the graph reconstruction method provided by the application, the target graph in the original image 1 can be reconstructed to obtain the target graph shown in fig. 1b, and the target graph of fig. 1b is output.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 2 is a flowchart of an embodiment of a graph reconstruction method according to an embodiment of the present application, and as shown in fig. 2, the method includes:
101. and extracting a target area where the target graph is located from the original image.
In this step, the original image is an image containing the target figure. For example, if a target figure to be reconstructed is included in the test paper, the page of the test paper is the original image, and the region defined by the target figure is the target region.
In the embodiment of the present application, as shown in fig. 3, a test paper 1 contains a target graphic (a pyramid) to be reconstructed and labeling information 3 related to the target graphic, where the labeling information includes letters such as J (the letters O, P and H are not shown) as well as the characters "fig. 1" and "……" shown in fig. 1. In this embodiment, the test paper page 1 is the original image; an area 2 is delineated around the target graphic in a user-defined manner and used as the target area. The target area can be understood as the area containing the target graphic, and extracting it makes it easier to determine the line segments of the target graphic in subsequent steps.
In practical application, the target area may be determined in a manner of manual delineation by a user, or in a manner of recognition by an active contour recognition algorithm (such as a geodesic active contour algorithm), which is not limited in the embodiment of the present application.
102. And performing line segment detection in the target area, and determining an edge line segment and an inner line segment which belong to the target graph.
In this step, a line segment detection algorithm may be adopted to perform line segment detection in the target area, and after determining a line segment belonging to the target graph, the line segment is divided into an edge line segment and an inner line segment.
In the embodiment of the application, when determining the edge line segments and inner line segments belonging to the target graph, the line segments belonging to the target graph are obtained first and then divided into edge line segments and inner line segments. The line segments include straight line segments and curved segments, and each kind can be recognized with a corresponding detection algorithm: in the target area, candidate straight line segments are detected with a line detection algorithm (e.g. the probabilistic Hough line detection algorithm), and candidate curved segments are detected with a curve detection algorithm (e.g. one based on RGB images or 3-dimensional point cloud data). The division relies on the observation that an edge line segment is usually the segment closest to the boundary of the target area, so whether a segment is an edge line segment can be judged by calculating the intersection area formed between that segment's extension line and the target area; an inner line segment is usually farther from the boundary, so once the edge line segments are determined, the remaining segments can be taken as inner line segments.
In the embodiment of the application, through the step 102, only the line segments belonging to the target graph in the target area can be identified and detected, and other information (such as letters and characters) in the target area is prevented from being detected and identified, so that the accuracy of the subsequently generated graph to be identified is ensured.
103. And splicing the edge line segments and the internal line segments to generate a graph to be identified.
In this step, the figure to be recognized is actually a target figure including only line segments (i.e. not including the labeling information of characters, letters, etc. in the target figure). In order to distinguish from the target figure (including the figure and the labeling information of the characters, letters and the like in the target figure), and further identify the figure class of the target figure, the target figure only including line segments is called the figure to be identified.
In the embodiment of the application, as shown in the schematic diagram of fig. 4, the line segments identified by the target graph are in an unconnected state, and a complete graph cannot be presented (the graph is a fully closed graph in general), so that the edge line segments and the inner line segments need to be connected and spliced to form the graph a to be identified. For the line segment connection, see the specific implementation process of step 203-step 205 in the following embodiments.
104. And determining the graph category of the graph to be recognized.
In this step, the graphic category may include two-dimensional geometric figures or three-dimensional geometric figures, wherein the two-dimensional geometric figures (i.e., planar figures) may include triangles, squares, rectangles, or the like. Three-dimensional geometric figures (i.e., solid figures) may include pyramids, triangular prisms, cuboids, and the like.
In the embodiment of the present application, there are various ways to determine the graph category. One way is to establish a graph topology library containing correspondences of the form "geometric figure - graph category - topological structure". When a graph to be recognized is obtained, the library is queried for a geometric figure whose topological structure is the same as that of the graph to be recognized, and the graph category of that geometric figure is taken as the graph category of the graph to be recognized. Alternatively, a recognition model can be trained by deep learning on a number of geometric figures and their graph categories, and the trained model is then used to recognize the graph category of the graph to be recognized.
105. And reconstructing the target graph based on the graph to be recognized and the graph category.
In the embodiment of the application, the modes of reconstructing the graphs of different graph types are different. For example, for a pattern to be recognized of a two-dimensional geometric pattern, the target pattern may be reconstructed according to the relative length of each side of the pattern to be recognized. That is, the reconstructed target pattern may be output after being enlarged or reduced in an equal ratio according to the relative length of each side. Because the relative length of each side of the graph to be recognized is based on reconstruction, the included angle degree of each side in the target graph cannot be influenced, and the accuracy of the output target graph is ensured.
In practical application, a calculation engine can be built on the basis of this graph reconstruction method. An original image containing a target graph is fed into the engine, the engine processes and recognizes it using the algorithms described here (such as the line segment detection and graph category recognition algorithms), and it outputs the name of the target graph contained in the original image together with its specific shape. A teacher can then display and interact with the output target graph in 2D/3D inside a teaching tool, improving the quality of interactive teaching.
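Purely as an illustration, the overall flow of such an engine could be organised as in the following Python sketch. The five callables are supplied by a concrete implementation; their names are hypothetical placeholders for steps 101 to 105, not a published API.

```python
def reconstruct_figure(original_image, extract, detect, stitch, classify, rebuild):
    """Hypothetical pipeline skeleton for steps 101-105 (illustration only).

    The five callables correspond to: region extraction, segment detection,
    segment stitching, topology-based classification and final reconstruction.
    """
    region = extract(original_image)          # step 101: locate the target area
    edge_segs, inner_segs = detect(region)    # step 102: edge vs. inner segments
    figure = stitch(edge_segs, inner_segs)    # step 103: graph to be recognized
    category = classify(figure)               # step 104: graph category
    return rebuild(figure, category)          # step 105: reconstructed target graph
```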
Fig. 5 is a flowchart of another embodiment of a graph reconstruction method provided in an embodiment of the present application, and as shown in fig. 5, the method includes:
201. and extracting a target area where the target graph is located from the original image.
In this embodiment of the present application, as an optional scheme, step 201 may include: determining a region containing the target graph, selected by the user from the original image, as the target region; or fitting a plurality of candidate regions in the original image through an active contour algorithm, and taking the candidate region with the largest area as the target region.
For example, as shown in fig. 6, the user defines a target area 2 from the original image 1, and the target area 2 includes a target graphic therein.
For example, as shown in fig. 7, a calculation engine built based on the graph reconstruction method of the present application can fit a plurality of candidate regions (e.g., candidate regions include candidate region 2, candidate region 3, and candidate region 4) by using an active contour algorithm, and select candidate region 2 with the largest area as the target region.
In addition, the target region where the target graphic is located may be extracted from the original image by other methods, which is not limited in this embodiment of the application.
It should be further noted that, as shown in fig. 6, when the target area is delineated manually by the user, labeling information around the target graphic may be enclosed as well. This not only makes the subsequent identification of the line segments belonging to the target graphic less accurate, but also increases the amount of line segment identification work. Therefore, in the embodiment of the present application, the target area may by default be determined with a contour recognition algorithm, which improves the accuracy of the subsequent identification of the line segments belonging to the target graphic.
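A minimal sketch of the largest-area selection rule is given below. It uses plain OpenCV contour detection as a stand-in for the active contour (e.g. geodesic active contour) algorithm named above; the Canny thresholds and the OpenCV 4.x return signature are assumptions, not values from the text.

```python
import cv2

def extract_target_region(original_image):
    """Crop the candidate region with the largest area from the original image.

    Contour detection is used here as a stand-in for the active contour fitting
    described in the text; thresholds are illustrative and need tuning.
    """
    gray = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return original_image
    largest = max(contours, key=cv2.contourArea)  # candidate region with the largest area
    x, y, w, h = cv2.boundingRect(largest)
    return original_image[y:y + h, x:x + w]
```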
202. And performing line segment detection in the target area to acquire a plurality of candidate line segments.
In the embodiment of the present application, for example, as shown in fig. 8, for a target graph made up of straight lines, the probabilistic Hough line detection algorithm may be applied inside the target region to detect candidate straight line segments (a to f). For a target graph containing curves, such as a sector, an ellipse or a circle, candidate curve segments can be detected with a curve detection algorithm; since the principle is the same, namely detecting line segments, it is not described in detail here.
In the embodiment of the application, the purpose of detecting candidate line segments first is to avoid recognizing labeling information such as dashed lines and letters as line segments, which would otherwise degrade the accuracy of the generated graph to be recognized.
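For the straight-line case, a sketch of the probabilistic Hough detection mentioned above follows; the threshold, minLineLength and maxLineGap values are illustrative assumptions that would need tuning per image.

```python
import cv2
import numpy as np

def detect_candidate_segments(region):
    """Detect candidate straight line segments with the probabilistic Hough transform."""
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # minLineLength helps reject short strokes such as letters and dashed-line dots;
    # all numeric parameters here are illustrative defaults.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]  # (x1, y1, x2, y2) tuples
```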
203. And determining an edge line segment and an inner line segment of the target graph from the plurality of candidate line segments.
In this embodiment of the present application, as an optional scheme, step 203 may include:
2031. and calculating the intersection area formed after the extension line of each candidate line segment is intersected with the target area.
In this step, as shown in fig. 9, taking the candidate line segment a as an example, the intersection area S1 (the vertically hatched region in fig. 9) formed after the straight line on which the candidate segment lies (its extension line) intersects the target area is determined.
2032. And if the intersection area is smaller than a preset area value, determining the candidate line segment as an edge line segment, and determining the remaining candidate line segments as internal line segments.
In this step, it is judged whether the intersection area is smaller than a preset area value; if so, the candidate line segment is determined to be an edge line segment, and the remaining candidate line segments are determined to be inner line segments. The preset area value may be set as required. Taking pixels as the unit, for example, if the target region occupies 1000 px, the preset area value may be set to 100 px; if the intersection area S occupies fewer than 100 px, the candidate segment is an edge line segment, and the remaining candidate segments are determined to be inner line segments.
Verifying the inner line segment d: as shown in fig. 10, the intersection area S2 (the vertically hatched region in fig. 10) formed by the intersection of the straight line on which segment d lies (its extension line) with the target area is one half of the target area 2. The condition that the intersection area is smaller than the preset area value is therefore not met, so segment d is an inner line segment.
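As an illustration of this rule only, the sketch below uses shapely (2.x assumed) to cut the target region with each segment's extension line and compares the smaller resulting piece against a preset area value; the 10% ratio and the fixed extension factor are assumptions, not values from the text.

```python
from shapely.geometry import LineString, box
from shapely.ops import split

def classify_segments(segments, region_w, region_h, area_ratio=0.1):
    """Split candidate segments into edge segments and inner segments.

    A segment's extension line cuts the target region in two; if the smaller
    piece is below the preset area value, the segment hugs the region border
    and is treated as an edge segment (the 10% ratio is an assumed default).
    """
    region = box(0, 0, region_w, region_h)
    preset_area = area_ratio * region.area
    edge_segments, inner_segments = [], []
    for x1, y1, x2, y2 in segments:
        dx, dy = x2 - x1, y2 - y1
        # Extend the segment far beyond the region so its line fully crosses it.
        line = LineString([(x1 - 1000 * dx, y1 - 1000 * dy),
                           (x2 + 1000 * dx, y2 + 1000 * dy)])
        pieces = split(region, line)
        smallest = min(p.area for p in pieces.geoms)
        if smallest < preset_area:
            edge_segments.append((x1, y1, x2, y2))
        else:
            inner_segments.append((x1, y1, x2, y2))
    return edge_segments, inner_segments
```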
204. The intersection point between every two edge line segments is determined by extending the edge line segments, the end point of each edge line segment is connected with the nearest intersection point in sequence, and a plurality of intersection points generated among a plurality of edge line segments are used as the vertexes of the edge polygon, so that the edge polygon is generated.
In this embodiment of the present application, as an optional scheme, step 204 may include: and determining the intersection point between every two edge line segments by extending the edge line segments, and sequentially connecting the end point of each edge line segment with the nearest intersection point to generate an edge polygon.
In this step, as shown in fig. 11, by extending the edge line segments a to c, an intersection Q2 between the edge line segment a and the edge line segment b, an intersection Q1 between the edge line segment a and the edge line segment c, and an intersection Q3 between the edge line segment b and the edge line segment c are determined. The edge polygon is generated by connecting the end point a1 of the edge line segment a and the end point b1 of the edge line segment b to the intersection point Q2, the end point a2 of the edge line segment a and the end point c2 of the edge line segment c to the intersection point Q1, and the end point b2 of the edge line segment b and the end point c1 of the edge line segment c to the intersection point Q3 in this order.
Further, before the edge polygon is generated, the intersections generated between the edge line segments also need to be taken as the vertices of the edge polygon.
For example, as shown in fig. 11, the intersections Q1, Q2, Q3 are set as vertices of the edge polygon Q.
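A sketch of the pairwise-intersection step follows; it uses the standard infinite-line intersection formula, and in practice only intersections close to the segment endpoints would be kept as polygon vertices (that filtering is omitted here for brevity).

```python
def line_intersection(seg1, seg2):
    """Intersection of the infinite lines through two segments, or None if parallel."""
    (x1, y1, x2, y2), (x3, y3, x4, y4) = seg1, seg2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None  # parallel (or coincident) lines: no single intersection
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return px, py

def edge_polygon_vertices(edge_segments):
    """Pairwise intersections of extended edge segments, used as edge-polygon vertices."""
    vertices = []
    for i in range(len(edge_segments)):
        for j in range(i + 1, len(edge_segments)):
            p = line_intersection(edge_segments[i], edge_segments[j])
            if p is not None:
                vertices.append(p)
    return vertices
```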
205. And connecting the internal line segment with the edge polygon to generate a graph to be identified.
In this embodiment of the present application, as an optional scheme, step 205 may include:
2051. and sequentially connecting the endpoint closest to the vertex of the edge polygon in each internal line segment with the vertex closest to the edge polygon.
In this step, as shown in fig. 12, the end point d1 of the inner line segment d is closest to the vertex Q1 of the edge polygon (as shown in fig. 12, the edge polygon has 3 vertices, Q1, Q2, and Q3), the end point e1 of the inner line segment e is closest to the vertex Q2 of the edge polygon, and the end point f1 of the inner line segment f is closest to the vertex Q3 of the edge polygon, so that the end point d1 is connected to the vertex Q1, the end point e1 is connected to the vertex Q2, and the end point f1 is connected to the vertex Q3.
2052. And determining the intersection point between every two internal line segments by extending the internal line segments, and clustering the determined intersection points to determine the convergence point of the internal line segments.
In this step, as shown in fig. 12, by extending the internal segments d to f, the intersection between segment d and segment e, the intersection between segment e and segment f, and the intersection between segment f and segment d are determined (none of these intersections is shown in the figure). Because small errors may arise when identifying candidate segments, these pairwise intersections of segments d to f may not fall on exactly the same coordinate point; therefore the determined intersections (d with e, e with f, and f with d) need to be clustered to determine the convergence point Q4 of the internal segments.
2053. And sequentially connecting the other end point of each internal line segment with the corresponding convergence point to generate the graph to be identified.
In this step, as shown in fig. 12, the other end point d2 of the inner line segment d is connected to the point of convergence Q4, the other end point e2 of the inner line segment e is connected to the point of convergence Q4, and the other end point f2 of the inner line segment f is connected to the point of convergence Q4 in sequence, and a pattern a to be recognized is generated according to the connection operations in steps 2051 to 2053, where Q1 to Q4 are 4 vertices of the pattern a to be recognized.
It should be noted that, by clustering the intersection points of the plurality of internal line segments, the accuracy of the graph to be recognized can be improved.
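A minimal sketch of that clustering step: the pairwise intersections of the extended inner segments are averaged into a single convergence point. A plain centroid is used here as a stand-in for whatever clustering method an implementation might choose, and `line_intersection` is the helper from the earlier sketch.

```python
import numpy as np

def convergence_point(inner_segments):
    """Merge the pairwise intersections of extended inner segments into one point.

    Detection noise means the intersections rarely coincide exactly, so their
    centroid is taken as the single convergence point (a simple stand-in for
    the clustering described above).
    """
    points = []
    for i in range(len(inner_segments)):
        for j in range(i + 1, len(inner_segments)):
            p = line_intersection(inner_segments[i], inner_segments[j])  # helper from the earlier sketch
            if p is not None:
                points.append(p)
    return tuple(np.mean(points, axis=0)) if points else None
```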
206. And determining the topological structure of the graph to be identified, wherein the topological structure comprises the number of vertexes contained in the graph to be identified, the number of edges of each vertex and the adjacent relation among the vertexes.
In this step, taking the figure to be recognized as the pyramid shown in fig. 12 as an example, the topological structure of the pyramid includes: the number of vertices is 4 (Q1, Q2, Q3, Q4), the number of edges at each of Q1, Q2, Q3 and Q4 is 3, and Q1, Q2, Q3 and Q4 are all adjacent to one another.
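As an illustration, this topology can be written down as a small graph structure; the sketch below uses networkx (an assumption about tooling, not part of the described method) and encodes the pyramid of fig. 12, which is the complete graph on four vertices.

```python
import networkx as nx

# The pyramid of fig. 12: 4 vertices Q1..Q4, each of degree 3, every pair adjacent (K4).
pyramid = nx.Graph()
pyramid.add_edges_from([("Q1", "Q2"), ("Q1", "Q3"), ("Q1", "Q4"),
                        ("Q2", "Q3"), ("Q2", "Q4"), ("Q3", "Q4")])

topology = {
    "vertex_count": pyramid.number_of_nodes(),                  # 4
    "degrees": dict(pyramid.degree()),                          # every vertex has 3 edges
    "adjacency": {v: set(pyramid.neighbors(v)) for v in pyramid},
}
```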
207. Matching the topological structure of the graph to be identified with the topological structures of a plurality of geometric graphs in a pre-established graph topological structure library, and determining the graph type of the geometric graph as the graph type of the graph to be identified if the topological structure of the graph to be identified is the same as the topological structure of the geometric graph.
In this step, before step 207 is executed, it is necessary to determine a topology and a graph category corresponding to each geometric figure, and establish a graph topology library based on a plurality of geometric figures and a correspondence between the graph category and the topology corresponding to each geometric figure. That is, the graphic topology structure library includes a plurality of corresponding relations, and one corresponding relation includes a geometric figure, a graphic type and a topology structure corresponding to the geometric figure. By establishing the graph topological structure library, the graph type of the graph to be identified can be quickly identified, and the identification efficiency of the graph type is improved.
In the embodiment of the application, the graph topological structure library comprises various corresponding relations of geometry-graph category-topological structure, so that when the graph to be identified is obtained, the topological structure of the geometry which is the same as the topological structure of the graph to be identified can be inquired in the graph topological structure library based on the topological structure of the graph to be identified, and the graph category corresponding to the geometry is determined as the graph category of the graph to be identified.
Furthermore, as a preferred scheme, geometric figures having the same number of vertices as the figure to be recognized can first be screened out of the graph topology library, and the topological structure of the figure to be recognized is then matched only against the topological structures of the screened figures. This reduces the matching scale, greatly speeds up matching, and further improves the efficiency of graph category recognition.
It should be noted that, besides improving the efficiency of graph category recognition, the graph topology library is also easy to extend: when a new kind of target graph needs to be recognized, only its topological structure needs to be added to the library, and the recognition flow for the target graph does not need to change. Compared with current deep learning algorithms, this removes a large amount of training work.
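A sketch of the library lookup, again using networkx as an assumed tool: the pre-filter on vertex count narrows the candidates, and graph isomorphism then plays the role of the "same topological structure" test. The library entries below are examples only, not the library contents from the patent.

```python
import networkx as nx

def _reference(edges):
    g = nx.Graph()
    g.add_edges_from(edges)
    return g

# Illustrative graph topology library: graph category -> reference topology.
TOPOLOGY_LIBRARY = {
    "triangle": _reference([(0, 1), (1, 2), (2, 0)]),
    "quadrilateral": _reference([(0, 1), (1, 2), (2, 3), (3, 0)]),
    "pyramid": _reference([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]),  # K4
}

def classify_by_topology(figure_graph):
    """Return the graph category whose stored topology matches the figure to be recognized."""
    n = figure_graph.number_of_nodes()
    for category, ref in TOPOLOGY_LIBRARY.items():
        # Pre-filter by vertex count to shrink the matching scale, then test
        # structural equality via graph isomorphism.
        if ref.number_of_nodes() == n and nx.is_isomorphic(figure_graph, ref):
            return category
    return None
```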
208. And reconstructing the target graph based on the graph to be recognized and the graph category.
In this embodiment of the present application, as an optional scheme, step 208 may include: if the graph category of the graph to be identified comprises a two-dimensional geometric graph, reconstructing the target graph according to the obtained relative length of each side of the graph to be identified; or if the figure type of the figure to be recognized comprises a three-dimensional geometric figure, taking the geometric figure corresponding to the figure type as a target figure.
Specifically, for a two-dimensional geometric figure, the target figure can be reconstructed by enlarging or reducing the figure to be recognized in equal proportion according to the relative lengths of its sides. For a three-dimensional geometric figure, if the length of a hidden edge (an edge whose length cannot be observed in the planar view of a solid figure) can be estimated in the original image, the figure is likewise enlarged or reduced in equal proportion according to the relative lengths of its edges to reconstruct the target figure. If the length of a hidden edge cannot be obtained from the original image (for example, an irregular cube, not all of whose edge lengths can be observed or estimated in the planar view), then the geometric figure in the graph topology library with the same graph category as the target figure is used as the target figure and output.
For example, as shown in fig. 13, in the case of a cube, the length of a side a of the cube obtained in an original image (an image is usually displayed in a planar view) is 100 pixels, and the length of a hidden side e can be estimated to be 100 pixels due to the characteristics of the cube (corresponding sides are the same), so that the cube can be reconstructed by performing scaling up or down on the basis of the relative lengths of the sides.
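A tiny sketch of the proportional-scaling idea for the two-dimensional case, assuming the vertices of the figure to be recognized are available as ordered (x, y) pixel coordinates; the target size of 200 pixels is an arbitrary illustrative choice.

```python
import numpy as np

def rescale_vertices(vertices, target_longest_side=200.0):
    """Uniformly scale a 2-D figure so relative side lengths (and hence angles) are preserved.

    `vertices` is an ordered list of (x, y) polygon vertices; the longest side is
    mapped to `target_longest_side` pixels and all other sides scale with it.
    """
    pts = np.asarray(vertices, dtype=float)
    sides = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)  # consecutive side lengths
    scale = target_longest_side / sides.max()
    return (pts - pts.mean(axis=0)) * scale  # centred, uniformly scaled copy
```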
Further, the method further comprises: and identifying marking information near each vertex of the target graph, and determining the relative position relation between the marking information and each vertex.
In this step, the label information may include text, letters, etc.
In the embodiment of the present application, the relative positional relationship between the labeling information and each vertex can be determined with coordinate positioning; for example, the relative positional relationship between the letter O and the vertex Q1 is the relative position of the coordinates of the letter O with respect to the coordinates of the vertex Q1. Other ways of determining it are also possible, and the embodiment of the present application is not limited in this respect.
Further, after the relative positional relationship between the labeling information and each vertex is determined, when the target graph is reconstructed, the corresponding labeling information can be added to each vertex of the target graph after reconstruction based on the relative positional relationship between the labeling information and each vertex of the target graph before reconstruction.
In the embodiment of the present application, as shown in fig. 14, labeling information is added at the four vertices (Q1 to Q4; these names are not actually shown in the figure and are used only for convenience of description). Specifically, the letter label of each vertex is added: the letter O at vertex Q1, the letter P at vertex Q2, the letter H at vertex Q3, and the letter J at vertex Q4. The labeling information is then displayed so that the user can directly see the letter label corresponding to each vertex.
In practical applications, as shown in fig. 1a, the original graph includes the target graph and the label information (vertex letter number O, P, J, H) of the target graph, and the target graph can be reconstructed by performing the graph reconstruction method described above in the present application. However, in order to further improve the user experience, the labeling information in the original graph may be correspondingly added to the reconstructed target graph based on the relative position relationship between the labeling information of the target graph before reconstruction and each vertex, so as to generate an effect diagram like the graph on the right side in fig. 14.
Furthermore, after the labeling information has been added at each vertex of the target graph, the target graph with its labeling information is output, which improves the user experience. When the target graph is subsequently rotated, the labeling information moves according to the positional relationship between each vertex and its label, which keeps the target graph accurate and avoids the labeling information becoming misaligned with the target graph because of the rotation.
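Purely as an illustration of this "relative position" bookkeeping, the sketch below stores each label as an offset from its vertex and re-applies that offset to the reconstructed vertices; the data layout is an assumption made for the example.

```python
import numpy as np

def attach_labels(old_vertices, new_vertices, labels):
    """Carry vertex labels (O, P, H, J, ...) over to the reconstructed figure.

    `labels` maps a label name to (label_xy, vertex_index). Each label keeps its
    offset relative to its original vertex, so after reconstruction (or rotation)
    it stays beside the same corner.
    """
    placed = {}
    for name, (label_xy, vertex_index) in labels.items():
        offset = np.asarray(label_xy, dtype=float) - np.asarray(old_vertices[vertex_index], dtype=float)
        placed[name] = tuple(np.asarray(new_vertices[vertex_index], dtype=float) + offset)
    return placed
```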
Fig. 15 is a schematic structural diagram of a graph reconstruction apparatus provided in an embodiment of the present application. As shown in fig. 15, the apparatus includes:
an extracting module 31, configured to extract, from an original image, a target area where a target graph is located;
a determining module 32, configured to perform line segment detection in the target area, and determine an edge line segment and an inner line segment that belong to the target graph;
the splicing module 33 is configured to splice the edge line segments and the internal line segments to generate a graph to be identified;
the determining module 32 is further configured to determine a pattern category of the pattern to be recognized;
and the establishing module 34 is used for reconstructing the target graph based on the graph to be recognized and the graph category.
Optionally, in this embodiment of the present application, the extraction module 31 of the apparatus is specifically configured to determine, as the target area, a region containing the target graphic that is selected by the user from the original image; or to fit a plurality of candidate regions in the original image through an active contour algorithm and take the candidate region with the largest area as the target region.
Optionally, in this embodiment of the present application, the determining module 32 of the apparatus is specifically configured to perform line segment detection in the target area, and obtain a plurality of candidate line segments; and determining an edge line segment and an inner line segment of the target graph from the plurality of candidate line segments.
Optionally, in this embodiment of the present application, the determining module 32 of the apparatus is specifically configured to calculate an intersection area formed after the extension line of each candidate line segment intersects with the target area; and if the intersection area is smaller than a preset area value, determining the candidate line segment as an edge line segment, and determining the remaining candidate line segments as internal line segments.
Optionally, in this embodiment of the present application, the splicing module 33 of the apparatus is specifically configured to determine an intersection point between every two edge line segments by extending the edge line segments, sequentially connect an end point of each edge line segment with a nearest intersection point, and use a plurality of intersection points generated between a plurality of edge line segments as vertices of an edge polygon, so as to generate the edge polygon; and connecting the internal line segment with the edge polygon to generate a graph to be identified.
Optionally, in this embodiment of the present application, the splicing module 33 of the apparatus is specifically configured to sequentially connect, for each internal line segment, the endpoint closest to a vertex of the edge polygon with that nearest vertex; determine the intersection point between every two internal line segments by extending the internal line segments, and cluster the determined intersection points to determine a convergence point of the internal line segments; and sequentially connect the other end point of each internal line segment with the corresponding convergence point to generate the graph to be recognized.
Optionally, in this embodiment of the present application, the determining module 32 of the apparatus is specifically configured to determine a topological structure of the graph to be recognized, where the topological structure includes the number of vertices included in the graph to be recognized, the number of edges of each vertex, and an adjacent relationship between the vertices; matching the topological structure of the graph to be identified with the topological structures of a plurality of geometric graphs in a pre-established graph topological structure library, and determining the graph type of the geometric graph as the graph type of the graph to be identified if the topological structure of the graph to be identified is the same as the topological structure of the geometric graph.
Optionally, in this embodiment of the present application, the establishing module 34 of the apparatus is specifically configured to determine a topology and a graph category corresponding to each geometric figure, and establish a graph topology library based on a plurality of geometric figures and a correspondence between the graph category and the topology corresponding to each geometric figure.
Optionally, in this embodiment of the present application, the establishing module 34 of the apparatus is specifically configured to, if the graph category of the graph to be recognized includes a two-dimensional geometric graph, reconstruct the target graph according to the obtained relative length of each side of the graph to be recognized; or if the figure type of the figure to be recognized comprises a three-dimensional geometric figure, taking the geometric figure corresponding to the figure type as a target figure.
Optionally, in this embodiment of the application, the determining module 32 of the apparatus is specifically configured to identify labeling information near each vertex of the target graph, and determine a relative position relationship between the labeling information and each vertex.
Optionally, in this embodiment of the present application, the apparatus further includes an adding module 35.
The adding module 35 is configured to add corresponding labeling information at each vertex of the reconstructed target graph based on the relative position relationship between the labeling information of the target graph before reconstruction and each vertex.
The graph reconstruction apparatus shown in fig. 15 may perform the graph reconstruction method of the embodiment shown in fig. 5; its implementation principle and technical effects are similar and are not repeated here. The specific manner in which each module and unit of the graph reconstruction apparatus performs operations has been described in detail in the method embodiments above and is not described again here.
In one possible design, the image reconstruction apparatus of the embodiment shown in fig. 15 may be implemented as a computing device, and in practical applications, as shown in fig. 16, the computing device may include a storage component 401 and a processing component 402;
one or more computer instructions are stored in the storage component 401, wherein the one or more computer instructions are invoked by the processing component 402 to be executed to implement the graph reconstructing method according to the embodiment of fig. 2 or fig. 5.
Among other things, the processing component 402 may include one or more processors to execute computer instructions to perform all or some of the steps of the methods described above. Of course, the processing elements may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components configured to perform the graphics reconstruction methods described in the embodiments of fig. 2 or 5 above.
The storage component 401 is configured to store various types of data to support operations at the terminal. The memory components may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Of course, a computing device may also necessarily include other components, such as input/output interfaces, communication components, and so forth. The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc. The communication component is configured to facilitate wired or wireless communication between the computing device and other devices, and the like.
The embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a computer, the graph reconstruction method according to the embodiment shown in fig. 2 or fig. 5 may be implemented.
It is clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working processes of the systems, apparatuses and units described above, and the details are not repeated here.
The apparatus embodiments described above are merely illustrative. Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment, and one of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. Based on this understanding, the above technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disk, and which includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (13)

1. A graph reconstruction method, comprising:
extracting a target area where a target graph is located from an original image;
performing line segment detection in the target area, and determining edge line segments and internal line segments which belong to the target graph;
splicing the edge line segments and the internal line segments to generate a graph to be recognized;
determining the graph category of the graph to be recognized;
and reconstructing the target graph based on the graph to be recognized and the graph category.
2. The method according to claim 1, wherein extracting the target area where the target graph is located from the original image comprises:
determining a region that is selected by a user from the original image and contains the target graph as the target area; or,
fitting a plurality of candidate regions in the original image through an active contour algorithm, and taking the candidate region with the largest area as the target area.
3. The method according to claim 1, wherein performing line segment detection in the target area and determining the edge line segments and the internal line segments which belong to the target graph comprises:
performing line segment detection in the target area to obtain a plurality of candidate line segments;
calculating, for each candidate line segment, the intersection area formed after the extension line of the candidate line segment intersects the target area;
and if the intersection area is smaller than a preset area value, determining the candidate line segment as an edge line segment, and determining the remaining candidate line segments as internal line segments.
4. The method according to claim 1, wherein splicing the edge line segments and the internal line segments to generate the graph to be recognized comprises:
determining an intersection point between every two edge line segments by extending the edge line segments, connecting the endpoint of each edge line segment with the nearest intersection point in sequence, and taking the plurality of intersection points generated among the plurality of edge line segments as the vertices of an edge polygon to generate the edge polygon;
and connecting the internal line segments with the edge polygon to generate the graph to be recognized.
5. The method according to claim 4, wherein connecting the internal line segments with the edge polygon to generate the graph to be recognized comprises:
sequentially connecting, for each internal line segment, the endpoint that is closest to a vertex of the edge polygon with that nearest vertex;
determining an intersection point between every two internal line segments by extending the internal line segments, and clustering the determined plurality of intersection points to determine a convergence point of the plurality of internal line segments;
and sequentially connecting the other endpoint of each internal line segment with the corresponding convergence point to generate the graph to be recognized.
6. The method according to claim 5, wherein determining the graph category of the graph to be recognized comprises:
determining a topological structure of the graph to be recognized, wherein the topological structure comprises the number of vertices contained in the graph to be recognized, the number of edges at each vertex, and the adjacency relationship between the vertices;
and matching the topological structure of the graph to be recognized with the topological structures of a plurality of geometric graphs in a pre-established graph topological structure library, and if the topological structure of the graph to be recognized is the same as the topological structure of one of the geometric graphs, determining the graph category of that geometric graph as the graph category of the graph to be recognized.
7. The method according to claim 6, wherein before matching the topological structure of the graph to be recognized with the topological structures of the plurality of geometric graphs in the pre-established graph topological structure library to determine the graph category of the graph to be recognized, the method further comprises:
determining the topological structure and the graph category corresponding to each geometric graph, and establishing the graph topological structure library based on the plurality of geometric graphs and the correspondence between the graph category and the topological structure of each geometric graph.
8. The method according to claim 1, wherein reconstructing the target graph based on the graph to be recognized and the graph category comprises:
if the graph category of the graph to be recognized comprises a two-dimensional geometric graph, reconstructing the target graph according to the obtained relative lengths of the sides of the graph to be recognized; or,
if the graph category of the graph to be recognized comprises a three-dimensional geometric graph, taking the geometric graph corresponding to the graph category as the target graph.
9. The method according to claim 1, further comprising:
identifying labeling information near each vertex of the target graph, and determining the relative positional relationship between the labeling information and each vertex.
10. The method according to claim 9, further comprising, after reconstructing the target graph based on the graph to be recognized and the graph category:
adding corresponding labeling information at each vertex of the reconstructed target graph based on the relative positional relationship between the labeling information of the target graph before reconstruction and each vertex.
11. A graph reconstruction apparatus, comprising:
the extraction module is used for extracting a target area where a target graph is located from an original image;
the determining module is used for detecting line segments in the target area and determining edge line segments and internal line segments belonging to the target graph;
the splicing module is used for splicing the edge line segments and the internal line segments to generate a graph to be recognized;
the determining module is further used for determining the graph category of the graph to be recognized;
and the establishing module is used for reconstructing the target graph based on the graph to be recognized and the graph category.
12. A computing device, comprising a processing component and a storage component, wherein the storage component stores one or more computer instructions, and the one or more computer instructions are invoked and executed by the processing component to implement the graph reconstruction method according to any one of claims 1 to 10.
13. A computer storage medium, characterized in that a computer program is stored thereon, and the computer program, when executed by a computer, implements the graph reconstruction method according to any one of claims 1 to 10.
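
For illustration, the Python sketch below shows one way the region-selection alternative recited in claim 2 could look when built on an off-the-shelf active contour (snake) routine: several seed circles are evolved into candidate contours, and the contour enclosing the largest area is kept as the target area. The seed placement, the scikit-image parameters and the helper name largest_candidate_region are assumptions of this sketch, not details taken from the application.

import numpy as np
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def largest_candidate_region(image, seeds, radius=100, n_points=200):
    # Assumes an RGB input image; smooth it so the snakes latch onto the
    # dominant outline rather than onto pixel noise.
    gray = gaussian(rgb2gray(image), sigma=3)
    t = np.linspace(0, 2 * np.pi, n_points)
    best_snake, best_area = None, -1.0
    for row, col in seeds:  # each seed circle yields one candidate region
        init = np.column_stack([row + radius * np.sin(t),
                                col + radius * np.cos(t)])
        snake = active_contour(gray, init, alpha=0.015, beta=10, gamma=0.001)
        # Shoelace formula: area enclosed by the evolved contour.
        r, c = snake[:, 0], snake[:, 1]
        area = 0.5 * abs(np.dot(r, np.roll(c, 1)) - np.dot(c, np.roll(r, 1)))
        if area > best_area:  # keep the largest candidate, as in claim 2
            best_snake, best_area = snake, area
    return best_snake

In practice the seeds could simply be a coarse grid over the image; only the largest-area rule of claim 2 matters for this sketch.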
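
The edge/internal split recited in claim 3 can be pictured with a small geometric test: extend each candidate segment across the target area and measure how much area the extension cuts off; a segment lying on the outline cuts off at most a thin sliver, whereas a segment crossing the interior separates off a large piece. The sketch below uses shapely for the geometry; the threshold expressed as a fraction of the region area and the helper names are assumptions of this sketch rather than the claimed implementation.

from shapely.geometry import LineString, Polygon
from shapely.ops import split

def extend_segment(p1, p2, length=10_000.0):
    # Stretch the segment far beyond both endpoints so its extension line
    # fully crosses the target area.
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    dx, dy = dx / norm, dy / norm
    return LineString([(x1 - dx * length, y1 - dy * length),
                       (x2 + dx * length, y2 + dy * length)])

def classify_segments(region: Polygon, segments, area_ratio=0.05):
    # The smaller piece cut off by the extension line plays the role of the
    # "intersection area" compared against the preset area value of claim 3.
    edge_segments, internal_segments = [], []
    preset_area = area_ratio * region.area
    for p1, p2 in segments:
        pieces = list(split(region, extend_segment(p1, p2)).geoms)
        cut_off = 0.0 if len(pieces) < 2 else min(g.area for g in pieces)
        if cut_off < preset_area:
            edge_segments.append((p1, p2))
        else:
            internal_segments.append((p1, p2))
    return edge_segments, internal_segments

The candidate segments themselves could come from any line segment detector (for example a probabilistic Hough transform); only the classification step is sketched here.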
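
For the splicing recited in claims 4 and 5, the two geometric ingredients are pairwise intersections of extended line segments and a clustering of the intersections of the internal line segments into a convergence point. The sketch below illustrates both; the endpoint tolerance, the use of DBSCAN and the centroid-based vertex ordering (adequate for convex outlines) are simplifying assumptions made for this sketch.

import math
from itertools import combinations
import numpy as np
from sklearn.cluster import DBSCAN

def line_intersection(s1, s2):
    # Intersection of the infinite lines through two segments, or None if parallel.
    (x1, y1), (x2, y2) = s1
    (x3, y3), (x4, y4) = s2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def edge_polygon_vertices(edge_segments, tol=20.0):
    # Claim 4: intersections of extended edge segments that lie close to an
    # endpoint of both segments become the vertices of the edge polygon,
    # ordered here around their centroid.
    def near_endpoint(p, seg):
        return min(math.dist(p, seg[0]), math.dist(p, seg[1])) <= tol
    pts = []
    for a, b in combinations(edge_segments, 2):
        p = line_intersection(a, b)
        if p is not None and near_endpoint(p, a) and near_endpoint(p, b):
            pts.append(p)
    if not pts:
        return []
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    return sorted(pts, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

def convergence_points(internal_segments, eps=5.0):
    # Claim 5: cluster the pairwise intersections of the internal segments;
    # each cluster centre is a candidate convergence point (for example the
    # hidden corner of a hand-drawn cuboid).
    pts = [p for a, b in combinations(internal_segments, 2)
           if (p := line_intersection(a, b)) is not None]
    if not pts:
        return []
    pts = np.array(pts)
    labels = DBSCAN(eps=eps, min_samples=1).fit_predict(pts)
    return [pts[labels == k].mean(axis=0) for k in set(labels)]

Connecting each segment endpoint to its nearest polygon vertex or convergence point, as recited in claims 4 and 5, is then straightforward bookkeeping over these points.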
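
The topology matching recited in claims 6 and 7 compares the number of vertices, the number of edges at each vertex and the adjacency between vertices, which is what a graph-isomorphism test covers. The sketch below uses networkx as a stand-in for that comparison; the three library entries are made-up examples, not the pre-established graph topological structure library of the application.

import networkx as nx

def topology(num_vertices, edges):
    # Nodes are the junction points of the spliced figure; edges are the
    # line segments connecting them.
    g = nx.Graph()
    g.add_nodes_from(range(num_vertices))
    g.add_edges_from(edges)
    return g

# Hypothetical topology library: graph category -> reference topology.
TOPOLOGY_LIBRARY = {
    "triangle": topology(3, [(0, 1), (1, 2), (2, 0)]),
    "quadrilateral": topology(4, [(0, 1), (1, 2), (2, 3), (3, 0)]),
    # A cuboid drawn with its hidden edges: 8 vertices, 12 edges.
    "cuboid": topology(8, [(0, 1), (1, 2), (2, 3), (3, 0),
                           (4, 5), (5, 6), (6, 7), (7, 4),
                           (0, 4), (1, 5), (2, 6), (3, 7)]),
}

def graph_category(figure_graph):
    # Two figures with the same vertex count, vertex degrees and adjacency
    # are isomorphic, so the first matching library entry gives the category.
    for name, reference in TOPOLOGY_LIBRARY.items():
        if nx.is_isomorphic(figure_graph, reference):
            return name
    return None

Building the library of claim 7 then amounts to constructing one reference graph per geometric graph and storing it under its graph category.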
CN202111040318.9A 2021-09-06 2021-09-06 Graph reconstruction method, graph reconstruction device, computing equipment and computer storage medium Active CN113781507B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111040318.9A CN113781507B (en) 2021-09-06 2021-09-06 Graph reconstruction method, graph reconstruction device, computing equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN113781507A true CN113781507A (en) 2021-12-10
CN113781507B CN113781507B (en) 2023-03-21

Family

ID=78841283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111040318.9A Active CN113781507B (en) 2021-09-06 2021-09-06 Graph reconstruction method, graph reconstruction device, computing equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN113781507B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102034254A (en) * 2010-09-29 2011-04-27 广东威创视讯科技股份有限公司 Method for recognizing geometric figure
US20200394763A1 (en) * 2013-03-13 2020-12-17 Kofax, Inc. Content-based object detection, 3d reconstruction, and data extraction from digital images
CN107767382A (en) * 2017-09-26 2018-03-06 武汉市国土资源和规划信息中心 The extraction method and system of static three-dimensional map contour of building line
CN110059760A (en) * 2019-04-25 2019-07-26 北京工业大学 Geometric figure recognition methods based on topological structure and CNN
US20200380760A1 (en) * 2019-05-30 2020-12-03 Tencent America LLC Method and apparatus for point cloud compression
WO2021136878A1 (en) * 2020-01-02 2021-07-08 Nokia Technologies Oy A method, an apparatus and a computer program product for volumetric video encoding and decoding
CN113191272A (en) * 2021-04-30 2021-07-30 杭州品茗安控信息技术股份有限公司 Engineering image identification method, identification system and related device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Guan Jing et al., "Research on Regular Geometric Figure Recognition Algorithm Based on Corner Detection", Modern Information Technology *

Also Published As

Publication number Publication date
CN113781507B (en) 2023-03-21

Similar Documents

Publication Publication Date Title
CN109859305B (en) Three-dimensional face modeling and recognizing method and device based on multi-angle two-dimensional face
Qiu et al. Pipe-run extraction and reconstruction from point clouds
CN107767382A (en) The extraction method and system of static three-dimensional map contour of building line
Chen et al. Reconstructing compact building models from point clouds using deep implicit fields
US10445908B1 (en) Computer handling of polygons
US20090244082A1 (en) Methods and systems of comparing face models for recognition
Widyaningrum et al. Building outline extraction from ALS point clouds using medial axis transform descriptors
JP3078166B2 (en) Object recognition method
CN115439607A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN114972947B (en) Depth scene text detection method and device based on fuzzy semantic modeling
CN113705669A (en) Data matching method and device, electronic equipment and storage medium
Song et al. Unorganized point classification for robust NURBS surface reconstruction using a point-based neural network
Bergamasco et al. A graph-based technique for semi-supervised segmentation of 3D surfaces
CN114090809A (en) Visualization method and device for power transmission line, computer equipment and storage medium
Rasoulzadeh et al. Strokes2Surface: Recovering Curve Networks From 4D Architectural Design Sketches
Álvarez et al. Junction assisted 3d pose retrieval of untextured 3d models in monocular images
CN117593420A (en) Plane drawing labeling method, device, medium and equipment based on image processing
CN113781507B (en) Graph reconstruction method, graph reconstruction device, computing equipment and computer storage medium
Liu et al. PLDD: Point-lines distance distribution for detection of arbitrary triangles, regular polygons and circles
Bénière et al. Recovering primitives in 3D CAD meshes
Auer et al. Glyph-and Texture-based Visualization of Segmented Tensor Fields.
KR20230101469A (en) A method for learning a target object by detecting an edge from a digital model of the target object and setting sample points, and a method for augmenting a virtual model on a real object implementing the target object using the same
CN107464257A (en) Wide baseline matching process and device
Sintunata et al. Skewness map: estimating object orientation for high speed 3D object retrieval system
CN111783180A (en) Drawing splitting method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant