CN115661851A - Sample data acquisition and component identification method and electronic equipment - Google Patents


Info

Publication number
CN115661851A
CN115661851A
Authority
CN
China
Prior art keywords
image
cutting
sub
component
cad
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211160385.9A
Other languages
Chinese (zh)
Inventor
Wang Haiqiang (王海强)
Yu Qizhi (於其之)
Wang Xiaowei (王晓威)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wanyi Technology Co Ltd
Original Assignee
Wanyi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wanyi Technology Co Ltd filed Critical Wanyi Technology Co Ltd
Priority to CN202211160385.9A
Publication of CN115661851A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to a sample data acquisition and component identification method and an electronic device. The method includes: acquiring an original CAD image and an annotated CAD image, where the annotated CAD image contains an annotation frame for each segment of a complete component and the annotation frames of adjacent segments have an overlapping area; cutting the original CAD image to obtain a first sub-image set, and cutting the annotated CAD image to obtain a second sub-image set; and using the first sub-image set and the second sub-image set as training sample data for training a preset neural network model to obtain a component recognition model. A component recognition model trained on such sample data can effectively recognize large components, improving their recognition success rate and reducing the probability of missed or incomplete recognition.

Description

Sample data acquisition and component identification method and electronic equipment
Technical Field
The application relates to the technical field of graphic image processing, and in particular to a sample data acquisition and component identification method and an electronic device.
Background
At present, methods for identifying components in a CAD drawing fall mainly into the following four categories:
first, directly using a detection model, such as the Faster R-CNN model, which is simple and versatile;
second, identifying components with a detection model and a classification model, where the detection model mainly detects components, improving component recall, and the classification model mainly identifies the category of each detected component;
third, identifying components from vector images, which requires vector data to be acquired in advance; however, accurately extracting vector data from a CAD drawing or image is costly;
fourth, parsing the CAD drawing to obtain component primitives, and then identifying components from those primitives using graphics and image methods.
The first and second categories have limited ability to identify large components because of the limit on the size of the model input. The third and fourth categories require parsing the CAD drawing; however, the information about large components stored in a CAD drawing is not necessarily complete, and because of their large span, large components are not necessarily drawn in a single layer, which makes extracting their primitive data difficult; moreover, the features of a large component are dispersed and hard to identify from graphic features alone.
In a CAD drawing, a large component is characterized by an extreme aspect ratio; its long side can exceed 6K pixels, so current identification approaches miss many large components or identify them incompletely.
Disclosure of Invention
The application provides a sample data acquisition and component identification method and an electronic device, to solve the problem of missed or incomplete identification of large components.
In a first aspect, an embodiment of the present application provides a method for acquiring sample data, including:
acquiring an original CAD image and an annotated CAD image, wherein the annotated CAD image comprises an annotation frame for each segment of a complete component, and the annotation frames of adjacent segments have an overlapping area;
cutting the original CAD image to obtain a first sub-image set, and cutting the annotated CAD image to obtain a second sub-image set;
and taking the first sub-image set and the second sub-image set as training sample data, wherein the training sample data is used for training a preset neural network model to obtain a component recognition model.
Optionally, cutting the original CAD image to obtain the first sub-image set and cutting the annotated CAD image to obtain the second sub-image set includes:
performing multi-scale scaling on the original CAD image to obtain at least one scaled original CAD image, and cutting the at least one scaled original CAD image and the original CAD image to obtain the first sub-image set, wherein the first sub-image set comprises a cut-image set corresponding to each scaled original CAD image and a cut-image set corresponding to the original CAD image;
performing multi-scale scaling on the annotated CAD image to obtain at least one scaled annotated CAD image, and cutting the at least one scaled annotated CAD image and the annotated CAD image to obtain the second sub-image set, wherein the second sub-image set comprises a cut-image set corresponding to each scaled annotated CAD image and a cut-image set corresponding to the annotated CAD image;
wherein the scaling factors of the multi-scale scaling are greater than 0 and less than 1.
Optionally, performing multi-scale scaling on the original CAD image includes:
performing an image dilation operation on the original CAD image, and then performing multi-scale scaling on the dilated original CAD image using a bilinear interpolation algorithm;
performing multi-scale scaling on the annotated CAD image includes:
performing an image dilation operation on the annotated CAD image, and then performing multi-scale scaling on the dilated annotated CAD image using a bilinear interpolation algorithm.
Optionally, cutting the at least one scaled original CAD image and the original CAD image to obtain the first sub-image set includes:
sliding-cutting the original CAD image along a designated sliding route using a cutting template of a first preset size to obtain a first cut-image set;
performing the following cutting on each scaled original CAD image: sliding-cutting the scaled original CAD image along the designated sliding route using a cutting template of a second preset size to obtain a second cut-image set;
obtaining the first sub-image set based on the first cut-image set and each second cut-image set;
wherein the designated sliding route comprises at least one of:
a line from the upper left vertex to the upper right vertex of the image;
a line from the upper left vertex to the lower left vertex of the image;
a line from the upper right vertex to the lower right vertex of the image;
a line from the lower left vertex to the lower right vertex of the image;
a line from the upper left vertex to the lower right vertex of the image;
a line from the upper right vertex to the lower left vertex of the image.
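The six sliding routes listed above can be written down concretely as corner pairs. A minimal sketch (the route names, the (x, y) coordinate convention, and the helper function are illustrative, not from the application):

```python
def sliding_routes(w, h):
    """Return the six corner-to-corner sliding routes for a w x h image.

    Corners are (x, y) pixel coordinates with the origin at the top left.
    """
    tl, tr = (0, 0), (w - 1, 0)
    bl, br = (0, h - 1), (w - 1, h - 1)
    return {
        "top-left to top-right": (tl, tr),
        "top-left to bottom-left": (tl, bl),
        "top-right to bottom-right": (tr, br),
        "bottom-left to bottom-right": (bl, br),
        "top-left to bottom-right": (tl, br),
        "top-right to bottom-left": (tr, bl),
    }

routes = sliding_routes(4096, 4096)
```

Any subset of these routes can drive the sliding cut; using several of them yields crops whose boundaries fall at different places, so a component split by one route's crop grid is likely intact in another's.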
Optionally, cutting the at least one scaled annotated CAD image and the annotated CAD image to obtain the second sub-image set includes:
sliding-cutting the annotated CAD image along the designated sliding route using a cutting template of a third preset size to obtain a third cut-image set;
performing the following cutting on each scaled annotated CAD image: sliding-cutting the scaled annotated CAD image along the designated sliding route using a cutting template of a fourth preset size to obtain a fourth cut-image set;
obtaining the second sub-image set based on the third cut-image set and each fourth cut-image set;
wherein the designated sliding route comprises at least one of:
a line from the upper left vertex to the upper right vertex of the image;
a line from the upper left vertex to the lower left vertex of the image;
a line from the upper right vertex to the lower right vertex of the image;
a line from the lower left vertex to the lower right vertex of the image;
a line from the upper left vertex to the lower right vertex of the image;
a line from the upper right vertex to the lower left vertex of the image.
Optionally, obtaining the first sub-image set based on the first cut-image set and each second cut-image set includes:
obtaining, from the first cut-image set and each second cut-image set, the number of cut images corresponding to each component to be identified;
taking a component to be identified whose number of cut images is less than a threshold as a target component;
taking the cut images containing the target component in the first cut-image set and the second cut-image sets as first target cut images;
cutting the first target cut images with a cutting template of a fifth preset size centered on the target component to obtain a fifth cut-image set;
obtaining the information entropy of each cut image in the fifth cut-image set, taking the cut images whose information entropy is greater than a set value as second target cut images, and cutting the second target cut images with a cutting template of a sixth preset size to obtain a sixth cut-image set, wherein the positions of the target component in the cut images of the sixth cut-image set are randomly distributed;
and integrating the first, second, fifth, and sixth cut-image sets to obtain the first sub-image set.
Optionally, obtaining the second sub-image set based on the third cut-image set and each fourth cut-image set includes:
obtaining, from the third cut-image set and each fourth cut-image set, the number of cut images corresponding to each component to be identified;
taking a component to be identified whose number of cut images is less than a threshold as a target component;
taking the cut images containing the target component in the third cut-image set and the fourth cut-image sets as third target cut images;
cutting the third target cut images with a cutting template of a seventh preset size centered on the target component to obtain a seventh cut-image set;
obtaining the information entropy of each cut image in the seventh cut-image set, taking the cut images whose information entropy is greater than a set value as fourth target cut images, and cutting the fourth target cut images with a cutting template of an eighth preset size to obtain an eighth cut-image set, wherein the positions of the target component in the cut images of the eighth cut-image set are randomly distributed;
and integrating the third, fourth, seventh, and eighth cut-image sets to obtain the second sub-image set.
In a second aspect, an embodiment of the present application provides a component identification method, including:
acquiring a CAD image to be identified;
obtaining a sub-image set to be identified based on the CAD image to be identified;
inputting the sub-image set to be identified into a component recognition model to obtain a component prediction result output by the component recognition model, wherein the component recognition model is obtained by training a preset neural network model with training sample data, and the training sample data is obtained by the method of the first aspect;
and identifying the components in the CAD image to be identified based on the component prediction result.
Optionally, obtaining the sub-image set to be identified based on the CAD image to be identified includes:
performing multi-scale scaling on the CAD image to be identified to obtain at least one scaled CAD image to be identified;
obtaining the sub-image set to be identified based on the at least one scaled CAD image to be identified and the CAD image to be identified;
wherein the scaling factors of the multi-scale scaling are greater than 0 and less than 1.
Optionally, obtaining the sub-image set to be identified based on the at least one scaled CAD image to be identified and the CAD image to be identified includes:
mirroring each scaled CAD image to be identified to obtain at least one first sub-image set to be identified, wherein each first sub-image set to be identified comprises a scaled image to be identified and its mirrored image at the same scale;
cutting each scaled CAD image to be identified and the CAD image to be identified to obtain at least one first cut-image set and a second cut-image set corresponding to the CAD image to be identified, wherein each first cut-image set comprises the cut images of the CAD images to be identified at the same scale;
mirroring the cut images in each first cut-image set and the second cut-image set, and then scaling the resulting cut images to obtain at least one second sub-image set to be identified, wherein each second sub-image set to be identified comprises cut images at the same scale;
and integrating the image to be identified, each first sub-image set to be identified, and each second sub-image set to be identified to obtain the sub-image set to be identified.
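The mirroring steps above amount to flipping each image and, at prediction time, mapping boxes found on the mirrored copy back to the original coordinates. A minimal sketch under the assumption of a horizontal flip (the application does not fix a flipping axis, and both helper names are illustrative):

```python
def mirror(img):
    """Horizontally mirror an image given as a list of pixel rows."""
    return [row[::-1] for row in img]

def flip_box_back(box, img_w):
    """Map a (x1, y1, x2, y2) box predicted on the mirrored image
    back to the original image's coordinate system."""
    x1, y1, x2, y2 = box
    return (img_w - x2, y1, img_w - x1, y2)
```

For example, a box at (10, 5, 30, 15) found on the mirror of a 100-pixel-wide image corresponds to (70, 5, 90, 15) in the original.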
Optionally, inputting the sub-image set to be identified into the component recognition model and obtaining the component prediction result output by the component recognition model includes:
performing the following filtering on each cut image in the second sub-image sets to be identified: calculating the two-dimensional information entropy of the cut image, and deleting the cut image from its second sub-image set when the entropy is smaller than a set entropy value;
and inputting each first sub-image set to be identified and each second sub-image set to be identified into the component recognition model to obtain the component prediction results corresponding to the first and second sub-image sets to be identified.
Optionally, identifying the components in the CAD image to be identified based on the component prediction result includes:
obtaining the component prediction result corresponding to each first sub-image set to be identified as a first component prediction result, obtaining the component prediction result corresponding to the CAD image to be identified as a second component prediction result, and merging the first and second component prediction results into a large-image prediction result;
obtaining the component prediction result corresponding to each second sub-image set to be identified as a third component prediction result, and merging the third component prediction results into a small-image prediction result;
and combining the large-image prediction result and the small-image prediction result as the identification result for the components in the CAD image to be identified.
Optionally, the third component prediction result comprises mark frames identifying a component or part of a component;
merging the third component prediction results into the small-image prediction result includes:
performing the following processing on each third component prediction result: converting the third component prediction result into a binary image of the same size as the CAD image to be identified, scaling the binary image by a factor of 0.5, and filling the scaled mark frames with color;
performing connected-domain analysis on the processed binary images corresponding to the third component prediction results to obtain merged mark frames;
and scaling the merged mark frames by a factor of 2 to obtain the small-image prediction result.
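The merge procedure above (binarize, scale by 0.5 so adjacent frames touch, fill, run connected-domain analysis, scale back by 2) can be sketched as follows. The flood-fill labeling and the list-of-lists mask are illustrative stand-ins for any standard connected-component routine:

```python
from collections import deque

def merge_boxes(boxes, img_w, img_h):
    """Merge overlapping/adjacent (x1, y1, x2, y2) mark frames via a
    half-scale binary mask and 4-connected component analysis."""
    w, h = img_w // 2, img_h // 2
    mask = [[0] * w for _ in range(h)]
    for (x1, y1, x2, y2) in boxes:          # rasterize each frame, filled
        for y in range(y1 // 2, min(y2 // 2 + 1, h)):
            for x in range(x1 // 2, min(x2 // 2 + 1, w)):
                mask[y][x] = 1
    seen = [[False] * w for _ in range(h)]
    merged = []
    for y0 in range(h):
        for x0 in range(w):
            if mask[y0][x0] and not seen[y0][x0]:
                q = deque([(x0, y0)])       # flood fill one component
                seen[y0][x0] = True
                xs, ys = [x0], [y0]
                while q:
                    x, y = q.popleft()
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if 0 <= nx < w and 0 <= ny < h and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            xs.append(nx)
                            ys.append(ny)
                            q.append((nx, ny))
                # bounding box of the component, scaled back up by 2
                merged.append((min(xs) * 2, min(ys) * 2, max(xs) * 2, max(ys) * 2))
    return merged
```

Two frames covering adjacent segments of the same component land in one connected domain and come out as a single merged frame, while frames of distinct components stay separate.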
In a third aspect, an embodiment of the present application provides an electronic device, comprising a processor, a memory, and a communication bus, wherein the processor and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to execute the program stored in the memory to implement the method of the first aspect or the method of the second aspect.
Compared with the prior art, the technical solution provided by the embodiments of the application has the following advantages: the CAD drawing is converted into an original CAD image; annotation frames are added segment by segment to each complete component in the original CAD image to obtain an annotated CAD image, with the annotation frames of adjacent segments overlapping; and the annotated CAD image and the original CAD image are each cut to obtain training sample data. A component recognition model obtained by training a preset neural network model with this training sample data can therefore effectively recognize large components, improving their recognition success rate and reducing the probability of missed and incomplete recognition.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed for the description of the embodiments or the prior art are briefly introduced below; other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
FIG. 1 is a schematic flow chart illustrating a method for obtaining sample data according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating comparison between an overall annotation and a segment annotation in the embodiment of the present application;
FIG. 3 is a schematic diagram illustrating the zooming effect after the expansion operation in the embodiment of the present application;
FIG. 4 is a schematic diagram of the embodiment of the present application for cutting a large map in various ways;
FIG. 5 is a schematic structural diagram of yolov5x6 model in an embodiment of the present application;
FIG. 6 is a flow chart illustrating a method for identifying a component according to an embodiment of the present disclosure;
FIG. 7 is a diagram illustrating a process of segment prediction and merging in an embodiment of the present application;
FIG. 8 is a diagram illustrating a multi-scale prediction process in an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an apparatus for obtaining sample data in an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a component recognition apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the embodiments of the application, to solve the problems of missed and incomplete identification of large components in CAD drawings, a sample data acquisition method is provided; a preset neural network model is trained with the training sample data acquired by this method to obtain a component recognition model, which improves the success rate of identifying large components.
As shown in fig. 1, the method for acquiring sample data in the embodiment of the present application mainly includes the following steps:
step 101, obtaining an original CAD image and an annotated CAD image, wherein the annotated CAD image comprises annotated frames of all segments of a complete component, and the annotated frames of adjacent segments have overlapping areas.
The original CAD image is obtained by converting a CAD drawing. The format specifically adopted by the original CAD image is not limited.
The CAD drawing comprises at least one component, which may be a large component whose aspect ratio is greater than a threshold, or an ordinary component whose aspect ratio is not greater than the threshold.
In the embodiments of the application, considering that a large component is long and is difficult for a component recognition model to detect completely, the original CAD image is specially processed: annotation frames are added to each large component in the original CAD image in a segmented manner, ensuring that the annotation frames of adjacent segments overlap within a certain range. As shown in fig. 2, the left side shows the effect of annotating a large component as a whole, and the right side shows the effect of adding annotation frames segment by segment.
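The segmented annotation with overlapping frames can be sketched as follows. The segment length and overlap values are illustrative, since the application only requires that adjacent frames share an overlapping area:

```python
def segment_annotation(x1, y1, x2, y2, seg_len=1500, overlap=200):
    """Split a long horizontal component box (x1, y1, x2, y2) into
    overlapping segment annotation frames.

    seg_len and overlap are assumed values for illustration only.
    """
    boxes, x = [], x1
    while True:
        end = min(x + seg_len, x2)
        boxes.append((x, y1, end, y2))
        if end >= x2:
            return boxes
        x = end - overlap  # step back so adjacent frames overlap
```

A 4000-pixel-long component, for instance, yields three frames whose neighbors overlap by 200 pixels, so every part of the component falls fully inside at least one frame.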
Step 102: cut the original CAD image to obtain a first sub-image set, and cut the annotated CAD image to obtain a second sub-image set.
in an exemplary embodiment, the performing a cutting based on the original CAD image to obtain a first sub-image set, and performing a cutting based on the annotated CAD image to obtain a second sub-image set includes:
carrying out multi-scale scaling on the original CAD image to obtain at least one scaled original CAD image, and carrying out image cutting on the at least one scaled original CAD image and the original CAD image to obtain the first sub-image set; the first sub-image set comprises a cutting set corresponding to each zoomed original CAD image and a cutting set corresponding to the original CAD image;
performing multi-scale scaling on the labeling CAD image to obtain at least one scaled labeling CAD image, and performing image cutting on the at least one scaled labeling CAD image to obtain a second sub-image set; the second sub-image set comprises a cutting set corresponding to each zoomed labeling CAD image and a cutting set corresponding to the representing CAD image;
wherein the scaling of the multi-scale scaling is greater than 0 and less than 1.
In the embodiments of the application, considering that a large component is long and is difficult for a component recognition model to detect completely, multi-scale scaling is applied to the original CAD image and the annotated CAD image, and the scaled images, the original CAD image, and the annotated CAD image are all cut; the component recognition model can then identify complete components in the multi-scale scaled images and segmented components in the cut images, improving the recognition success rate.
In an exemplary embodiment, performing multi-scale scaling on the original CAD image includes: performing an image dilation operation on the original CAD image, and then performing multi-scale scaling on the dilated original CAD image using a bilinear interpolation algorithm;
performing multi-scale scaling on the annotated CAD image includes: performing an image dilation operation on the annotated CAD image, and then performing multi-scale scaling on the dilated annotated CAD image using a bilinear interpolation algorithm.
Multi-scale scaling is applied to the original CAD image and the annotated CAD image with scaling factors between 0 and 1. Since a CAD image contains a large number of lines, bilinear interpolation is used for scaling; meanwhile, to minimize distortion of the line work after scaling, an image dilation operation is applied to the image before scaling, for example dilation with a 5x5 structuring element. As shown in fig. 3, the left side shows a local region of the original image, the middle shows the effect of applying the dilation operation once, and the right side shows the effect of scaling the dilated region by a factor of 0.5.
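A minimal NumPy sketch of this preprocessing, assuming line pixels are the bright foreground (for dark-on-white drawings one would dilate the inverted image; the application does not specify polarity). The naive max-filter dilation and hand-rolled bilinear resize stand in for any standard morphology/interpolation routine:

```python
import numpy as np

def dilate(img, k=5):
    """Grayscale dilation with a k x k structuring element (max filter),
    which thickens bright line work before downscaling."""
    r = k // 2
    h, w = img.shape
    padded = np.pad(img, r, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out

def bilinear_resize(img, scale):
    """Minimal bilinear interpolation for a 2-D array, 0 < scale < 1."""
    h, w = img.shape
    nh, nw = max(1, int(h * scale)), max(1, int(w * scale))
    ys = np.linspace(0, h - 1, nh)
    xs = np.linspace(0, w - 1, nw)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    a, b = img[np.ix_(y0, x0)], img[np.ix_(y0, x1)]
    c, d = img[np.ix_(y1, x0)], img[np.ix_(y1, x1)]
    top = a * (1 - wx) + b * wx
    bot = c * (1 - wx) + d * wx
    return top * (1 - wy) + bot * wy
```

Multi-scale scaling then just calls `bilinear_resize(dilate(img), s)` for each chosen factor `s` in (0, 1).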
In an exemplary embodiment, cutting the at least one scaled original CAD image and the original CAD image to obtain the first sub-image set includes:
sliding-cutting the original CAD image along a designated sliding route using a cutting template of a first preset size to obtain a first cut-image set; performing the following cutting on each scaled original CAD image: sliding-cutting the scaled original CAD image along the designated sliding route using a cutting template of a second preset size to obtain a second cut-image set; and obtaining the first sub-image set based on the first cut-image set and each second cut-image set;
wherein the designated sliding route comprises at least one of:
a line from the upper left vertex to the upper right vertex of the image;
a line from the upper left vertex to the lower left vertex of the image;
a line from the upper right vertex to the lower right vertex of the image;
a line from the lower left vertex to the lower right vertex of the image;
a line from the upper left vertex to the lower right vertex of the image;
a line from the upper right vertex to the lower left vertex of the image.
In an exemplary embodiment, cutting the at least one scaled annotated CAD image and the annotated CAD image to obtain the second sub-image set includes:
sliding-cutting the annotated CAD image along the designated sliding route using a cutting template of a third preset size to obtain a third cut-image set; performing the following cutting on each scaled annotated CAD image: sliding-cutting the scaled annotated CAD image along the designated sliding route using a cutting template of a fourth preset size to obtain a fourth cut-image set; and obtaining the second sub-image set based on the third cut-image set and each fourth cut-image set;
wherein the designated sliding route comprises at least one of:
a line from the upper left vertex to the upper right vertex of the image;
a line from the upper left vertex to the lower left vertex of the image;
a line from the upper right vertex to the lower right vertex of the image;
a line from the lower left vertex to the lower right vertex of the image;
a line from the upper left vertex to the lower right vertex of the image;
a line from the upper right vertex to the lower left vertex of the image.
In the embodiments of the application, considering that an engineering drawing often reaches a resolution of tens of thousands of pixels after rasterization and is inconvenient to use directly, the large image is cut into small images; the components in the small images are identified and the results are then merged to obtain the identification result for the large image. Here, sliding cuts are performed along the six lines formed by the upper left, lower left, upper right, and lower right vertices of the image; assuming a cut size of 1792 x 1792 pixels, the small images obtained by cutting are used as part of the training sample data.
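The sliding cut along one route can be sketched as follows. The stride choice (tile-sized, i.e. non-overlapping) and the clamping of tiles to the image border are assumptions, since the text only specifies the routes and a template size:

```python
def sliding_crops(w, h, start, end, tile=1792, stride=None):
    """Crop windows (x1, y1, x2, y2) produced by sliding a tile-sized
    template from corner `start` toward corner `end` of a w x h image."""
    stride = stride or tile
    (sx, sy), (ex, ey) = start, end
    dx, dy = ex - sx, ey - sy
    steps = max(abs(dx), abs(dy)) // stride + 1
    ux = (dx > 0) - (dx < 0)   # per-axis step direction: -1, 0, or 1
    uy = (dy > 0) - (dy < 0)
    crops = []
    for i in range(steps):
        # clamp the tile origin so the window stays inside the image
        x = min(max(sx + ux * i * stride, 0), max(w - tile, 0))
        y = min(max(sy + uy * i * stride, 0), max(h - tile, 0))
        crops.append((x, y, x + tile, y + tile))
    return crops
```

For a 4096 x 4096 image and the top-left-to-top-right route, this yields three 1792 x 1792 windows, with the last one shifted left so it ends at the image border.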
In an exemplary embodiment, obtaining the first sub-image set based on the first cut-image set and each second cut-image set includes:
obtaining, from the first cut-image set and each second cut-image set, the number of cut images corresponding to each component to be identified; taking a component to be identified whose number of cut images is less than a threshold as a target component; taking the cut images containing the target component in the first and second cut-image sets as first target cut images; cutting the first target cut images with a cutting template of a fifth preset size centered on the target component to obtain a fifth cut-image set; obtaining the information entropy of each cut image in the fifth cut-image set, taking the cut images whose information entropy is greater than a set value as second target cut images, and cutting the second target cut images with a cutting template of a sixth preset size to obtain a sixth cut-image set, wherein the positions of the target component in the cut images of the sixth cut-image set are randomly distributed; and integrating the first, second, fifth, and sixth cut-image sets to obtain the first sub-image set.
In an exemplary embodiment, obtaining the second sub-image set based on the third cut map set and each of the fourth cut map sets includes:
acquiring, from the third cut map set and each fourth cut map set, the number of cut images corresponding to each component to be identified; taking a component to be identified whose number of cut images is less than the threshold value as a target component;
taking the cut images of the target component in the third cut map set and the fourth cut map sets as third target cut images; taking the target component as the center, cutting each third target cut image with a cutting template of a seventh preset size to obtain a seventh cut map set; obtaining the information entropy of each cut image in the seventh cut map set, taking the cut images whose information entropy is greater than the set value as fourth target cut images, and cutting the fourth target cut images with a cutting template of an eighth preset size to obtain an eighth cut map set, wherein the positions of the target component in the cut images of the eighth cut map set are randomly distributed; and integrating the third, fourth, seventh and eighth cut map sets to obtain the second sub-image set.
In order to avoid an imbalance in the number of cut images corresponding to the components to be identified, target components with fewer cut images are cut again using a center-cutting method: with the target component at the center, cut images of 1792 pixels by 1792 pixels are produced. Meanwhile, to make the most of the image information in the cut images, the information entropy of each cut image is calculated; if the information entropy is greater than a set value (for example, 0.3), center cutting is performed multiple times, with the position of the target component in the resulting cut images randomized. Fig. 4 is a schematic diagram of cut images obtained by applying multi-way cropping (multi-start cropping + center cropping + entropy filtering) to a large image.
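The entropy-gated center cropping can be illustrated with a short sketch. The 1792-pixel tile size and the 0.3 entropy threshold are the example values from the text; the histogram-based entropy formula, the number of extra crops, and the jitter range are assumptions made here for illustration:

```python
import numpy as np

def histogram_entropy(gray):
    """Normalized Shannon entropy of a uint8 grayscale histogram, in [0, 1]."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / 8.0)  # 8 bits = max entropy

def center_crops(image, cx, cy, size=1792, n=4, jitter=0.25, rng=None):
    """Crop size x size tiles around a target component centered at (cx, cy).

    If the center crop is informative enough (entropy > 0.3), take n extra
    crops with the component at a randomized position inside the tile.
    n and jitter are assumed parameters, not taken from the text.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]

    def crop_at(x0, y0):
        # Clamp the crop window so it stays inside the image.
        x0 = int(np.clip(x0, 0, max(w - size, 0)))
        y0 = int(np.clip(y0, 0, max(h - size, 0)))
        return image[y0:y0 + size, x0:x0 + size]

    center = crop_at(cx - size // 2, cy - size // 2)
    crops = [center]
    if histogram_entropy(center) > 0.3:
        for _ in range(n):
            dx, dy = rng.uniform(-jitter, jitter, 2) * size
            crops.append(crop_at(cx - size // 2 + dx, cy - size // 2 + dy))
    return crops
```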
Step 103, taking the first sub-image set and the second sub-image set as training sample data, wherein the training sample data is used for training a preset neural network model to obtain a component identification model.
In consideration of video memory and speed, the neural network model preset in the embodiment of the application uses the yolov5x6 model as the base model; the structure of the yolov5x6 model is shown in fig. 5. The yolov5x6 model includes an Input, a Backbone, a Neck (PANet), and an Output. In order to identify large components, the embodiment of the present application increases the number of Bottleneck structures in the BottleneckCSP part of the Backbone, thereby increasing the network depth; that is, the Backbone includes no fewer than a set number of Bottleneck structures. Because the Bottleneck structure is a residual structure, the deepened model does not suffer from gradient explosion. In addition, a long-tailed target detection loss is used during training to alleviate the problem of data imbalance. Furthermore, to adapt to components with drastic size variation, the model is trained with multi-scale sample data, and online data enhancement is applied to the training sample data, including but not limited to HSV color transformation, cyclic mirroring, mosaic and mixup enhancement.
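As one illustration of the online data enhancement mentioned above, a minimal mixup sketch is given below. The patent only names the technique; the Beta(8, 8) mixing ratio follows common YOLO practice and is an assumption here, as is the (N, 5) box-label layout:

```python
import numpy as np

def mixup(img_a, labels_a, img_b, labels_b, alpha=8.0, rng=None):
    """Blend two training images and concatenate their box labels.

    labels_* are assumed to be (N, 5) arrays of (class, x0, y0, x1, y1);
    detection mixup keeps all boxes from both images rather than mixing
    the label values.
    """
    rng = np.random.default_rng(rng)
    lam = rng.beta(alpha, alpha)  # mixing coefficient in (0, 1)
    mixed = (lam * img_a.astype(np.float32)
             + (1.0 - lam) * img_b.astype(np.float32)).astype(img_a.dtype)
    labels = np.concatenate([labels_a, labels_b], axis=0)
    return mixed, labels
```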
Based on the same concept, the embodiment of the application also provides a training method of the component recognition model, and the method mainly comprises the following steps: acquiring training sample data, wherein the training sample data is acquired by adopting the method for acquiring the sample data, and the description is omitted here; and training a preset neural network model by using the training sample data to obtain a component identification model. The specific neural network model adopted may refer to the related description in the embodiment of the method for obtaining sample data, and is not described herein again.
Based on the same technical concept, the embodiment of the application provides a component identification method, the method adopts a component identification model to identify a component in a CAD image to be identified, and the component identification model is obtained by training the training sample data obtained by the embodiment.
As shown in fig. 6, in the embodiment of the present application, the component identification method mainly includes the following steps:
step 601, obtaining a CAD image to be identified.
The CAD image to be recognized is obtained by converting a CAD drawing of the component to be recognized. The specific format used for the CAD image is not limited.
The CAD drawing comprises at least one component, and the at least one component can be a large component with the aspect ratio larger than a threshold value, or can be a common component with the aspect ratio not larger than the threshold value.
Step 602, obtaining a sub-image set to be identified based on the CAD image to be identified.
In an exemplary embodiment, obtaining a sub-image set to be recognized based on the CAD image to be recognized includes: after the CAD image to be identified is subjected to multi-scale scaling, at least one scaled CAD image to be identified is obtained; obtaining the sub-image set to be recognized based on the at least one scaled CAD image to be recognized and the CAD image to be recognized; wherein the scaling of the multi-scale scaling is greater than 0 and less than 1.
In an exemplary embodiment, obtaining the sub-image set to be recognized based on the at least one scaled CAD image to be recognized and the CAD image to be recognized includes:
respectively performing mirror symmetry on each zoomed CAD image to be identified to obtain at least one first sub-image set to be identified, where one first sub-image set to be identified comprises the zoomed image to be identified belonging to one zoom scale and the images obtained by mirror symmetry; respectively cutting the CAD image to be identified and each zoomed CAD image to be identified to obtain at least one first cut set and a second cut set corresponding to the CAD image to be identified, where one first cut set comprises the cut images of the zoomed CAD image to be identified belonging to one scaling scale; performing mirror symmetry on the cut images in each first cut set and the second cut set, and then scaling the obtained cut images to obtain at least one second sub-image set to be identified, where one second sub-image set to be identified comprises the cut images belonging to one scaling scale; and integrating the image to be identified, each first sub-image set to be identified and each second sub-image set to be identified to obtain the sub-image set to be identified.
For example, the CAD image to be identified is scaled at three scales, 1.0, 0.76 and 0.52, where scale 1.0 is the CAD image to be identified itself. The images at scales smaller than 1.0 are mirrored left-right and up-down, and the resulting images, together with the CAD image to be identified, form the first sub-image sets at the large-image level. After cutting the CAD image to be identified and each zoomed CAD image to be identified, the resulting cut images are scaled, say at three scales 1.0, 0.8 and 0.6; the images at scales 0.8 and 0.6 are mirrored left-right and up-down, and the resulting images, together with the cut images at scale 1.0, form the second sub-image sets at the small-image level. Enriching the images in the sub-image set to be recognized through mirror symmetry improves the recognition rate of the component model.
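The large-image-level scale-and-mirror set from this example can be sketched as follows. Nearest-neighbour resizing stands in for whatever interpolation the real implementation uses, and the tuple tags are purely illustrative:

```python
import numpy as np

def mirror_scale_set(image, scales=(1.0, 0.76, 0.52)):
    """Build the test-time image set: each sub-unity scale plus its
    left-right and up-down mirrors; scale 1.0 is the image itself."""

    def resize(img, s):
        # Nearest-neighbour downscale: pick source rows/columns by index.
        h, w = img.shape[:2]
        ys = (np.arange(int(h * s)) / s).astype(int)
        xs = (np.arange(int(w * s)) / s).astype(int)
        return img[ys][:, xs]

    out = []
    for s in scales:
        scaled = image if s == 1.0 else resize(image, s)
        out.append((s, "orig", scaled))
        if s < 1.0:
            out.append((s, "lr", scaled[:, ::-1]))  # left-right mirror
            out.append((s, "ud", scaled[::-1, :]))  # up-down mirror
    return out
```

At prediction time, boxes found in a mirrored or scaled copy must of course be mapped back into the original coordinate frame before merging.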
Step 603, inputting the sub-image set to be recognized into a component recognition model to obtain a component prediction result output by the component recognition model; the component recognition model is obtained by training the neural network model with training sample data, and the training sample data is obtained by the method for obtaining sample data provided in the above embodiment.
In an exemplary embodiment, inputting the sub-image set to be recognized into a component recognition model, and obtaining a component prediction result output by the component recognition model, includes:
Perform the following filtering on each cut image in the second sub-image set to be recognized: calculate the two-dimensional information entropy of the cut image, and delete the cut image from the second sub-image set to be recognized when the resulting entropy value is smaller than a set entropy value; then input each first sub-image set to be recognized and each filtered second sub-image set to be recognized into the component recognition model to obtain the component prediction results corresponding to the first and second sub-image sets to be recognized.
In the embodiment of the application, cutting a large image yields hundreds of small images of 1792 pixels; if every small image were predicted multiple times, the time consumed by the component identification model would increase greatly, and some small images contain few or no lines and thus little information. Therefore, in order to improve the prediction speed of the component identification model, before the cut images (i.e., the small images) in the second sub-image set to be identified are predicted, the two-dimensional information entropy of each image is calculated, and multi-scale prediction is skipped for images whose entropy is smaller than a set threshold (for example, 0.05). Experiments show that this reduces the prediction time of the component identification model by about 30%.
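A common definition of two-dimensional image entropy pairs each pixel value with its 3x3 neighbourhood mean; the sketch below uses that definition as an assumption, since the patent does not give the formula, together with the example threshold of 0.05:

```python
import numpy as np

def entropy_2d(gray):
    """Two-dimensional image entropy: entropy of the joint histogram of
    each pixel's value and its 3x3 neighbourhood mean, normalized to [0, 1]."""
    g = gray.astype(np.float64)
    # 3x3 neighbourhood mean via padded shifts (no SciPy dependency).
    p = np.pad(g, 1, mode="edge")
    mean = sum(p[dy:dy + g.shape[0], dx:dx + g.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    # Encode (value, mean) pairs into one index per pixel.
    pairs = gray.astype(np.int64) * 256 + mean.astype(np.int64)
    hist = np.bincount(pairs.ravel(), minlength=256 * 256).astype(np.float64)
    prob = hist / hist.sum()
    prob = prob[prob > 0]
    return float(-(prob * np.log2(prob)).sum() / 16.0)  # 16 bits = max entropy

def keep_tile(tile, threshold=0.05):
    """Skip multi-scale prediction for near-empty tiles (example threshold 0.05)."""
    return entropy_2d(tile) >= threshold
```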
And step 604, identifying the component in the CAD image to be identified based on the component prediction result.
In an exemplary embodiment, identifying the component in the CAD image to be identified based on the component prediction result includes: acquiring a component prediction result corresponding to each first sub-image set to be recognized, taking the component prediction result as a first component prediction result, acquiring a component prediction result corresponding to the CAD image to be recognized, taking the component prediction result as a second component prediction result, and combining the first component prediction result and the second component prediction result to obtain a big image prediction result; acquiring a component prediction result corresponding to each second sub-image set to be recognized as a third component prediction result, and combining the third component prediction results to obtain a small image prediction result; and combining the large image prediction result and the small image prediction result to serve as the recognition result of the component in the CAD image to be recognized.
In an exemplary embodiment, the third component prediction result includes a mark frame identifying a component or part of a component, and merging the third component prediction results to obtain the small-image prediction result includes: performing the following processing on each third component prediction result: converting the third component prediction result into a binary image of the same size as the CAD image to be recognized, scaling the binary image by a factor of 0.5, and filling the interiors of the scaled mark frames; performing connected-domain analysis on the processed binary images corresponding to the third component prediction results to obtain merged mark frames; and scaling the merged mark frames by a factor of 2 to obtain the small-image prediction result.
It should be noted that, in order to further improve the identification rate of oversized components, an idea of breaking the whole into parts is adopted: a part of the component is predicted each time, and the parts are then combined to obtain the identification result for the oversized component. Fig. 7 is a schematic diagram of the segmented prediction and merging process, which mainly includes the following steps:
Segment detection: the component identification model detects segments of the large component, and the detected mark frame of each segment (denoted bounding box, abbreviated bbox, specifically a circumscribed rectangle) is retained. The more segment bboxes there are, the more accurate the detection result, but the greater the computational cost; to balance accuracy and computation, the number of segment bboxes can be kept within a set range, i.e., not less than a first preset number and not more than a second preset number, where the first preset number is less than the second preset number.
Converting bboxes to a binary image: in order to quickly and accurately merge the large number of segment bboxes obtained by segment detection into a few large bboxes, the sub-images to which the segment bboxes belong are converted into a binary image proportional to the original image (i.e., the CAD image to be identified), the binary image is scaled by a factor of 0.5, and the interiors of the bboxes are filled.
Calculating the merged bboxes: connected-domain analysis is performed on the binary image, connected regions are merged to obtain merged bboxes, and the coordinates of the merged bboxes are upsampled by a factor of 2 to obtain the final identification result.
Compared with directly computing the IoU (intersection over union) between the sub-images to which the bboxes belong, this bbox-merging method reduces the time consumed by 90%; the final identification result is obtained by applying NMS (non-maximum suppression) to the combined results of segment detection and bbox merging.
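The rasterize-downscale-merge-upscale procedure for segment bboxes can be sketched as follows. The BFS connected-component labeling is an illustrative stand-in for whatever connected-domain analysis the real implementation uses:

```python
import numpy as np
from collections import deque

def merge_bboxes(bboxes, shape, scale=0.5):
    """Merge overlapping segment bboxes (x0, y0, x1, y1) by rasterizing
    them into a binary mask at `scale`, labeling connected regions, and
    scaling each region's extent back up (the 0.5x down / 2x up trick)."""
    sh, sw = int(shape[0] * scale), int(shape[1] * scale)
    mask = np.zeros((sh, sw), dtype=bool)
    for x0, y0, x1, y1 in bboxes:
        # Fill the interior of each downscaled bbox.
        mask[int(y0 * scale):int(y1 * scale) + 1,
             int(x0 * scale):int(x1 * scale) + 1] = True

    merged, seen = [], np.zeros_like(mask)
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        # BFS flood fill of one 4-connected region.
        q, ys, xs = deque([(sy, sx)]), [], []
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            ys.append(y); xs.append(x)
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < sh and 0 <= nx < sw and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        # Region extent, scaled back to full resolution.
        merged.append((min(xs) / scale, min(ys) / scale,
                       max(xs) / scale, max(ys) / scale))
    return merged
```

Touching or overlapping segment boxes land in the same connected region, so one pass over the mask replaces pairwise IoU tests between all segment boxes.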
The multi-scale prediction process in the embodiment of the present application is shown in fig. 8. In the large-image-level multi-scale prediction process, the CAD image to be identified is scaled at three scales, each scale yielding one prediction result, and the three prediction results are merged by NMS to obtain the large-image-level prediction result. In the small-image-level multi-scale prediction process, after the large image is cut, each cut image is scaled at three scales, each scale yielding one prediction result per cut image, and the cut-image prediction results are merged by NMS to obtain the small-image-level prediction result. Finally, the large-image-level and small-image-level prediction results are merged by NMS to obtain the final component identification result.
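The NMS merging used at each level can be illustrated with a standard greedy implementation; the IoU threshold of 0.5 is an assumed value, not taken from the text:

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression.

    boxes: sequence of (x0, y0, x1, y1); scores: per-box confidence.
    Returns the indices of the boxes kept, highest score first.
    """
    boxes = np.asarray(boxes, dtype=np.float64)
    order = np.argsort(np.asarray(scores))[::-1]  # best score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # IoU between the top box and all remaining boxes.
        x0 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y0 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x1 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y1 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x1 - x0, 0, None) * np.clip(y1 - y0, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thr]  # drop boxes overlapping the kept one
    return keep
```

Running this once per level and once on the pooled large-image and small-image results matches the three merge points described above.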
In the embodiment of the application, the CAD drawing is converted into the original CAD image, the marking frames are added to the complete component in the original CAD image in a segmented mode to obtain the marked CAD image, the marking frames of adjacent segments have overlapping areas, the marked CAD image and the original CAD image are cut respectively to obtain training sample data, and therefore the component identification model obtained by training the preset neural network model by adopting the training sample data can effectively identify the large component, the identification success rate of the large component is improved, and the probability of missing identification and incomplete identification is reduced.
After the sub-image set to be recognized is obtained based on the CAD image to be recognized, the component recognition model obtained through training of training sample data provided by the method is used for recognizing the sub-image set to be recognized, a component prediction result is obtained, the component in the CAD image to be recognized is recognized through the component prediction result, and the component recognition accuracy is improved.
The CAD image to be identified is processed to obtain the first sub-image sets to be identified at the large-image level and the second sub-image sets to be identified at the small-image level. The first sub-image sets at the large-image level are identified as a whole to obtain an overall detection result; the second sub-image sets at the small-image level are identified to obtain local detection results, which are combined into an integrated overall detection result. The overall detection result and the integrated overall detection result are then combined to obtain the finally identified components, so that both the whole and the parts are taken into account, further improving the component identification accuracy.
Based on the same concept, an embodiment of the present application provides an apparatus for obtaining sample data, and specific implementation of the apparatus may refer to the description in the method embodiment section, and repeated details are not repeated, as shown in fig. 9, the apparatus mainly includes:
an obtaining module 801, configured to obtain an original CAD image and an annotated CAD image, where the annotated CAD image includes an annotated frame of each segment of a complete component, and the annotated frames of adjacent segments have an overlapping region;
a drawing cutting module 802, configured to cut a drawing based on the original CAD image to obtain a first sub-image set, and cut a drawing based on the labeled CAD image to obtain a second sub-image set;
an integrating module 803, configured to use the first sub-image set and the second sub-image set as training sample data, where the training sample data is used to train a preset neural network model to obtain a component identification model.
Based on the same concept, the embodiment of the present application provides a component identification apparatus, and the specific implementation of the apparatus may refer to the description of the method embodiment section, and repeated descriptions are omitted, as shown in fig. 10, the apparatus mainly includes:
an obtaining module 901, which obtains a CAD image to be identified;
a processing module 902, configured to obtain a sub-image set to be recognized based on the CAD image to be recognized;
a prediction module 903, configured to input the sub-image set to be recognized into a component recognition model and obtain a component prediction result output by the component recognition model; the component recognition model is obtained by training a neural network model with training sample data, and the training sample data is obtained by the method in the above embodiment;
an identifying module 904, configured to identify a component in the CAD image to be identified based on the component prediction result.
Based on the same concept, an embodiment of the present application further provides an electronic device, as shown in fig. 11, where the electronic device mainly includes: a processor 1001, a memory 1002, and a communication bus 1003, wherein the processor 1001 and the memory 1002 communicate with each other via the communication bus 1003. The memory 1002 stores therein a program executable by the processor 1001, and the processor 1001 executes the program stored in the memory 1002 to implement the steps described in the above method embodiments.
The communication bus 1003 mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 1003 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 11, but this is not intended to represent only one bus or type of bus.
The Memory 1002 may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Alternatively, the memory may be at least one storage device located remotely from the aforementioned processor 1001.
The Processor 1001 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc., and may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic devices, discrete gates or transistor logic devices, and discrete hardware components.
In a further embodiment of the present application, there is also provided a computer-readable storage medium having stored thereon a computer program which, when run on a computer, causes the computer to perform the method steps described in the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that includes one or more of the available media. The available media may be magnetic media (e.g., floppy disks, hard disks, tapes, etc.), optical media (e.g., DVDs), or semiconductor media (e.g., solid state drives), among others.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
The above description is merely illustrative of particular embodiments of the invention that enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (14)

1. A method for obtaining sample data, comprising:
acquiring an original CAD image and an annotation CAD image, wherein the annotation CAD image comprises annotation frames of all sections of a complete component, and the annotation frames of adjacent sections have overlapping areas;
cutting the original CAD image to obtain a first sub-image set, and cutting the marked CAD image to obtain a second sub-image set;
and taking the first sub-image set and the second sub-image set as training sample data, wherein the training sample data is used for training a preset neural network model to obtain a component recognition model.
2. The method of claim 1, wherein the cropping based on the original CAD image to obtain a first set of sub-images, and the cropping based on the annotated CAD image to obtain a second set of sub-images, comprises:
carrying out multi-scale scaling on the original CAD image to obtain at least one scaled original CAD image, and cutting the at least one scaled original CAD image and the original CAD image to obtain the first sub-image set; the first sub-image set comprises a cutting set corresponding to each zoomed original CAD image and a cutting set corresponding to the original CAD image;
performing multi-scale scaling on the labeling CAD image to obtain at least one scaled labeling CAD image, and performing image cutting on the at least one scaled labeling CAD image and the labeling CAD image to obtain the second sub-image set; the second sub-image set comprises a cutting set corresponding to each zoomed labeling CAD image and a cutting set corresponding to the labeling CAD image;
wherein the scaling of the multi-scale scaling is greater than 0 and less than 1.
3. The method of claim 2, wherein multi-scaling the original CAD image comprises:
after performing an image expansion operation on the original CAD image, performing multi-scale scaling on the expanded original CAD image by using a bilinear interpolation algorithm;
carrying out multi-scale scaling on the labeled CAD image, wherein the multi-scale scaling comprises the following steps:
and after performing an image expansion operation on the labeled CAD image, performing multi-scale scaling on the expanded labeled CAD image by using a bilinear interpolation algorithm.
4. The method of claim 2, wherein the cropping the at least one scaled original CAD image and the original CAD image to obtain the first set of sub-images comprises:
according to a designated sliding route, adopting a cutting template with a first preset size to perform sliding cutting on the original CAD image to obtain a first cutting set;
performing the following graph cutting processing on each scaled original CAD image respectively: according to the designated sliding route, adopting a cutting template with a second preset size to perform sliding cutting on the zoomed original CAD image to obtain a second cutting set;
obtaining the first sub-image set based on the first cut map set and each second cut map set;
wherein the designated sliding route comprises at least one of:
a connecting line from the upper left vertex to the upper right vertex of the image;
a connecting line from the upper left vertex to the lower left vertex of the image;
a connecting line from the upper right vertex to the lower right vertex of the image;
a connecting line from a lower left vertex to a lower right vertex of the image;
a connecting line from the upper left vertex to the lower right vertex of the image;
the connecting line from the upper right vertex to the lower left vertex of the image.
5. The method of claim 2, wherein the cropping the at least one scaled annotated CAD image to obtain the second set of sub-images comprises:
according to the designated sliding route, a cutting template with a third preset size is adopted to perform sliding cutting on the marked CAD image to obtain a third cutting set;
and respectively carrying out the following image cutting processing on each zoomed labeling CAD image: according to the appointed sliding route, adopting a cutting template with a fourth preset size to perform sliding cutting on the zoomed marked CAD image to obtain a fourth cutting set;
obtaining the second sub-image set based on the third cut picture set and each fourth cut picture set;
wherein the designated sliding route comprises at least one of:
a connecting line from the upper left vertex to the upper right vertex of the image;
a connecting line from the upper left vertex to the lower left vertex of the image;
a connecting line from the upper right vertex to the lower right vertex of the image;
a connecting line from a lower left vertex to a lower right vertex of the image;
a connecting line from the upper left vertex to the lower right vertex of the image;
the connecting line from the upper right vertex to the lower left vertex of the image.
6. The method according to claim 4, wherein obtaining the first set of sub-images based on the first set of cutmaps and each of the second set of cutmaps comprises:
acquiring the number of cutting images corresponding to each component to be identified from the first cutting image set and each second cutting image set;
taking the component to be identified with the cut number less than the threshold value as a target component;
taking the cutting drawings of the target component in the first cutting drawing set and the second cutting drawing set as first target cutting drawings;
taking the target component as a center, and cutting the first target cutting chart by adopting a cutting chart template with a fifth preset size to obtain a fifth cutting chart set;
obtaining information entropy of each cutting chart in the fifth cutting chart set, obtaining a second target cutting chart of which the information entropy is larger than a set value, cutting the second target cutting chart by adopting a cutting chart template with a sixth preset size, and obtaining a sixth cutting chart set, wherein the positions of the target components in each cutting chart included in the sixth cutting chart set are randomly distributed;
and integrating the first cutout set, the second cutout set, the fifth cutout set and the sixth cutout set to obtain the first sub-image set.
7. The method of claim 5, wherein obtaining the second set of sub-images based on the third set of cutmaps and each of the fourth set of cutmaps comprises:
acquiring the number of the cutting patterns corresponding to each component to be identified from the third cutting pattern set and each fourth cutting pattern set;
taking the component to be identified with the cut number less than the threshold value as a target component;
taking the cutting drawings of the target component in the third cutting drawing set and the fourth cutting drawing set as third target cutting drawings;
taking the target component as a center, and cutting the third target cutting chart by adopting a cutting chart template with a seventh preset size to obtain a seventh cutting chart set;
obtaining information entropy of all the cutting pictures in the seventh cutting picture set, obtaining a fourth target cutting picture of which the information entropy is larger than a set value, and cutting the fourth target cutting picture by adopting a cutting picture template with an eighth preset size to obtain an eighth cutting picture set, wherein the positions of the target components in all the cutting pictures included in the eighth cutting picture set are randomly distributed;
and integrating the third map cutting set, the fourth map cutting set, the seventh map cutting set and the eighth map cutting set to obtain the second sub-image set.
8. A component identification method, comprising:
acquiring a CAD image to be identified;
acquiring a sub-image set to be identified based on the CAD image to be identified;
inputting the sub-image set to be identified into a component recognition model to obtain a component prediction result output by the component recognition model, wherein the component recognition model is obtained by training a preset neural network model on training sample data, the training sample data being obtained by the method of any one of claims 1 to 7;
and identifying the component in the CAD image to be identified based on the component prediction result.
9. The method of claim 8, wherein obtaining the sub-image set to be identified based on the CAD image to be identified comprises:
performing multi-scale scaling on the CAD image to be identified to obtain at least one scaled CAD image to be identified;
obtaining the sub-image set to be identified based on the at least one scaled CAD image to be identified and the CAD image to be identified;
wherein each scaling ratio of the multi-scale scaling is greater than 0 and less than 1.
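The multi-scale scaling of claim 9 (every ratio strictly between 0 and 1) can be sketched with nearest-neighbour downscaling in plain NumPy. The ratios below are illustrative; the patent does not fix which scales are used.

```python
import numpy as np

def multiscale(image: np.ndarray, scales=(0.25, 0.5, 0.75)):
    """Downscale an image at several ratios in (0, 1) by nearest-neighbour
    index sampling, returning one scaled copy per ratio."""
    out = []
    for s in scales:
        assert 0 < s < 1, "claim 9 requires ratios strictly between 0 and 1"
        h = max(1, int(round(image.shape[0] * s)))
        w = max(1, int(round(image.shape[1] * s)))
        # map each output pixel back to its nearest source pixel
        ys = (np.arange(h) / s).astype(int).clip(0, image.shape[0] - 1)
        xs = (np.arange(w) / s).astype(int).clip(0, image.shape[1] - 1)
        out.append(image[np.ix_(ys, xs)])
    return out
```

In practice an anti-aliased resize (e.g. from OpenCV or Pillow) would likely replace the nearest-neighbour sampling.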
10. The method of claim 9, wherein obtaining the sub-image set to be identified based on the at least one scaled CAD image to be identified and the CAD image to be identified comprises:
performing mirror symmetry on each scaled CAD image to be identified to obtain at least one first sub-image set to be identified, wherein each first sub-image set to be identified comprises the scaled image to be identified of one scaling ratio and the image obtained from it by mirror symmetry;
cropping each scaled CAD image to be identified and the CAD image to be identified, respectively, to obtain at least one first cropped-image set and a second cropped-image set corresponding to the CAD image to be identified, wherein each first cropped-image set comprises the cropped images of the scaled CAD image to be identified of one scaling ratio;
performing mirror symmetry on the cropped images in each first cropped-image set and the second cropped-image set, and scaling the resulting images to obtain at least one second sub-image set to be identified, wherein each second sub-image set to be identified comprises cropped images of one scaling ratio;
and integrating the CAD image to be identified, each first sub-image set to be identified and each second sub-image set to be identified to obtain the sub-image set to be identified.
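The set construction in claim 10 combines two primitives: pairing each scaled image with its mirror-symmetric copy, and cropping images into fixed-size tiles. A minimal sketch, assuming horizontal mirroring and a sliding-window crop; the tile size and stride are illustrative choices, not values from the patent.

```python
import numpy as np

def mirror_pair(image: np.ndarray):
    """One sub-image set to be identified: a scaled image together with
    its horizontally mirrored copy."""
    return [image, np.fliplr(image)]

def grid_crops(image: np.ndarray, size: int, stride: int):
    """Slide a size-by-size window over the image to produce the
    cropped-image set for one scaling ratio."""
    h, w = image.shape[:2]
    return [image[y:y + size, x:x + size]
            for y in range(0, h - size + 1, stride)
            for x in range(0, w - size + 1, stride)]
```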
11. The method of claim 10, wherein inputting the sub-image set to be identified into a component recognition model to obtain a component prediction result output by the component recognition model comprises:
performing the following filtering on each cropped image in the second sub-image sets to be identified: calculating the two-dimensional information entropy of the cropped image, and deleting the cropped image from its second sub-image set to be identified when the obtained entropy value is smaller than a set entropy value;
and inputting each first sub-image set to be identified and each second sub-image set to be identified into the component recognition model, respectively, to obtain the component prediction results corresponding to the first sub-image sets to be identified and the second sub-image sets to be identified.
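The two-dimensional information entropy used as the filter in claim 11 is commonly computed over the joint distribution of each pixel's gray level and its 3x3 neighbourhood mean. The patent does not fix a formula, so the sketch below assumes that common definition.

```python
import numpy as np

def entropy_2d(gray: np.ndarray) -> float:
    """Two-dimensional information entropy of an 8-bit grayscale image:
    entropy of the joint (pixel value, 3x3 neighbourhood mean) histogram."""
    g = gray.astype(np.float64)
    padded = np.pad(g, 1, mode='edge')
    # 3x3 neighbourhood mean without SciPy: average the nine shifted views
    nbr = sum(padded[dy:dy + g.shape[0], dx:dx + g.shape[1]]
              for dy in range(3) for dx in range(3)) / 9.0
    # encode each (value, neighbourhood mean) pair as a single bin index
    pairs = gray.astype(np.int64) * 256 + nbr.astype(np.int64)
    counts = np.bincount(pairs.ravel(), minlength=256 * 256)
    prob = counts / counts.sum()
    prob = prob[prob > 0]
    return float(-(prob * np.log2(prob)).sum())
```

Cropped images scoring below the set entropy value, typically blank regions of the drawing, would be deleted before inference.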
12. The method of claim 11, wherein identifying the component in the CAD image to be identified based on the component prediction result comprises:
taking the component prediction result corresponding to each first sub-image set to be identified as a first component prediction result, taking the component prediction result corresponding to the CAD image to be identified as a second component prediction result, and combining the first component prediction results and the second component prediction result to obtain a large-image prediction result;
taking the component prediction result corresponding to each second sub-image set to be identified as a third component prediction result, and combining the third component prediction results to obtain a small-image prediction result;
and combining the large-image prediction result and the small-image prediction result as the identification result of the component in the CAD image to be identified.
13. The method of claim 12, wherein each third component prediction result comprises marker boxes identifying a component or part of a component;
and merging the third component prediction results to obtain the small-image prediction result comprises:
performing the following processing on each third component prediction result: converting the third component prediction result into a binary image of the same size as the CAD image to be identified, scaling the binary image by a factor of 0.5, and filling the scaled marker boxes with color;
performing connected-domain analysis on the processed binary images corresponding to the third component prediction results to obtain merged marker boxes;
and scaling the merged marker boxes by a factor of 2 to obtain the small-image prediction result.
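Claim 13 merges overlapping marker boxes by painting them onto a half-scale binary image, running connected-domain analysis, and scaling the resulting boxes back up by 2. A minimal sketch of that merge, assuming axis-aligned `(x0, y0, x1, y1)` pixel boxes and a plain BFS in place of a library connected-components routine; the function name `merge_boxes` is illustrative.

```python
from collections import deque
import numpy as np

def merge_boxes(boxes, shape):
    """Merge overlapping marker boxes: fill each box on a half-scale
    binary mask, take 4-connected components, and return the component
    bounding boxes scaled back up by a factor of 2."""
    h, w = shape
    mask = np.zeros((h // 2, w // 2), dtype=bool)
    for x0, y0, x1, y1 in boxes:                    # fill half-scale boxes
        mask[y0 // 2:y1 // 2 + 1, x0 // 2:x1 // 2 + 1] = True
    merged, seen = [], np.zeros_like(mask)
    for sy, sx in zip(*np.nonzero(mask)):           # BFS per connected region
        if seen[sy, sx]:
            continue
        q, ys, xs = deque([(sy, sx)]), [], []
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            ys.append(y); xs.append(x)
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
        merged.append((min(xs) * 2, min(ys) * 2,    # scale back by 2
                       max(xs) * 2, max(ys) * 2))
    return merged
```

In a production setting the BFS would likely be replaced by `cv2.connectedComponentsWithStats` or `scipy.ndimage.label`, which implement the same connected-domain analysis.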
14. An electronic device, comprising a processor, a memory and a communication bus, wherein the processor and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
and the processor is configured to execute the program stored in the memory to implement the method of any one of claims 1 to 7 or the method of any one of claims 8 to 13.
CN202211160385.9A 2022-09-22 2022-09-22 Sample data acquisition and component identification method and electronic equipment Pending CN115661851A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211160385.9A CN115661851A (en) 2022-09-22 2022-09-22 Sample data acquisition and component identification method and electronic equipment


Publications (1)

Publication Number Publication Date
CN115661851A true CN115661851A (en) 2023-01-31

Family

ID=84986470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211160385.9A Pending CN115661851A (en) 2022-09-22 2022-09-22 Sample data acquisition and component identification method and electronic equipment

Country Status (1)

Country Link
CN (1) CN115661851A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116189194A (en) * 2023-04-27 2023-05-30 北京中昌工程咨询有限公司 Drawing enhancement segmentation method for engineering modeling



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination