CN114898357A - Defect identification method and device, electronic equipment and computer readable storage medium - Google Patents

Defect identification method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN114898357A
CN114898357A
Authority
CN
China
Prior art keywords
feature
image
level
current
defect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210812741.4A
Other languages
Chinese (zh)
Other versions
CN114898357B (en)
Inventor
王远
刘枢
吕江波
沈小勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Smartmore Technology Co Ltd
Original Assignee
Shenzhen Smartmore Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Smartmore Technology Co Ltd filed Critical Shenzhen Smartmore Technology Co Ltd
Priority to CN202210812741.4A priority Critical patent/CN114898357B/en
Publication of CN114898357A publication Critical patent/CN114898357A/en
Application granted granted Critical
Publication of CN114898357B publication Critical patent/CN114898357B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/60: Type of objects
    • G06V 20/64: Three-dimensional objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a defect identification method and device, an electronic device, and a computer-readable storage medium in the technical field of precision instrument detection. The method comprises the following steps: acquiring a point cloud image corresponding to an industrial instrument to be identified and feature information of each pixel point in the point cloud image, the feature information comprising initial position information, depth information, reflection intensity, and preset point cloud features; determining a feature image corresponding to the point cloud image according to the feature information of each pixel point; performing data enhancement processing on the feature image to obtain a processed feature image; and inputting the processed feature image into a trained defect recognition model, which outputs a defect region image contained in the processed feature image and the defect category corresponding to that defect region image. With this method and device, the defect detection efficiency for industrial instruments can be improved.

Description

Defect identification method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of precision instrument detection technologies, and in particular, to a defect identification method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of industrial intelligence, artificial intelligence plays an increasingly important role in industry. In high-precision fields such as new energy and precision instrument manufacturing, the required detection precision places heavy demands on the workload, time, and headcount of quality inspection personnel. For example, in the lithium battery industry, a slight bulge or pit can cause a battery to explode later in its life, so every battery is subject to extremely strict inspection requirements; for a factory, inspection throughput is central to production efficiency. How to improve detection efficiency has therefore become an increasingly important problem in many high-precision machining industries.
In the conventional approach, a 3D semantic segmentation algorithm locates and labels the category of each pixel in a three-dimensional point cloud image of the instrument under inspection, and the image is screened according to those per-pixel categories to identify nonconforming devices.
However, the conventional 3D semantic segmentation algorithm suffers from low efficiency when applied to the detection of industrial instruments.
Disclosure of Invention
The application provides a defect identification method, a defect identification device, an electronic device and a computer-readable storage medium, which can improve the defect detection efficiency of an industrial instrument.
In a first aspect, the present application provides a defect identification method, including:
acquiring a point cloud image corresponding to an industrial instrument to be identified and characteristic information of each pixel point in the point cloud image; the characteristic information comprises initial position information, depth information, reflection intensity and preset point cloud characteristics;
determining a characteristic image corresponding to the point cloud image according to the characteristic information of each pixel point;
carrying out data enhancement processing on the characteristic image to obtain a processed characteristic image;
and inputting the processed characteristic image into a trained defect recognition model for processing, and outputting a defect region image contained in the processed characteristic image and a defect type corresponding to the defect region image.
In a second aspect, the present application further provides a defect identification apparatus, including:
the acquisition module is used for acquiring a point cloud image corresponding to the industrial instrument to be identified and characteristic information of each pixel point in the point cloud image; the characteristic information comprises initial position information, depth information, reflection intensity and preset point cloud characteristics;
the determining module is used for determining a characteristic image corresponding to the point cloud image according to the characteristic information of each pixel point;
the processing module is used for carrying out data enhancement processing on the characteristic image to obtain a processed characteristic image;
and the recognition module is used for inputting the processed characteristic image into the trained defect recognition model for processing, and outputting a defect area image contained in the processed characteristic image and a defect type corresponding to the defect area image.
In a third aspect, the present application further provides an electronic device, where the electronic device includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium, having a computer program stored thereon, which, when executed by a processor, performs the steps of the method described above.
According to the method and device, a point cloud image corresponding to the industrial instrument to be identified is acquired, a feature image is determined from the feature information of each pixel point in the point cloud image, and data enhancement processing is applied to the feature image, so that the processed feature image can be input into a trained defect recognition model to obtain the defect region image it contains and the corresponding defect category. By mapping the depth information, reflection intensity, and point cloud features of the three-dimensional data onto a reconstructed feature image and enhancing that image, the method reduces the dimensionality of the data and thus the volume of data to be processed. Compared with conventional three-dimensional point cloud semantic segmentation, identifying defects from the dimension-reduced data improves the defect detection efficiency for industrial instruments.
Drawings
Fig. 1 is an application environment diagram of a defect identification method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a defect identification method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another defect identification method according to an embodiment of the present application;
FIG. 4 is a diagram illustrating various network features in one particular embodiment;
fig. 5 is a schematic flowchart of another defect identification method according to an embodiment of the present application;
fig. 6 is a schematic flowchart of another defect identification method according to an embodiment of the present application;
fig. 7 is a schematic flowchart of another defect identification method according to an embodiment of the present application;
fig. 8 is a block diagram illustrating a defect identification apparatus according to an embodiment of the present disclosure;
fig. 9 is an internal structural diagram of an electronic device according to an embodiment of the present application;
fig. 10 is an internal structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The main pain point of high-precision industry is that there are strict requirements on the depth or height of surface defects (for example, a dent, protrusion, or wrinkle may not exceed a given height), and most defects are height-related, so ordinary 2D images cannot meet the requirements of defect detection. Most manufacturers therefore introduce 3D point cloud data to obtain the defect height information of industrial instruments, that is, they acquire point cloud data and classify it with a 3D semantic segmentation algorithm. The essence of a semantic segmentation algorithm is to locate and label the class of each pixel in the picture, and its expressive power depends mainly on the specific design of the network structure. Most existing semantic segmentation algorithms cannot yet be used well in the industrial field, where over-rejection (false kills) and missed detections are serious problems. 3D semantic segmentation in particular is limited by processing-time requirements and suffers from low efficiency when applied to the detection of industrial instruments.
For the above reasons, the embodiment of the present application provides a defect identification method, which may be applied in the application environment shown in fig. 1. Wherein the electronic device 102 communicates with the server 104 over a communication network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104, or may be located on the cloud or other network server. The electronic device 102 may be connected with a 3D line scanning device. The electronic device 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices. A 3D line scanning device may be used to acquire 3D point cloud data. The server 104 may be implemented as a stand-alone server or as a server cluster comprised of multiple servers.
In some embodiments, as shown in fig. 2, a defect identification method is provided, which is described by taking the method as an example applied to the electronic device 102 in fig. 1, and includes the following steps:
step S210, acquiring a point cloud image corresponding to the industrial instrument to be identified and characteristic information of each pixel point in the point cloud image; the characteristic information comprises initial position information, depth information, reflecting intensity and preset point cloud characteristics.
The industrial instrument to be identified can be any industrial instrument that needs to be inspected on an industrial production line. The point cloud image refers to a 3D point cloud image and may be captured by a 3D line-scan camera or a depth camera. The feature information refers to three-dimensional point cloud features associated with the pixel points of the point cloud image; for example, it may include coordinate information, reflection intensity, normal vectors, color information, and the like. The initial position information refers to the coordinates of each pixel point in two directions of the point cloud image, for example the x and y of each pixel point's (x, y, z) coordinates. The depth information refers to each pixel point's coordinate along the direction perpendicular to those two directions, generally the z of the (x, y, z) coordinates. The preset point cloud features may be point cloud features selected according to the detection requirements; for example, if a certain industrial instrument needs defect identification based on normal vectors, the preset point cloud features may include the normal vector.
Specifically, the electronic device acquires a point cloud image of the industrial instrument to be identified in the 3D line scan camera, and acquires feature information of each pixel point in the point cloud image, where the feature information may include initial position information, depth information, reflection intensity, and a preset point cloud feature. The preset point cloud characteristics obtained from the point cloud images corresponding to different industrial instruments can be different.
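For a concrete picture of the data in step S210, the sketch below represents an organized point cloud as an H x W grid in which every pixel point carries its (x, y, z) coordinates, reflection intensity, and a normal vector as the preset point cloud feature. All names, shapes, and the random data are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

# Hypothetical organized point cloud from a 3D line-scan camera: an H x W
# grid where each pixel point carries (x, y, z), reflection intensity, and
# a preset point cloud feature (here, a unit surface normal).
H, W = 4, 5
rng = np.random.default_rng(0)

xyz = rng.normal(size=(H, W, 3))          # initial position (x, y) + depth (z)
intensity = rng.random((H, W))            # reflection intensity
normals = rng.normal(size=(H, W, 3))      # preset point cloud feature
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

def pixel_features(i, j):
    """Gather the feature information of one pixel point."""
    return {
        "position": xyz[i, j, :2],        # initial position information
        "depth": xyz[i, j, 2],            # depth information
        "intensity": intensity[i, j],
        "normal": normals[i, j],
    }

feat = pixel_features(0, 0)
```

In practice the grid would come from the scanning device rather than a random generator; only the per-pixel bookkeeping is the point here.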
Step S220, determining a feature image corresponding to the point cloud image according to the feature information of each pixel point.
The characteristic image is a data image reconstructed from the point cloud image.
Specifically, the feature image may include the depth information and the preset point cloud features. For example, if the point cloud image includes (x, y, z) coordinates, normal vectors, and reflection intensities, then the feature image reconstructed from it includes the depth z, the normal vectors, and the reflection intensities.
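If the preset point cloud feature is a normal vector, the reconstruction of step S220 can be sketched as stacking per-pixel depth, normal vector, and reflection intensity into the channels of a 2D feature image. The channel order and count here are assumptions for illustration:

```python
import numpy as np

H, W = 4, 5
rng = np.random.default_rng(1)
z = rng.normal(size=(H, W))               # depth information
normals = rng.normal(size=(H, W, 3))      # preset point cloud feature
intensity = rng.random((H, W))            # reflection intensity

def build_feature_image(z, normals, intensity):
    # Stack depth, normal vector, and reflection intensity into a
    # 5-channel 2D feature image of shape (H, W, 5).
    return np.concatenate([z[..., None], normals, intensity[..., None]],
                          axis=-1)

feat_img = build_feature_image(z, normals, intensity)
```

The initial position information (x, y) is carried implicitly by the grid coordinates of each pixel, which is what makes the dimensionality reduction possible.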
And step S230, performing data enhancement processing on the characteristic image to obtain a processed characteristic image.
Specifically, feature enhancement may be performed on each pixel point in the feature image, for example by introducing optional point cloud features for each pixel point so that its features are enriched; the three-dimensional data in the point cloud image can then be mapped onto two-dimensional channels to obtain the processed feature image.
Therefore, data dimension reduction can be achieved, the data volume needing to be processed is reduced, the efficiency of classifying each pixel point in the point cloud image corresponding to the industrial instrument can be improved, and the efficiency of a production line is improved.
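The disclosure does not fix the exact enhancement operations, so the following is only one plausible sketch of step S230: min-max normalizing each channel and appending an optional derived channel (here a depth-gradient magnitude) for every pixel point:

```python
import numpy as np

def enhance(feature_image):
    """Hypothetical enhancement: min-max normalize each channel to [0, 1]
    and append a depth-gradient magnitude channel as an optional extra
    point cloud feature for every pixel point."""
    lo = feature_image.min(axis=(0, 1), keepdims=True)
    hi = feature_image.max(axis=(0, 1), keepdims=True)
    norm = (feature_image - lo) / np.where(hi > lo, hi - lo, 1.0)
    gy, gx = np.gradient(norm[..., 0])     # gradients of the depth channel
    grad = np.hypot(gx, gy)[..., None]
    return np.concatenate([norm, grad], axis=-1)

rng = np.random.default_rng(2)
processed = enhance(rng.normal(size=(6, 7, 5)))   # (H, W, 5) -> (H, W, 6)
```

Whatever the concrete operations, the result stays a 2D multi-channel image, which is what keeps the downstream data volume small.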
And step S240, inputting the processed characteristic image into the trained defect identification model for processing, and outputting a defect area image contained in the processed characteristic image and a defect type corresponding to the defect area image.
The trained defect recognition model may be a model for classifying each point in the feature image. The defect area image may be an image corresponding to the feature image, which may include category information of the respective points. The defect category may be category information acquired from the defect area image.
Specifically, a defect region image corresponding to the feature image is obtained through the trained defect recognition model, and a defect type corresponding to the defect region image can be obtained according to the defect region image.
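Step S240 can be illustrated as follows, assuming the trained model emits per-pixel class scores; the class list and the post-processing are hypothetical stand-ins, not the patented model itself:

```python
import numpy as np

CLASS_NAMES = ["background", "dent", "bump", "wrinkle"]  # hypothetical classes

def identify_defects(scores):
    """Given per-pixel class scores of shape (H, W, C) from a trained
    model, return the defect-region mask and the defect categories
    present in that region."""
    labels = scores.argmax(axis=-1)
    mask = labels != 0                    # class 0 = non-defective background
    cats = sorted(CLASS_NAMES[c] for c in np.unique(labels[mask]))
    return mask, cats

# Toy scores: one pixel strongly scored as "dent", the rest as background.
scores = np.zeros((3, 3, 4))
scores[..., 0] = 1.0
scores[1, 1] = [0.0, 5.0, 0.0, 0.0]
mask, cats = identify_defects(scores)
```

The mask plays the role of the defect region image, and the category list the role of the defect categories read off from it.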
In this embodiment, the point cloud image corresponding to the industrial instrument to be identified is acquired, the feature image is determined from the feature information of each pixel point in the point cloud image, and data enhancement processing is applied, so that the processed feature image can be input into a trained defect recognition model to obtain the defect region image it contains and the corresponding defect category. By mapping the depth information, reflection intensity, and point cloud features of the three-dimensional data onto the reconstructed feature image and enhancing that image, the method reduces the dimensionality of the data and the volume of data to be processed; compared with conventional three-dimensional point cloud semantic segmentation, identifying defects from the dimension-reduced data improves the defect detection efficiency for industrial instruments.
In some embodiments, as shown in fig. 3, before inputting the processed feature image into the trained defect recognition model, and outputting a defect region image included in the processed feature image, and a defect category corresponding to the defect region image, the method further includes:
step S310, obtaining a sample point cloud image corresponding to a sample industrial instrument, a sample characteristic image corresponding to the sample point cloud image and a sample defect area image corresponding to the sample characteristic image; the sample characteristic image is obtained by carrying out data reconstruction processing and data enhancement processing on the sample point cloud image.
Wherein, the sample industrial instrument refers to an industrial instrument with known defects. The sample point cloud image refers to a 3D point cloud image, and may be an image captured by a 3D line scan camera or a depth camera. The sample feature image may be a feature image processed by the sample point cloud image through steps S220 and S230. The sample defect area image refers to an image including category information of each point, and a defect recognition model to be trained can be trained through the sample defect area image.
Specifically, a sample point cloud image corresponding to a sample industrial instrument of which the defect type is known in advance can be obtained, data reconstruction processing and data enhancement processing are performed on the sample point cloud image, a sample characteristic image corresponding to the sample point cloud image can be obtained, and a sample defect area image corresponding to the sample characteristic image is obtained at the same time. Therefore, training data can be provided for training the defect recognition model to be trained.
Step S320, inputting the sample characteristic image into a defect recognition model to be trained, and performing data enhancement processing on the sample characteristic image to obtain input network characteristics; and carrying out image fusion processing on the input network characteristics to obtain a predicted defect area image.
The input network features are the network features to be processed by the defect recognition model being trained; the data enhancement processing produces network features in the form that model requires. Image fusion processing refers to the processing of the input data (i.e., the input network features) during training. The predicted defect region image is the prediction output by the defect recognition model during training; the model is trained so that, under certain conditions, this prediction approaches the actual result, namely the sample defect region image.
And step S330, training the defect identification model to be trained by utilizing the predicted defect area image and the sample defect area image to obtain the trained defect identification model.
Specifically, the defect recognition model may be trained using the predicted defect region image it outputs and the sample defect region image, continuously updating the model's parameters until the difference between the predicted and sample defect region images falls within an allowable range, at which point training is complete and the trained defect recognition model is obtained.
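The parameter-update loop described above can be sketched with a toy linear per-pixel classifier trained by gradient descent on a pixel-wise cross-entropy loss; this stands in for the defect recognition model, whose actual architecture the text does not specify:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def train_step(W, feats, labels, lr=0.5):
    """One gradient step of pixel-wise cross-entropy training for a
    linear per-pixel classifier (a toy stand-in for the model)."""
    probs = softmax(feats @ W)                       # (N, C) class probs
    onehot = np.eye(W.shape[1])[labels]
    grad = feats.T @ (probs - onehot) / len(feats)   # dLoss/dW
    loss = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    return W - lr * grad, loss

rng = np.random.default_rng(3)
feats = rng.normal(size=(32, 5))          # per-pixel feature vectors
labels = (feats[:, 0] > 0).astype(int)    # toy defect / non-defect labels
W = np.zeros((5, 2))
losses = []
for _ in range(50):                       # update until the gap shrinks
    W, loss = train_step(W, feats, labels)
    losses.append(loss)
```

The stopping rule in the text ("difference within an allowable range") corresponds to halting this loop once the loss falls below a threshold.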
In this embodiment, the sample feature image and its corresponding sample defect region image are acquired, and the sample feature image is input into the defect recognition model to be trained; the model can then be trained using the predicted defect region image it outputs and the sample defect region image. This provides a data processing model tailored to processing the feature image of the industrial instrument to be identified and obtaining the defect category of its defect region image, improving the detection efficiency for industrial instruments.
In some embodiments, the image fusion processing on the input network features to obtain the predicted defect area image includes:
acquiring initial characteristics of a plurality of levels corresponding to input network characteristics;
obtaining at least one fusion feature of each level based on the initial feature of each level;
and obtaining a predicted defect area image by using the fusion feature corresponding to the last level in the multiple levels.
The initial features of the plurality of levels can be obtained from the input network features; for example, the initial feature of each level may be determined according to a size ratio between the input network features and that level's initial feature, with the input network features located at the first level, that is, the initial feature of the first level is the input network features. The fusion features of each level are network features obtained by fusing that level's initial features. It should be noted that both initial features and fusion features are network features. The last level is the level farthest from the first: if there are more than two levels, at least one level lies between the first and the last; if there are exactly two levels, the last level is the second. It can be understood that if the first level is the topmost layer, the last level is the bottommost layer.
In this embodiment, initial features of a plurality of levels are obtained from the input network features, the initial features are fused across levels, and the predicted defect region image is obtained from the fusion feature of the last level. This enables training of the defect recognition model and can be applied to the feature image obtained after data enhancement processing, accelerating the detection of industrial instruments, improving the detection efficiency for defective products on a production line, and thus improving the production efficiency of the line.
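The multi-level initial features can be illustrated by a simple feature pyramid in which level 1 is the input network feature and each later level halves the spatial size; the 2x2 mean pooling and the size ratios are assumptions for illustration:

```python
import numpy as np

def build_pyramid(x, n_levels=3):
    """Initial features at multiple levels: level 1 is the input network
    feature itself; each subsequent level halves the spatial size via
    2x2 mean pooling (an assumed size ratio, for illustration only)."""
    levels = [x]
    for _ in range(n_levels - 1):
        h, w, c = levels[-1].shape
        p = levels[-1][: h // 2 * 2, : w // 2 * 2]      # crop to even size
        p = p.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
        levels.append(p)
    return levels

pyr = build_pyramid(np.ones((8, 8, 4)))   # shapes: (8,8,4), (4,4,4), (2,2,4)
```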
In some embodiments, deriving at least one fused feature for each level based on the initial features for each level comprises:
acquiring a current level and current fusion characteristics corresponding to the current level;
if the current level is not the last of the plurality of levels, determining the feature order of the current fusion feature among all fusion features included in the current level;
if the feature order indicates that the current fusion feature is the first of all fusion features included in the current level, obtaining the current fusion feature based on the initial feature of the current level, the network feature of the previous level, and the initial feature of the next level; the network feature of the previous level is either the initial feature of the previous level or any fusion feature of the previous level whose feature order is less than or equal to that of the current fusion feature; or,
if the feature order indicates that the current fusion feature is not the first of all fusion features included in the current level, obtaining the current fusion feature based on the target fusion feature of the current level, the network feature of the previous level, and the target fusion feature of the next level; the feature orders of the target fusion feature of the current level and of the target fusion feature of the next level are both one before the feature order of the current fusion feature.
The current level can be any level except the first; every level except the first includes an initial feature and at least one fusion feature, and the current fusion feature can be any fusion feature of the current level. The feature order is the position of a fusion feature among the fusion features of its level; if a level has only one fusion feature, its order is 1. Feature orders correspond across levels: for example, the second fusion feature of the second level and the second fusion feature of the third level both have feature order 2. A network feature may be an initial feature or a fusion feature. The target fusion feature is the specific fusion feature from which the current fusion feature is to be obtained. The network feature of the previous level may be the initial feature of the previous level, or any fusion feature of the previous level whose feature order is less than or equal to that of the current fusion feature; for example, if the feature order of the current fusion feature is 3 (that is, it is the 3rd fusion feature), the network feature of the previous level may be the previous level's initial feature, its 2nd fusion feature, or its 3rd fusion feature. It is to be understood that the previous level is relative to the current level.
Specifically, any one hierarchy is obtained as a current hierarchy, and any one fusion feature in the current hierarchy is obtained as a current fusion feature.
If the current fusion feature is the first fusion feature, the current fusion feature is obtained based on the initial feature of the current level, the network feature of the previous level, and the initial feature of the next level.
If the current fusion feature is not the first fusion feature, the fusion feature immediately before it is taken as the target fusion feature of the current level, and the corresponding fusion feature of the next level as the target fusion feature of the next level; for example, if the feature order of the current fusion feature is 3 (that is, it is the 3rd fusion feature), the target fusion feature of the current level is the current level's 2nd fusion feature and the target fusion feature of the next level is the next level's 2nd fusion feature. The current fusion feature is then obtained based on the target fusion feature of the current level, the network feature of the previous level, and the target fusion feature of the next level.
For example, if the current fusion feature is the first fusion feature, it is obtained from the initial feature (or first fusion feature) of the previous level, the initial feature of the current level, and the initial feature of the next level; if it is not the first, it is obtained from its immediately preceding fusion feature, the fusion feature of the previous level with the same feature order, and the fusion feature of the next level with the preceding feature order.
In this embodiment, the fusion feature of the current level is obtained from the network features of the current level, the previous level, and the next level. This allows further enhancement of the feature image after data enhancement processing and enables training of the defect recognition model, improving the processing efficiency of the processed feature image and the detection efficiency of the industrial instrument.
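The wiring rule spelled out above can be captured as pure bookkeeping. The sketch below lists which network features feed fused feature k of a given level under a simplified reading of the text; it records connectivity only, not the actual fusion operations:

```python
def fusion_inputs(level, k, n_levels):
    """Inputs to fusion feature k (1-indexed) at `level`, per the rule
    above: the first fusion feature combines the level's initial feature,
    the previous level's network feature, and the next level's initial
    feature; later fusion features combine fusion feature k-1 of the
    current and next levels with the previous level's network feature.
    The last level has no next-level input. A simplified sketch of the
    wiring only, not the patent's exact network."""
    inputs = []
    if k == 1:
        inputs.append(("initial", level))
        if level < n_levels:
            inputs.append(("initial", level + 1))
    else:
        inputs.append(("fused", level, k - 1))
        if level < n_levels:
            inputs.append(("fused", level + 1, k - 1))
    if level > 1:
        inputs.append(("network", level - 1))
    return inputs

first = fusion_inputs(2, 1, 3)   # first fusion feature of a middle level
later = fusion_inputs(3, 2, 3)   # second fusion feature of the last level
```

Note how the last level's case (no next-level input) matches the separate branch the text describes for the last level.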
In some embodiments, after obtaining the current hierarchy and the current fusion feature corresponding to the current hierarchy, the method further includes:
if the current level is the last level in the multiple levels, determining the feature order of the current fusion feature in the multiple fusion features included in the last level;
if the feature order represents that the current fusion feature is the first of the multiple fusion features included in the last level, the current fusion feature is obtained based on the initial feature of the last level and the network feature of the previous level of the last level; the network feature of the previous level is an initial feature of the previous level or any one of its fusion features whose feature order is less than or equal to the feature order of the current fusion feature; or,
if the feature order represents that the current fusion feature is not the first of the multiple fusion features included in the last level, the current fusion feature is obtained based on the target fusion feature of the last level and the network feature of the previous level of the last level; wherein the feature order of the target fusion feature of the last level is one before the feature order of the current fusion feature.
For example, if the feature order of the current fused feature is 3 (i.e., the current fused feature is the 3 rd fused feature), the target fused feature of the last level is the 2 nd fused feature of the last level.
In this embodiment, if the current level is the last level, its fusion feature is obtained by fusing the network feature of the last level with the network feature of the previous level. In this way, the network features of every level are fused to produce the last fusion feature of the last level, from which the predicted defect region image can be obtained. This enables training of the defect recognition model, improves the processing efficiency of the processed feature image, and improves the detection efficiency for industrial instruments.
In some specific embodiments, as shown in fig. 4, the network features are A11, A21-A22, A31-A33, A41-A44, and A51-A55, and the initial features of the levels corresponding to the input network features are A11, A21, A31, A41, and A51, respectively. If the current fusion feature is A22, feature extraction and fusion can be performed based on A11, A21, and A31 to obtain A22; if the current fusion feature is A43, feature extraction and fusion can be performed based on A42, A33 (or A32, or A31), and A52 to obtain A43; if the current fusion feature is A53, feature extraction and fusion can be performed based on A52 and A43 (or A42, or A41) to obtain A53. The feature size of each level can be preset; for example, the size ratio of the first level to the second level can be 2 and that of the first level to the third level can be 4, so that the size ratio of any two adjacent levels is the same.
In this embodiment, the fusion features are obtained through fusion processing of the network features, so that training of the defect recognition model can be realized, and further, the processing efficiency of the processed feature image can be improved, thereby improving the detection efficiency of the industrial instrument.
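The dependency pattern of fig. 4 can be sketched as a small helper that, given a level and a position in the A-naming above, lists which features are fused to produce it. This is a minimal sketch: the function name is illustrative, and it always picks the nearest admissible previous-level feature (e.g. A33 for A43, although A32 or A31 would also be allowed).

```python
def fusion_inputs(level, pos, num_levels=5):
    """Return the feature names fused to produce A{level}{pos} (pos >= 2).

    Naming mirrors fig. 4: A{l}1 is the initial feature of level l;
    A{l}{m} for m >= 2 is its (m-1)-th fused feature.
    """
    inputs = []
    if pos == 2:
        # First fused feature of a level: fuse the initial features of the
        # previous, current and next levels.
        if level > 1:
            inputs.append(f"A{level - 1}1")
        inputs.append(f"A{level}1")
        if level < num_levels:
            inputs.append(f"A{level + 1}1")
    else:
        # Later fused features: same position at the previous level,
        # preceding position at the current and next levels.
        if level > 1:
            inputs.append(f"A{level - 1}{pos}")
        inputs.append(f"A{level}{pos - 1}")
        if level < num_levels:
            inputs.append(f"A{level + 1}{pos - 1}")
    return inputs
```

Checking against the examples in the text: A22 is produced from A11, A21 and A31; A43 from A42, A33 and A52; A53 (last level, so no next-level input) from A52 and A43.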
In some embodiments, obtaining the predicted defect region image by using the fused feature corresponding to the last level in the plurality of levels includes:
taking the last fusion feature in the plurality of fusion features included in the last hierarchy as a final fusion feature;
and determining and predicting the image of the defect area according to the final fusion characteristics.
Specifically, if the sizes of the levels are proportional, for example, the size of the first level is 16 times the size of the last level, the final fused feature can be processed to obtain a predicted defect region image of the same size as the first level.
In this embodiment, the last fusion feature in the last level is the network feature with the highest fusion degree, and the predicted defect region image is obtained according to the network feature with the highest fusion degree, so that training of a defect recognition model can be realized, the accuracy of predicting the defect region image can be improved, and the detection efficiency of an industrial instrument can be improved.
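If, as in the example above, the first level is 16 times the size of the last level, producing a prediction at first-level size amounts to upsampling the final fused feature by that factor. A minimal nearest-neighbour sketch follows; the interpolation method is an assumption, since the text does not specify one.

```python
def upsample_nearest(feature, scale):
    """Nearest-neighbour upsampling of a 2-D feature map (list of rows)
    by an integer scale factor."""
    return [[feature[i // scale][j // scale]
             for j in range(len(feature[0]) * scale)]
            for i in range(len(feature) * scale)]
```

For a final fused feature of size H x W and a level-size ratio of 16, `upsample_nearest(feature, 16)` yields a 16H x 16W map matching the first level.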
In some embodiments, training a defect recognition model to be trained by using a predicted defect region image and a sample defect region image to obtain a trained defect recognition model, includes:
obtaining a prediction defect area image corresponding to each hierarchy by using the fusion characteristics of each hierarchy;
determining a loss value corresponding to each hierarchy according to the loss function of each hierarchy, the predicted defect area image corresponding to each hierarchy and the sample defect area image;
and training the defect recognition model to be trained based on the loss values corresponding to the levels to obtain the trained defect recognition model.
Specifically, the predicted defect region image corresponding to each level can be obtained from the last fusion feature of that level. The defect recognition model to be trained is then trained based on the loss values corresponding to the levels to obtain the trained defect recognition model. In this way, the fusion features of each level are constrained, preventing the network features of each level from being excessively fused.
In the embodiment, the predicted defect area image corresponding to each level is obtained through the fusion characteristics of each level, and the defect identification model to be trained is trained through the predicted defect area image of each level, so that the detection accuracy of the industrial instrument can be improved.
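The per-level supervision described above can be sketched as follows, with pixel-wise binary cross-entropy standing in for the per-level loss function, which the text leaves unspecified.

```python
import math

def pixel_bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy between a predicted defect mask and the
    sample defect mask (both 2-D lists of rows)."""
    total, n = 0.0, 0
    for p_row, t_row in zip(pred, target):
        for p, t in zip(p_row, t_row):
            p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
            total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
            n += 1
    return total / n

def multi_level_loss(level_preds, level_targets):
    """Sum the per-level losses, so every level's fusion features are
    constrained during training."""
    return sum(pixel_bce(p, t) for p, t in zip(level_preds, level_targets))
```

In practice the sample defect region image would be resized to each level's resolution before computing that level's loss.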
In some embodiments, the initial features of each level include sub-features corresponding to a plurality of network channels;
as shown in fig. 5, obtaining at least one fused feature of each level based on the initial features of each level includes:
step S510, acquiring a weight coefficient corresponding to each network channel;
step S520, channel enhancement processing is carried out on the sub-features corresponding to the network channels based on the weight coefficients, and processed sub-features corresponding to the network channels are obtained;
step S530, obtaining processed initial characteristics by using the processed sub-characteristics corresponding to each network channel;
and step S540, obtaining at least one fusion feature of each hierarchy by using the processed initial features of each hierarchy.
The initial features comprise sub-features corresponding to a plurality of network channels. The weighting coefficients are used to characterize the importance of the individual network channels.
Specifically, a weight coefficient for representing the importance degree of each network channel is obtained; based on the weight coefficient, channel enhancement processing can be performed on the sub-features corresponding to each network channel to obtain the processed sub-features corresponding to each network channel. Therefore, when the defect identification model classifies points in the characteristic image, sub-characteristics in each network channel can be effectively distinguished, and the detection accuracy of the industrial instrument can be improved. Obtaining processed initial characteristics by utilizing the processed sub-characteristics corresponding to each network channel; at least one fused feature for each level may be derived using the processed initial features for each level. Further, enhancement processing may be performed for each feature to be fused using the above method.
In this embodiment, channel enhancement processing is performed on the sub-features corresponding to the multiple channels of the initial feature, and fusion processing is then performed on the processed initial features to obtain the processed fusion feature, thereby improving the accuracy of industrial instrument detection.
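A minimal sketch of the channel enhancement step follows. Representing a feature as a list of per-channel 2-D sub-features, and the function name, are illustrative assumptions; the operation itself is the per-channel weighting described above.

```python
def channel_enhance(sub_features, weights):
    """Scale each network channel's sub-feature by its importance weight.

    sub_features: one 2-D sub-feature (list of rows) per network channel.
    weights: one weight coefficient per network channel.
    """
    assert len(sub_features) == len(weights)
    return [[[value * w for value in row] for row in channel]
            for channel, w in zip(sub_features, weights)]
```

The processed sub-features are then reassembled into the processed initial feature before fusion.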
In some embodiments, determining a feature image corresponding to the point cloud image according to the feature information of each pixel point includes:
converting the initial position information of each pixel point to obtain conversion position information corresponding to each pixel point;
and determining a characteristic image corresponding to the point cloud image according to the depth information, the reflection intensity and the preset point cloud characteristics of each pixel point and the conversion position information corresponding to each pixel point.
The conversion position information refers to position information obtained by converting the initial position information.
Specifically, the initial position information of each pixel point may be converted to obtain conversion position information corresponding to each converted pixel point. For example, the coordinates x and y may be converted to obtain converted x1 and y1, and x1 and y1 may be used as the conversion position information of the corresponding pixel. Based on the conversion position information corresponding to each pixel point, the depth information, the reflection intensity and the preset point cloud characteristics of the pixel point are bound with the conversion position information to form a reconstructed characteristic image, and the characteristic image can comprise the depth information, the reflection intensity and the preset point cloud characteristics. Therefore, after the initial position information of each pixel point is converted, the point cloud image of the three-dimensional data can be reconstructed based on the converted conversion position, and the reconstructed feature image is obtained.
In this embodiment, the initial position information of each pixel point is converted to obtain the coordinates of the reconstructed feature image, and the depth information, the reflection intensity, the preset point cloud feature of each pixel point and the coordinates of the reconstructed feature image are bound, so that the defect detection efficiency for the industrial instrument is improved.
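The binding step can be sketched as scattering each pixel's values into a multi-channel image at its converted position. All names are illustrative, and the assumption that the preset point cloud feature occupies a single channel is a simplification.

```python
def reconstruct_feature_image(points, height, width):
    """Scatter (i, j, depth, intensity, cloud_feature) records into a
    3 x height x width feature image, where (i, j) is each pixel's
    converted position."""
    image = [[[0.0] * width for _ in range(height)] for _ in range(3)]
    for i, j, depth, intensity, cloud_feature in points:
        image[0][i][j] = depth
        image[1][i][j] = intensity
        image[2][i][j] = cloud_feature
    return image
```

Grid cells that no point maps to are left at zero in this sketch; how the patent handles such holes is not stated.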
In some embodiments, the initial location information includes first location information and second location information.
As shown in fig. 6, converting the initial position information of each pixel point to obtain the conversion position information corresponding to each pixel point includes:
step S610, acquiring a first physical precision and a second physical precision of the camera;
step S620, converting the first position information according to the first physical precision to obtain first conversion position information;
step S630, converting the second position information according to the second physical precision to obtain second conversion position information;
in step S640, conversion position information is obtained according to the first conversion position information and the second conversion position information.
The first physical precision refers to the scanning precision among all the columns in the process of acquiring the point cloud image by the camera, and the second physical precision refers to the scanning precision among all the rows in the process of acquiring the point cloud image by the camera; the first position information may be a position (x) of a column where the pixel is located, and the second position information may be a position (y) of a row where the pixel is located.
For example, the first physical precision may be x_r and the second physical precision may be y_r, and the conversion is performed according to the following expressions:

index_i = x / x_r, index_j = y / y_r

where index_i is the first conversion position information, index_j is the second conversion position information, and x and y are the first position information and the second position information, respectively.
In this embodiment, the point cloud image can be reconstructed by converting the position information, so that the detection efficiency of the industrial instrument can be improved.
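The conversion above can be sketched as a small helper; rounding down to integer grid indices is an assumption, since the text does not state how fractional results are handled.

```python
def to_grid_index(x, y, x_r, y_r):
    """Convert a pixel's physical position (x, y) to grid indices using
    the per-column precision x_r and per-row precision y_r."""
    return int(x / x_r), int(y / y_r)
```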
In some embodiments, the method further includes labeling the sample feature image, inputting the labeled sample feature image into a defect recognition model to be trained, processing the depth information and the point cloud features of the sample feature image through the defect recognition model to be trained to obtain input network features, and performing fusion processing on the input network features to obtain a predicted defect area image; and training the defect identification model to be trained by utilizing the predicted defect area image and the sample defect area image to obtain a defect identification model trained in advance.
In the embodiment, the accuracy of the defect recognition model training can be improved by labeling the sample characteristic image.
In some embodiments, as shown in fig. 7, features are extracted from the input sample feature image, and the feature-extracted sample feature image is subjected to multiple data enhancement methods, such as random cropping and random scaling, to expand the diversity of the training samples; the data are then normalized to the same distribution. It should be noted that in this application the information of the different channels is not usually identically distributed, so each input channel needs to be normalized separately. The data are normalized as follows:
The input sample feature image is converted by the formula

f_i(x) = (F_i(x) - mean_i) / variance_i

where F_i(x) is the original data of the i-th channel, mean_i is the mean of the i-th channel feature over all training samples, variance_i is the variance of the i-th channel feature over all training samples, and f_i(x) is the converted feature of the i-th channel input to the model.
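A per-channel normalisation of this form can be sketched as follows, assuming, per the definitions above, division by the channel variance rather than the standard deviation.

```python
def normalize_channel(values, channel_mean, channel_variance):
    """Normalise one channel: f_i(x) = (F_i(x) - mean_i) / variance_i,
    applied element-wise to that channel's values."""
    return [(v - channel_mean) / channel_variance for v in values]
```

Each input channel is normalised separately with its own mean and variance, as the channels are not identically distributed.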
The processed sample feature image is input into the defect identification model to be trained, and the model outputs a predicted defect region image. According to the loss function of each level, the loss value of each level is determined using the predicted defect region image and the sample defect region image corresponding to that level, and the model is trained based on these loss values.
in the embodiment, the dimensionality of the training sample is expanded by carrying out various data enhancement methods such as random cutting and random scaling on the sample characteristic image, so that the accuracy of defect recognition model training can be improved.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least a part of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides a defect identification device. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme recorded in the method, so the specific limitations in the embodiment of the defect identification device provided below can be referred to the limitations on the defect identification method in the above, and are not described again here.
In some embodiments, as shown in fig. 8, there is provided a defect identifying apparatus including:
the acquisition module 810 is used for acquiring a point cloud image corresponding to the industrial instrument to be identified and characteristic information of each pixel point in the point cloud image; the characteristic information comprises initial position information, depth information, reflecting intensity and preset point cloud characteristics.
And a determining module 820, configured to determine a feature image corresponding to the point cloud image according to the feature information of each pixel point.
The processing module 830 is configured to perform data enhancement processing on the feature image to obtain a processed feature image.
The recognition module 840 is configured to input the processed feature image into the trained defect recognition model for processing, and output a defect region image included in the processed feature image and a defect category corresponding to the defect region image.
In some embodiments, the apparatus further comprises:
the sample acquisition module is used for acquiring a sample point cloud image corresponding to a sample industrial instrument, a sample characteristic image corresponding to the sample point cloud image and a sample defect area image corresponding to the sample characteristic image; the sample characteristic image is obtained by carrying out data reconstruction processing and data enhancement processing on the sample point cloud image;
the predicted image acquisition module is used for inputting the sample characteristic image into a defect identification model to be trained and performing data enhancement processing on the sample characteristic image to obtain input network characteristics; carrying out image fusion processing on the input network characteristics to obtain a predicted defect area image;
and the model training module is used for training the defect identification model to be trained by utilizing the predicted defect area image and the sample defect area image to obtain the trained defect identification model.
In some embodiments, the predicted image acquisition module comprises an initial feature acquisition unit, a fused feature acquisition unit, and a predicted defective region image unit, wherein:
an initial feature obtaining unit, configured to obtain initial features of multiple hierarchies corresponding to an input network feature;
the fusion feature acquisition unit is used for acquiring at least one fusion feature of each hierarchy based on the initial feature of each hierarchy;
and the predicted defect area image unit is used for obtaining a predicted defect area image by utilizing the fusion feature corresponding to the last hierarchy in the plurality of hierarchies.
In some embodiments, the fused feature acquisition unit comprises a current hierarchical unit, a feature order unit, and a fused feature unit, wherein:
the current level unit is used for acquiring a current level and current fusion characteristics corresponding to the current level;
a feature order unit, configured to determine, if the current level is not the last level of the multiple levels, a feature order of the current fused feature in all fused features included in the current level;
the fusion feature unit is used for obtaining the current fusion feature based on the initial feature of the current level, the network feature of the previous level of the current level and the initial feature of the next level of the current level if the feature sequence represents that the current fusion feature is the first of all fusion features included in the current level; the network feature of the previous level of the current level is an initial feature of the previous level of the current level or any one of fusion features with a feature sequence less than or equal to that of the current fusion feature; or if the feature sequence represents that the current fusion feature is not the first of all fusion features included in the current level, obtaining the current fusion feature based on the target fusion feature of the current level, the network feature of the previous level of the current level and the target fusion feature of the next level of the current level; and the feature sequence of the target fusion feature of the current level and the feature sequence of the target fusion feature of the next level of the current level are both the previous one of the feature sequences of the current fusion feature.
In some embodiments, the fused feature acquisition unit further comprises a last level unit, wherein:
a last level unit, configured to determine, if the current level is a last level of the multiple levels, a feature order of the current fused feature in the multiple fused features included in the last level;
the fusion feature unit is further configured to: if the feature order represents that the current fusion feature is the first of the multiple fusion features included in the last level, obtain the current fusion feature based on the initial feature of the last level and the network feature of the previous level of the last level, where the network feature of the previous level is an initial feature of the previous level or any one of its fusion features whose feature order is less than or equal to the feature order of the current fusion feature; or, if the feature order represents that the current fusion feature is not the first of the multiple fusion features of the last level, obtain the current fusion feature based on the target fusion feature of the last level and the network feature of the previous level of the last level, where the feature order of the target fusion feature of the last level is one before the feature order of the current fusion feature.
In some embodiments, the predicted defective region image unit includes a target fusion feature unit and an image acquisition unit, wherein:
a target fused feature unit configured to take a last fused feature of the plurality of fused features included in the last hierarchy as a final fused feature;
and the image acquisition unit is used for obtaining a predicted defect area image according to the final fusion characteristics.
In some embodiments, the model training module comprises a hierarchical image unit, a loss value unit, and a model training unit, wherein:
the hierarchy image unit is used for obtaining a prediction defect area image corresponding to each hierarchy by using the fusion characteristics of each hierarchy;
the loss value unit is used for determining the loss value corresponding to each hierarchy according to the loss function of each hierarchy, the prediction defect area image corresponding to each hierarchy and the sample defect area image;
and the model training unit is used for training the defect identification model to be trained based on the loss values corresponding to the levels to obtain the trained defect identification model.
In some embodiments, the initial features of each level include sub-features corresponding to a plurality of network channels; the fusion feature obtaining unit further comprises a weight coefficient obtaining unit, an enhancement processing unit, an enhancement feature obtaining unit and an enhancement fusion unit, wherein:
the weight coefficient acquisition unit is used for acquiring the weight coefficient corresponding to each network channel;
the enhancement processing unit is used for carrying out channel enhancement processing on the sub-features corresponding to the network channels based on the weight coefficients to obtain the processed sub-features corresponding to the network channels;
the enhanced feature acquisition unit is used for acquiring processed initial features by utilizing the processed sub-features corresponding to the network channels;
and the enhanced fusion unit is used for obtaining at least one fusion feature of each hierarchy by using the processed initial features of each hierarchy.
In some embodiments, the determination module comprises a position information conversion unit and an image determination unit, wherein:
the position information conversion unit is used for converting the initial position information of each pixel point to obtain the conversion position information corresponding to each pixel point;
and the image determining unit is used for determining a characteristic image corresponding to the point cloud image according to the depth information, the reflection intensity and the preset point cloud characteristics of each pixel point and the conversion position information corresponding to each pixel point.
The modules in the defect identifying device can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In some embodiments, an electronic device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 9. The electronic device comprises a processor, a memory, an Input/Output (I/O) interface, a communication interface, a display unit and an Input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic equipment comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The input/output interface of the electronic device is used for exchanging information between the processor and an external device. The communication interface of the electronic device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement the steps of the defect identification method described above. The display unit of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the electronic equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the configuration shown in fig. 9 is a block diagram of only a portion of the configuration relevant to the present application, and does not constitute a limitation on the electronic device to which the present application is applied, and a particular electronic device may include more or less components than those shown in the drawings, or combine certain components, or have a different arrangement of components.
In some embodiments, there is further provided an electronic device comprising a memory and a processor, the memory having a computer program stored therein, the processor when executing the computer program being configured to implement the steps in the defect identification method embodiments described above.
In some embodiments, as shown in fig. 10, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of the above-mentioned defect identification method embodiments.
In some embodiments, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of the above-described defect identification method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, the computer program can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high-density embedded nonvolatile Memory, resistive Random Access Memory (ReRAM), Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others. The databases involved in the embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing based data processing logic devices, etc., without limitation.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (12)

1. A method of defect identification, comprising:
acquiring a point cloud image corresponding to an industrial instrument to be identified and characteristic information of each pixel point in the point cloud image; the characteristic information comprises initial position information, depth information, reflecting intensity and preset point cloud characteristics;
determining a characteristic image corresponding to the point cloud image according to the characteristic information of each pixel point;
performing data enhancement processing on the characteristic image to obtain a processed characteristic image;
inputting the processed characteristic image into a trained defect recognition model for processing, and outputting a defect area image contained in the processed characteristic image and a defect type corresponding to the defect area image.
2. The method according to claim 1, wherein before inputting the processed feature image into the trained defect recognition model and outputting the defect region image contained in the processed feature image and the defect type corresponding to the defect region image, the method further comprises:
acquiring a sample point cloud image corresponding to a sample industrial instrument, a sample feature image corresponding to the sample point cloud image, and a sample defect region image corresponding to the sample feature image, the sample feature image being obtained by performing data reconstruction processing and data enhancement processing on the sample point cloud image;
inputting the sample feature image into a defect recognition model to be trained, and performing data enhancement processing on the sample feature image to obtain input network features; performing image fusion processing on the input network features to obtain a predicted defect region image; and
training the defect recognition model to be trained by using the predicted defect region image and the sample defect region image to obtain the trained defect recognition model.
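A minimal sketch of one iteration of the training loop in claim 2, with a one-layer per-pixel linear model and plain MSE gradient descent standing in for the real network and optimizer (both stand-ins are assumptions, not the claimed model):

```python
import numpy as np

class TinyModel:
    """Stand-in defect-recognition model: one per-pixel linear layer."""
    def __init__(self, channels, rng):
        self.w = rng.standard_normal(channels) * 0.01

    def predict(self, feat):          # feat: H x W x C
        return feat @ self.w          # predicted defect map, H x W

def train_step(model, feat, target, lr=0.1):
    """Predict a defect-region image from the sample feature image and
    update the model from the prediction/sample mismatch."""
    pred = model.predict(feat)
    err = pred - target                               # H x W residual
    grad = np.einsum('hwc,hw->c', feat, err) / err.size
    model.w -= lr * 2 * grad                          # MSE gradient step
    return float(np.mean(err ** 2))
```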
3. The method according to claim 2, wherein performing image fusion processing on the input network features to obtain the predicted defect region image comprises:
acquiring initial features of a plurality of levels corresponding to the input network features;
obtaining at least one fused feature of each level based on the initial feature of each level; and
obtaining the predicted defect region image by using the fused feature corresponding to the last level among the plurality of levels.
4. The method according to claim 3, wherein obtaining at least one fused feature of each level based on the initial feature of each level comprises:
acquiring a current level and a current fused feature corresponding to the current level;
if the current level is not the last level among the plurality of levels, determining a feature order of the current fused feature among all fused features included in the current level;
if the feature order indicates that the current fused feature is the first of all fused features included in the current level, obtaining the current fused feature based on the initial feature of the current level, a network feature of the previous level of the current level, and the initial feature of the next level of the current level, wherein the network feature of the previous level of the current level is the initial feature of the previous level or any fused feature of the previous level whose feature order is less than or equal to the feature order of the current fused feature; or,
if the feature order indicates that the current fused feature is not the first of all fused features included in the current level, obtaining the current fused feature based on a target fused feature of the current level, a network feature of the previous level of the current level, and a target fused feature of the next level of the current level, wherein the feature orders of the target fused feature of the current level and of the target fused feature of the next level of the current level are one before the feature order of the current fused feature.
5. The method according to claim 4, wherein after acquiring the current level and the current fused feature corresponding to the current level, the method further comprises:
if the current level is the last level among the plurality of levels, determining the feature order of the current fused feature among the plurality of fused features included in the last level;
if the feature order indicates that the current fused feature is the first of the fused features included in the last level, obtaining the current fused feature based on the initial feature of the last level and a network feature of the previous level of the last level, wherein the network feature of the previous level is the initial feature of the previous level or any fused feature of the previous level whose feature order is less than or equal to the feature order of the current fused feature; or,
if the feature order indicates that the current fused feature is not the first of the fused features included in the last level, obtaining the current fused feature based on a target fused feature of the last level and a network feature of the previous level of the last level, wherein the feature order of the target fused feature of the last level is one before the feature order of the current fused feature.
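The fusion order recited in claims 4 and 5 amounts to repeated rounds of neighbor-level fusion, with the last level drawing only on itself and the level above it. A minimal sketch, assuming same-shape per-level features and element-wise mean fusion (both assumptions; the actual fusion op and tensor shapes are not specified by the claims):

```python
import numpy as np

def fuse(feats):
    """Placeholder fusion op: mean of the available same-shape features."""
    return np.mean([f for f in feats if f is not None], axis=0)

def cascade_fusion(initial, rounds=2):
    """`initial` is a list of per-level feature maps; returns one list of
    fused features per round, following the recited order."""
    n = len(initial)
    current = list(initial)      # round 0 starts from the initial features
    history = []
    for _ in range(rounds):
        nxt = []
        for lvl in range(n):
            prev_feat = current[lvl - 1] if lvl > 0 else None
            if lvl < n - 1:
                # Non-last level: own feature, previous level, next level.
                nxt.append(fuse([current[lvl], prev_feat, current[lvl + 1]]))
            else:
                # Last level: own feature and the previous level only.
                nxt.append(fuse([current[lvl], prev_feat]))
        history.append(nxt)
        current = nxt            # later rounds fuse the prior round's output
    return history
```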
6. The method according to claim 5, wherein obtaining the predicted defect region image by using the fused feature corresponding to the last level among the plurality of levels comprises:
taking the last fused feature among the plurality of fused features included in the last level as a final fused feature; and
determining the predicted defect region image according to the final fused feature.
7. The method according to any one of claims 3 to 6, wherein training the defect recognition model to be trained by using the predicted defect region image and the sample defect region image to obtain the trained defect recognition model comprises:
obtaining a predicted defect region image corresponding to each level by using the fused features of each level;
determining a loss value corresponding to each level according to the loss function of each level, the predicted defect region image corresponding to that level, and the sample defect region image; and
training the defect recognition model to be trained based on the loss values corresponding to the levels to obtain the trained defect recognition model.
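The per-level supervision in claim 7 reduces to summing one loss per level against the same sample defect region image. A sketch, with mean squared error and uniform weights standing in for whatever per-level loss functions are actually used (both assumptions):

```python
import numpy as np

def level_loss(pred, target):
    """Per-level loss; plain MSE stands in for the real loss function."""
    return float(np.mean((pred - target) ** 2))

def deep_supervision_loss(preds_per_level, target, weights=None):
    """Sum the per-level losses between each level's predicted defect
    region image and the sample defect region image."""
    weights = weights or [1.0] * len(preds_per_level)
    return sum(w * level_loss(p, target)
               for w, p in zip(weights, preds_per_level))
```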
8. The method according to claim 3, wherein the initial feature of each level comprises sub-features corresponding to a plurality of network channels, and obtaining at least one fused feature of each level based on the initial feature of each level comprises:
acquiring a weight coefficient corresponding to each network channel;
performing channel enhancement processing on the sub-feature corresponding to each network channel based on the weight coefficient, to obtain a processed sub-feature corresponding to each network channel;
obtaining a processed initial feature by using the processed sub-features corresponding to the network channels; and
obtaining at least one fused feature of each level by using the processed initial feature of each level.
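The channel enhancement in claim 8 is per-channel scaling of an H x W x C feature map. A sketch that takes the weight coefficients as given (how they are obtained, e.g. from a squeeze-and-excitation style branch, is not specified by the claim and is left out):

```python
import numpy as np

def channel_enhance(feature, weights):
    """Scale each network channel's sub-feature by its weight coefficient.
    `feature` is H x W x C; `weights` has length C and broadcasts over
    the spatial dimensions."""
    return feature * np.asarray(weights, dtype=feature.dtype)
```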
9. The method according to any one of claims 2 to 6, wherein determining the feature image corresponding to the point cloud image according to the feature information of each pixel point comprises:
converting the initial position information of each pixel point to obtain converted position information corresponding to each pixel point; and
determining the feature image corresponding to the point cloud image according to the depth information, the reflection intensity, and the preset point cloud features of each pixel point, together with the converted position information corresponding to each pixel point.
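The position conversion in claim 9 is not spelled out; one common choice is an affine shift-and-scale that maps raw coordinates into a normalized range before rasterization. A sketch under that assumption:

```python
import numpy as np

def convert_positions(xyz, origin=None, scale=None):
    """Convert each point's initial position into converted position
    information by shifting and scaling into [0, 1] per axis.
    The affine form is an illustrative assumption."""
    origin = xyz.min(axis=0) if origin is None else origin
    span = xyz.max(axis=0) - origin
    scale = np.where(span > 0, span, 1.0) if scale is None else scale
    return (xyz - origin) / scale
```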
10. A defect identification apparatus, comprising:
an acquisition module configured to acquire a point cloud image corresponding to an industrial instrument to be identified and feature information of each pixel point in the point cloud image, the feature information comprising initial position information, depth information, reflection intensity, and preset point cloud features;
a determination module configured to determine a feature image corresponding to the point cloud image according to the feature information of each pixel point;
a processing module configured to perform data enhancement processing on the feature image to obtain a processed feature image; and
a recognition module configured to input the processed feature image into a trained defect recognition model for processing and to output a defect region image contained in the processed feature image and a defect type corresponding to the defect region image.
11. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 9.
CN202210812741.4A 2022-07-12 2022-07-12 Defect identification method and device, electronic equipment and computer readable storage medium Active CN114898357B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210812741.4A CN114898357B (en) 2022-07-12 2022-07-12 Defect identification method and device, electronic equipment and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN114898357A true CN114898357A (en) 2022-08-12
CN114898357B CN114898357B (en) 2022-10-18

Family

ID=82730131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210812741.4A Active CN114898357B (en) 2022-07-12 2022-07-12 Defect identification method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114898357B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109523501A (en) * 2018-04-28 2019-03-26 江苏理工学院 One kind being based on dimensionality reduction and the matched battery open defect detection method of point cloud data
CN109932394A (en) * 2019-03-15 2019-06-25 山东省科学院海洋仪器仪表研究所 The infrared thermal wave binocular stereo imaging detection system and method for turbo blade defect
CN112701060A (en) * 2021-03-24 2021-04-23 惠州高视科技有限公司 Method and device for detecting bonding wire of semiconductor chip
CN113554643A (en) * 2021-08-13 2021-10-26 上海高德威智能交通***有限公司 Target detection method and device, electronic equipment and storage medium
CN113885558A (en) * 2021-09-27 2022-01-04 湖南德森九创科技有限公司 Dam surface image unmanned aerial vehicle automatic safety acquisition method and system
CN114359228A (en) * 2022-01-06 2022-04-15 深圳思谋信息科技有限公司 Object surface defect detection method and device, computer equipment and storage medium
CN114692720A (en) * 2022-02-25 2022-07-01 广州文远知行科技有限公司 Image classification method, device, equipment and storage medium based on aerial view
CN114708391A (en) * 2022-06-06 2022-07-05 深圳思谋信息科技有限公司 Three-dimensional modeling method, three-dimensional modeling device, computer equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MINGLIANG ZHOU et al.: "A Novel Approach to Automated 3D Spalling Defects Inspection in Railway Tunnel Linings Using Laser Intensity and Depth Information", Sensors *
***: "Three-dimensional surface defect detection of parts based on line laser scanning", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115965856A (en) * 2023-02-23 2023-04-14 深圳思谋信息科技有限公司 Image detection model construction method and device, computer equipment and storage medium
CN115965856B (en) * 2023-02-23 2023-05-30 深圳思谋信息科技有限公司 Image detection model construction method, device, computer equipment and storage medium
CN117468083A (en) * 2023-12-27 2024-01-30 浙江晶盛机电股份有限公司 Control method and device for seed crystal lowering process, crystal growth furnace system and computer equipment
CN117468083B (en) * 2023-12-27 2024-05-28 浙江晶盛机电股份有限公司 Control method and device for seed crystal lowering process, crystal growth furnace system and computer equipment

Also Published As

Publication number Publication date
CN114898357B (en) 2022-10-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
OL01 Intention to license declared