CN115358981A - Glue defect determining method, device, equipment and storage medium - Google Patents


Info

Publication number
CN115358981A
Authority
CN
China
Prior art keywords
glue
point
image
sample
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210980120.7A
Other languages
Chinese (zh)
Inventor
王昌安
王亚彪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210980120.7A priority Critical patent/CN115358981A/en
Publication of CN115358981A publication Critical patent/CN115358981A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method, an apparatus, a device, and a storage medium for determining glue defects, relating to the technical field of artificial intelligence. The method comprises the following steps: acquiring a first point image of a sample, wherein the first point image shows some of the components in the sample and the glue between those components, and is obtained by photographing the first point location of the sample; determining the glue area where the glue is located in the first point image according to feature information of the first point image extracted by a semantic segmentation model, wherein the feature information is deep-learning information extracted by the semantic segmentation model; and determining the glue defect of the sample at the first point location according to the distribution of the glue area. The glue area determined from the feature information has higher precision and is closer to the actual situation, so the glue defect determined from the distribution of this more realistic glue area is more accurate.

Description

Glue defect determining method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of artificial intelligence, in particular to a method, a device, equipment and a storage medium for determining glue defects.
Background
With the development of artificial intelligence technology, cameras have gradually taken on an important role in the field of artificial intelligence. Generally speaking, a camera module contains a number of different sub-components, which are joined together by glue. Whether the glue between these sub-components is defective is therefore key to judging whether a camera module is a good product. Typically, the glue location of a camera module is photographed, and the area where the glue is located is determined from the captured image.
In the related art, the glue area of an image is determined by binarization: the pixels corresponding to the glue and the pixels corresponding to everything else are set to two different values, yielding the glue area. Defects are then judged from the glue area so determined.
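As a concrete illustration of this related-art approach, the following sketch binarizes a grayscale image with a fixed global threshold; the threshold value of 128 and the assumption that glue appears dark are illustrative choices, not taken from the patent:

```python
import numpy as np

def binarize_glue_region(gray_image, threshold=128):
    """Naive glue-region extraction by global thresholding.

    Pixels darker than `threshold` are treated as glue (value 1),
    all others as background (value 0). Both the threshold and the
    dark-glue assumption are illustrative, not from the patent.
    """
    return (gray_image < threshold).astype(np.uint8)

# A toy 4x4 grayscale image: the dark left half plays the role of glue.
img = np.array([[30, 40, 200, 210],
                [35, 45, 205, 215],
                [32, 42, 202, 212],
                [33, 43, 203, 213]], dtype=np.uint8)
mask = binarize_glue_region(img)
```

As the next paragraph notes, such a fixed threshold fails when the glue reflects light: bright reflective glue pixels fall on the wrong side of the cut-off and the extracted region becomes incomplete.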
However, in the above related art, because of problems with the shooting angle and possible light reflection from the glue itself, the accuracy of the determined glue defect is not high, and the result may differ considerably from the actual situation.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a storage medium for determining glue defects. The technical scheme is as follows:
according to an aspect of an embodiment of the present application, there is provided a method for determining a glue defect, the method including:
acquiring a first point image of a sample, wherein the first point image shows some of the components in the sample and the glue between those components, and the first point image is obtained by photographing a first point location of the sample;
determining a glue area where the glue is located in the first point image according to feature information of the first point image extracted by a semantic segmentation model, wherein the feature information is deep-learning information extracted by the semantic segmentation model;
and determining the glue defect of the sample at the first point location according to the distribution of the glue area.
According to an aspect of an embodiment of the present application, there is provided a glue defect determining apparatus, including:
an image acquisition module, configured to acquire a first point image of a sample, wherein the first point image shows some of the components in the sample and the glue between those components, and the first point image is obtained by photographing a first point location of the sample;
a region determining module, configured to determine a glue area where the glue is located in the first point image according to feature information of the first point image extracted by a semantic segmentation model, wherein the feature information is deep-learning information extracted by the semantic segmentation model;
and a defect determining module, configured to determine the glue defect of the sample at the first point location according to the distribution of the glue area.
According to an aspect of embodiments of the present application, there is provided a computer device comprising a processor and a memory, the memory having stored therein a computer program, the computer program being loaded and executed by the processor to implement the above-mentioned method.
According to an aspect of the embodiments of the present application, there is provided a computer-readable storage medium having a computer program stored therein, the computer program being loaded and executed by a processor to implement the above method.
According to an aspect of an embodiment of the present application, there is provided a computer program product including a computer program stored in a computer-readable storage medium. The processor of the computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the computer device performs the above-described method.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
First, an image of the sample (also called the first point image) is acquired at the point location (also called the first point location) where glue defects are to be detected; feature information of the image is then obtained through a semantic segmentation model, from which the glue area of the first point image is determined. Once the glue area is determined, the glue defect of the sample at the first point location is determined according to the distribution of the glue area. In this technical solution, the deep-learning information of the first point image is extracted with a semantic segmentation model and the glue area is determined from that information, so determining the glue area is not hampered by glue reflections; the glue area determined from the feature information has higher precision and is closer to the actual situation. Accordingly, the glue defect determined from the distribution of this more realistic glue area is more accurate.
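The three steps above can be sketched end to end as follows; this is a minimal illustration in which a fixed threshold stands in for the trained semantic segmentation model and a minimum-area rule is an assumed example of a distribution check, neither of which is specified by the patent:

```python
import numpy as np

def segment_glue(point_image):
    # Stand-in for the semantic segmentation model: a fixed threshold
    # plays the role of the learned per-pixel glue/background decision.
    return (point_image < 128).astype(np.uint8)

def has_glue_defect(glue_mask, min_area=4):
    # Toy distribution check: too little glue counts as a defect
    # (e.g. missing or broken glue). `min_area` is an assumed parameter.
    return int(glue_mask.sum()) < min_area

point_image = np.full((4, 4), 200, dtype=np.uint8)
point_image[1:3, 1:3] = 50          # a small 2x2 glue blob
defective = has_glue_defect(segment_glue(point_image))
```

In the patent's actual scheme, `segment_glue` would be replaced by inference with the pre-trained semantic segmentation model, and the distribution check would examine where the glue lies, not only how much of it there is.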
Drawings
FIG. 1 is a schematic illustration of an environment for implementing an embodiment provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a method for determining defects in glue according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a method for determining sample defects provided by one embodiment of the present application;
FIG. 4 is a flow chart of a method for determining glue defects according to an embodiment of the present application;
FIG. 5 is a schematic view of a glue area provided by one embodiment of the present application;
FIG. 6 is a flow chart of a method for determining glue defects according to another embodiment of the present application;
FIG. 7 is a diagram of a semantic segmentation model provided by one embodiment of the present application;
FIG. 8 is a schematic diagram of a foreground region provided by one embodiment of the present application;
FIG. 9 is a flow chart of a method for determining glue defects according to another embodiment of the present application;
FIG. 10 is a schematic diagram of fitted edge points provided by one embodiment of the present application;
FIG. 11 is a block diagram of a glue defect determination apparatus according to an embodiment of the present application;
FIG. 12 is a block diagram of a glue defect determination apparatus according to another embodiment of the present application;
fig. 13 is a block diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, the following detailed description of the embodiments of the present application will be made with reference to the accompanying drawings.
Before the technical solutions of the present application are introduced, some background knowledge related to the present application is described. The following related art is optional and can be freely combined with the technical solutions of the embodiments of the present application; all such combinations fall within the scope of the embodiments of the present application. The embodiments of the present application include at least part of the following contents.
Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines can perceive, reason, and make decisions.
Artificial intelligence technology is a comprehensive discipline covering a broad range of fields, including both hardware-level and software-level technologies. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly comprises natural language processing technology, machine learning/deep learning, and the like.
Computer Vision technology (CV) is the science of making machines "see": it uses cameras and computers in place of human eyes to perform machine-vision tasks such as identification and measurement of a target, with further image processing so that the result becomes an image better suited to human observation or to transmission to an instrument for detection. As a scientific discipline, computer vision studies theories and techniques that attempt to build artificial intelligence systems capable of capturing information from images or multidimensional data. Computer vision technology generally includes image processing, image recognition, image semantic understanding, image retrieval, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, virtual reality, augmented reality, simultaneous localization and mapping, and the like, as well as common biometric technologies such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multi-disciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and more. It specifically studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied across all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
With the research and development of artificial intelligence technology, the artificial intelligence technology is developed and researched in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical services, smart customer service and the like.
The scheme provided by the embodiment of the application relates to the technologies of artificial intelligence, such as computer vision and machine learning, and is specifically explained by the following embodiment.
Before the technical solutions of the present application are introduced, some terms related to the present application are explained. The following explanations are optional and can be freely combined with the technical solutions of the embodiments of the present application; all such combinations fall within the scope of the embodiments of the present application. The embodiments of the present application include at least some of the following.
Deep learning: the concept of deep learning stems from the study of artificial neural networks. A multi-layer perceptron with multiple hidden layers is one deep learning structure. Deep learning combines low-level features to form more abstract high-level representations of attribute categories or features, in order to discover distributed feature representations of data.
Convolutional Neural Network (CNN): a class of Feedforward Neural Networks (FNN) that involve convolution computations and have a deep structure; it is one of the representative algorithms of deep learning. A convolutional neural network has representation-learning capability and can perform translation-invariant classification of input information according to its hierarchical structure. In general, the basic structure of a CNN includes two kinds of layers. One is the feature extraction layer: the input of each neuron is connected to a local receptive field of the previous layer, from which it extracts a local feature; once a local feature is extracted, its positional relationship to other features is also determined. The other is the feature mapping layer: each computational layer of the network consists of multiple feature maps, each feature map is a plane, and all neurons on a plane share equal weights. A CNN is composed of an input layer, convolutional layers, activation functions, pooling layers, and fully connected layers.
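The feature extraction (convolution plus activation) and pooling steps just described can be illustrated with a minimal sketch; the toy edge-detection kernel and the 2×2 pooling window are illustrative choices:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 2-D convolution ('valid' padding): each output
    value is the weighted sum of a local receptive field of the input."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool_2x2(fmap):
    """2x2 max pooling, the down-sampling step of a pooling layer."""
    h, w = fmap.shape[0] // 2, fmap.shape[1] // 2
    return fmap[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[-1.0, 1.0]])  # toy horizontal-gradient detector
features = np.maximum(conv2d_valid(image, edge_kernel), 0)  # ReLU
pooled = max_pool_2x2(features)
```

Real CNNs such as the segmentation backbones named later stack many such convolution/pooling layers with learned (not hand-written) kernels.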
Neck glue: a camera module is assembled from a number of different sub-components connected by glue; when the glue is at the neck part of the camera module, it is called neck glue.
VGG (Visual Geometry Group) 16 network: the 16 in VGG16 refers to its 16 weight layers (convolutional and fully connected), which together contain about 138 million parameters. The VGG16 network is one kind of neural network, but at the same time it simplifies the architecture by using a uniform convolution kernel size of 3×3.
Mask image: an image after a masking operation. A masking operation recomputes the value of each pixel in the image through a mask kernel: the mask kernel describes how strongly the neighborhood pixels influence the new pixel value, and each pixel is recomputed as a weighted average according to the weight factors in the mask kernel. Image masking operations are commonly used in image smoothing, edge detection, feature analysis, and similar areas.
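The weighted-average masking operation described above can be sketched as follows, using a uniform 3×3 box kernel for smoothing (an illustrative choice; border pixels are left unchanged for simplicity):

```python
import numpy as np

def mask_smooth(image, mask_kernel):
    """Recompute each interior pixel as the weighted average of its
    3x3 neighborhood, with weights taken from `mask_kernel`."""
    k = mask_kernel / mask_kernel.sum()   # normalise the weight factors
    out = image.astype(float).copy()
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            out[i, j] = np.sum(image[i - 1:i + 2, j - 1:j + 2] * k)
    return out

box_kernel = np.ones((3, 3))              # uniform weights: a 3x3 box blur
img = np.zeros((5, 5)); img[2, 2] = 9.0   # one bright pixel
smoothed = mask_smooth(img, box_kernel)   # the bright spot is spread out
```

Swapping the box kernel for, e.g., a gradient kernel turns the same operation into the edge-detection use mentioned above.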
Refer to fig. 1, which illustrates a schematic diagram of an implementation environment provided by an embodiment of the present application. The implementation environment may include: a terminal device 10 and a server 20.
The terminal device 10 includes, but is not limited to, a mobile phone, a tablet Computer, a smart voice interaction device, a game console, a wearable device, a multimedia player, a PC (Personal Computer), a vehicle-mounted terminal, and an electronic device such as a smart home appliance. A client of the target application may be installed in the terminal device 10.
In the embodiment of the present application, the target application may be any application capable of providing defect detection. Typically, it is a glue defect detection application. Of course, besides glue defect detection applications, the glue defect determination service may also be provided in other types of applications, for example quality inspection applications, product applications, browser applications, Virtual Reality (VR) applications, Augmented Reality (AR) applications, and so on, which are not limited in the embodiment of the present application. In addition, different applications detect different defects, so the defect may be a glue defect or another defect, for example whether a component is complete or missing, which is not limited in the present application. Optionally, a client of the above application runs in the terminal device 10.
The server 20 is used for providing background services for clients of target applications in the terminal device 10. For example, the server 20 may be an independent physical server, a server cluster or a distributed system composed of a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, a cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a CDN (Content Delivery Network), and a big data and artificial intelligence platform, but is not limited thereto.
The terminal device 10 and the server 20 can communicate with each other via a network. The network may be a wired network or a wireless network.
In the method provided by the embodiment of the application, the execution subject of each step may be a computer device. The computer device may be any electronic device having data storage and processing capabilities. For example, the computer device may be the server 20 in fig. 1, may be the terminal device 10 in fig. 1, or may be another device other than the terminal device 10 and the server 20.
Please refer to fig. 2, which illustrates a schematic diagram of a method for determining a glue defect according to an embodiment of the present application.
First, the glue of a sub-component of the camera at the first point location is photographed to obtain the first point image in fig. 2, where 100 is the position of the camera's neck glue; it is easy to see that the photographed glue suffers from severe light reflection. If the related art were adopted, the glue area of the neck glue would be extracted by binarizing the image according to the color features of the neck glue; but because the neck glue has the reflection problem shown in fig. 2, it is difficult to extract the complete glue area of the neck glue by binarization, which greatly hampers the subsequent determination of glue defects from the glue area.
According to the technical solution provided by the embodiment of the present application, the feature information of the first point image is extracted, where the feature information is deep-learning information, and the glue area of the first point image is determined based on the feature information. Different defect types are then determined according to the glue area. In the related art, the defect type is judged by extracting features of the glue. However, the related art usually trains a model on defective samples, so that the defect type can be determined after the first point image of a sample is input; but since defective samples are relatively rare in actual industrial production, the related art has relatively few training samples, and the training effect of the model is not ideal with so few samples. The glue defect of the first point image determined by such a model is therefore also not ideal. In addition, automatic visual AI defect detection methods are used to detect defects, but considering that most glue defects arise from unreasonable glue distribution areas, such as glue overflow, glue leakage, and glue breakage, and that glue areas are irregular, automatic visual AI defect detection faces great challenges.
In contrast, the technical solution provided by the embodiment of the present application trains the model on a large number of first point images of genuine (defect-free) products, so the trained semantic segmentation model 110 can extract the complete glue area. Therefore, the glue area 120 extracted at the first point location is closer to the actual glue condition, and glue defects can be further determined from this glue area 120.
Thus, according to the technical solution provided by the embodiment of the present application, the deep-learning information of the first point location is extracted by the semantic segmentation model, and the glue area in the first point image of the sample is then determined, so that possible glue defects are identified; the final glue defect detection works well, is closer to the actual situation, and has higher precision.
Please refer to fig. 3, which illustrates a schematic diagram of a method for determining sample defects according to an embodiment of the present application.
The technical solution provided by the embodiment of the present application can serve as part of a quality inspection capability, applied to camera module inspection in an industrial AI quality inspection platform: it analyzes and judges abnormal neck glue and, together with the analysis results of other point locations, gives the defect judgment of the sample.
Fig. 3 shows a possible application scenario of the present application, namely the determination of the final defects of a sample in industry, where the final defect is determined by combining defects from multiple point locations. The camera shown in fig. 3 comprises a plurality of components: a light-through hole, a lens barrel, a lens, petals, a lens base, a flexible circuit board, and a connector; the glue at the joint of the lens base and the flexible circuit board may be called neck glue. The present application does not limit the point locations: any position that needs to be inspected may be called a point location. For example, sub-figures (a), (b), and (c) in fig. 3 are three point images obtained by photographing three different point locations. Sub-figure (b) is the point location of the neck glue, so neck glue detection is performed on sub-figure (b), that is, the glue defect at that position is determined using the technical solution provided by the embodiment of the present application.
Accordingly, the technical solution provided by the embodiment of the present application may be used to detect the defect at each point location, and other defect detection methods may also be used, which the present application does not limit. Finally, whether the sample is defective is determined according to the defect conditions of the plurality of point locations: if it has no defects, the sample is determined to be a good product; if it is defective, the sample is repaired or scrapped. The yield of a batch of products can then be counted on this basis.
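The aggregation of per-point results described above might look like the following sketch; the rule that a single defective point location makes the whole sample defective is an assumed example, since the patent combines per-point results but does not fix the exact combination rule:

```python
def sample_is_good(point_defects):
    """A sample is a good product only if no inspected point location
    reports a defect (assumed aggregation rule, for illustration)."""
    return not any(point_defects.values())

# Hypothetical results for the three point locations of fig. 3:
# True means a defect was detected at that point location.
results = {"point_a": False, "neck_glue": True, "point_c": False}
good = sample_is_good(results)
```

With per-sample verdicts in hand, the batch yield mentioned above is simply the fraction of samples for which `sample_is_good` returns `True`.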
Please refer to fig. 4, which illustrates a flowchart of a method for determining a glue defect according to an embodiment of the present application. The steps of the method may be executed by the terminal device 10 or the server 20 in the implementation environment shown in fig. 1. In the following method embodiments, for convenience of description, the executor of each step is simply referred to as the "computer device". The method may include at least one of the following steps (320-360):
Step 320: acquire a first point image of the sample, wherein the first point image shows some of the components in the sample and the glue between those components, and the first point image is obtained by photographing the first point location of the sample.
Sample: a product to be inspected whose possible defects are not yet known. The present application does not limit the kind of sample: it may be a camera module or another product. For example, other products connected with glue, such as robot modules, microscope modules, and sensor modules, can all use the technical solution provided by the embodiment of the present application to detect glue defects.
First point image: an image obtained by photographing the first point location of the sample. The first point location may be set manually or determined according to the specific condition of the sample; the present application does not limit it. The first point location includes, but is not limited to, the location of the neck glue. In some embodiments, the first point image includes, but is not limited to, a grayscale image, a color image, or a depth image. In some embodiments, the first point image is a grayscale image, and each pixel in the image corresponds to a gray value. In some embodiments, the first point image is a color image, and each pixel in the image corresponds to a red channel, a green channel, and a blue channel. In some embodiments, the first point image is a depth image, and each pixel in the image corresponds to a depth value. The present application does not limit the type of the first point image.
In some embodiments, the first point image should be taken at a fixed, standard angle, i.e., so that the photographed sample is horizontal. If the shooting angle is wrong, the captured first point image may not be horizontal, which causes the subsequently determined glue area to be tilted as well and to deviate from the actual situation. Therefore, when the first point image is not horizontal because of the shooting angle, the finally determined glue area can be brought closer to the real glue area by taking another shot, rotating the captured first point image to correct it, or rotating the determined glue area to correct it. In this way, inaccurate glue defect detection caused by the shooting angle can be avoided.
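One way to realize the rotation correction mentioned above is to estimate the tilt angle from points on a nominally horizontal edge and rotate by its negative; this is a hypothetical coordinate-level sketch, not the patent's specific procedure:

```python
import numpy as np

def estimate_tilt(points):
    """Estimate the tilt angle (radians) of a roughly horizontal edge
    by least-squares fitting a line y = a*x + b to its points."""
    x, y = points[:, 0], points[:, 1]
    a, _b = np.polyfit(x, y, 1)
    return np.arctan(a)

def rotate_points(points, angle):
    """Rotate 2-D points by -angle so the fitted edge becomes horizontal.
    (A full image rotation would apply the same rotation matrix to
    pixel coordinates instead.)"""
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])
    return points @ rot.T

# A toy glue edge tilted by 45 degrees.
edge = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
angle = estimate_tilt(edge)
corrected = rotate_points(edge, angle)   # y-coordinates now ~0
```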
Step 340: according to the feature information of the first point image extracted by the semantic segmentation model, determine the glue area where the glue is located in the first point image, wherein the feature information is deep learning information extracted by the semantic segmentation model.
Semantic segmentation model: the semantic segmentation model in the embodiments of the present application is a kind of neural network model, and the present application does not limit its specific structure: neither the number of convolution layers and pooling layers it contains nor the size of its convolution kernels is limited. Any model capable of extracting the glue area through the feature information can be regarded as a semantic segmentation model in the embodiments of the present application. In general, the semantic segmentation model is pre-trained. In some embodiments, the semantic segmentation model is pre-trained on a plurality of labeled first point images. The semantic segmentation model 110 shown in fig. 2 calculates the value of the cross-entropy loss function according to the difference between the predicted result (which may also be referred to as a predicted feature map) and the label, and then adjusts the parameters of the model. Optionally, the semantic segmentation model includes, but is not limited to, models such as DeepLab v3 and Cascade Mask R-CNN.
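As an illustration of the training signal just described (a minimal sketch, not the patent's implementation; the function name and array shapes are assumptions), the per-pixel cross-entropy between a predicted feature map and a binary label mask can be computed as follows:

```python
import numpy as np

# Hypothetical sketch: per-pixel cross-entropy between raw class scores and a
# labelled mask, the quantity a segmentation model minimizes during pre-training.
# Class 1 = glue, class 0 = background / other components.
def pixel_cross_entropy(logits, labels):
    """logits: (H, W, 2) raw scores; labels: (H, W) ints in {0, 1}.
    Returns the mean per-pixel cross-entropy loss."""
    # numerically stable softmax over the class axis
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    # probability assigned to the true class at each pixel
    h, w = labels.shape
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(-np.log(p_true).mean())
```

In a real training loop, the gradient of this loss with respect to the model parameters drives the parameter adjustment mentioned above.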
Feature information: used for characterizing the first point image. The form of the feature information is not limited here; it may be in the form of a feature vector or a feature matrix. In addition, the present application does not limit the type of the feature information: it may be detail feature information, semantic feature information, or semantic feature information superimposed on detail feature information. For details, refer to the following embodiments, which are not repeated herein.
Glue area: the glue area obtained by predicting on the first point image through the semantic segmentation model. The glue area in the embodiments of the present application refers to a glue area predicted from the image, not the area where the actual glue is located. In some embodiments, the glue area is highlighted based on the result of the model prediction.
Step 360: determine the glue defect of the sample at the first point according to the distribution of the glue area.
Glue defect: if the glue of the sample is defective compared with the glue of a genuine product, the sample is considered to have a glue defect. The type of the glue defect is not limited in the present application; it may be at least one of insufficient glue, glue leakage, glue breakage, and glue overflow. The distribution of the glue area may refer to the area size of the glue area, whether the glue area is complete, whether the two ends of the glue area are symmetrical, and so on; any of these conditions can be used for determining the glue defect and can be considered part of the distribution of the glue area.
Referring to fig. 5, a schematic diagram of a glue area provided by one embodiment of the present application is shown. Sub-image (a) and sub-image (b) can be regarded as first point images, where the circled position represents the position of the glue. Sub-image (c) and sub-image (d) are the glue areas determined from sub-image (a) and sub-image (b), respectively. Sub-image (c) can be regarded as the glue area of a genuine product; the glue area in sub-image (d) has a different width from the glue area in sub-image (c), so the glue in sub-image (b) can be regarded as having a glue defect.
According to the technical scheme provided by the embodiments of the present application, the image (called the first point image) of the sample at the point (called the first point) where the glue defect needs to be detected is acquired first, and then the feature information of the image is obtained through the semantic segmentation model, so that the glue area in the first point image can be determined. After the glue area is determined, the glue defect of the sample at the first point is determined according to the distribution of the glue area. In this technical scheme, the deep learning information of the first point image is extracted by the semantic segmentation model and the glue area is determined from that information, so the determination of the glue area is not hindered by glue reflection; the glue area determined from the feature information has higher precision and is closer to the actual situation. Therefore, the glue defect determined from the distribution of this more realistic glue area is more accurate.
Please refer to fig. 6, which shows a flowchart of a method for determining a glue defect according to another embodiment of the present application. The execution subject of the steps of the method can be the terminal device 10 or the server 20 in the embodiment environment shown in fig. 1. In the following method embodiments, for convenience of description, only the execution subject of each step is referred to as "computer equipment". The method may include at least one of the following steps (320-360):
Step 320: acquire a first point image of the sample, wherein the first point image shows some of the components in the sample and the glue between those components, and the first point image is an image obtained by shooting the first point of the sample.
Step 342: the semantic segmentation model includes a feature extraction submodel and a prediction submodel; obtain the feature information of the first point image through the feature extraction submodel.
Feature extraction submodel: used for extracting features of the first point image. In some embodiments, the feature extraction submodel is a VGG16 network. Optionally, the feature extraction submodel includes a plurality of convolutional layers and a plurality of pooling layers.
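As a rough illustration of how a VGG16-style stack of convolution and pooling layers shapes its feature maps (the function and the input size are hypothetical; the channel counts follow the standard VGG16 configuration):

```python
# Sketch: spatial resolution halves after each of the 5 VGG16 stages (2x2
# max-pooling) while channel depth grows.  Purely illustrative shape math,
# not the patent's model.
def vgg16_stage_shapes(h, w):
    channels = [64, 128, 256, 512, 512]   # output channels of the 5 conv stages
    shapes = []
    for c in channels:
        shapes.append((c, h, w))          # feature-map shape after the stage's convs
        h, w = h // 2, w // 2             # 2x2 max-pool halves each spatial dimension
    return shapes
```

For a 224x224 input this yields (64, 224, 224) after stage 1 down to (512, 14, 14) after stage 5, which is why later stages carry coarser, more semantic information.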
Prediction submodel: used for determining the prediction result of the first point image according to the feature information of the first point image. The prediction result may be a predicted feature map, or it may be the probability that each pixel belongs to each of the different categories, in which case the category with the maximum probability value is determined as the prediction result for that pixel. Optionally, the prediction submodel is a prediction module including a convolutional layer.
In some embodiments, shallow features and deep features of the first point image are extracted through the feature extraction submodel; the shallow features are used for representing the detail features of the first point image, and the deep features are used for representing the semantic features of the first point image. The shallow features and the deep features are fused to obtain the feature information of the first point image. In some embodiments, the detail features carry more of the pixel information of the first point image, while the semantic features carry more deep abstract information, which can be used to represent category information.
Referring to fig. 7, which shows a schematic diagram of a semantic segmentation model provided in an embodiment of the present application, a semantic segmentation model 700 includes a feature extraction submodel, a prediction submodel, and a decoder submodel. The feature extraction submodel comprises a plurality of convolution layers; optionally, there are 5 convolution layers, the first to the fifth from left to right. In some embodiments, the output of the third convolution layer is taken as the shallow feature and the output of the fifth convolution layer as the deep feature. The shallow feature is closer to the first point image and therefore retains more detail information, so it is used to characterize the detail features of the first point image. The deep feature is farther from the first point image, so it retains less detail information and carries more semantic information; that is, the deep feature is used to represent the semantic features of the image, namely the category to which each pixel in the image belongs. The shallow feature is processed by two further convolution layers to obtain the deep feature; in other words, the shallow feature is downsampled into the deep feature, so the two features have different sizes. Therefore, when fusing the shallow feature and the deep feature, the deep feature may be upsampled and restored to the same size as the shallow feature before fusion. In some embodiments, the features are fused using the decoder submodel. In some embodiments, the fusion of features may be a superposition along the channel dimension, or may be obtained after a specific operation, including but not limited to addition, subtraction, multiplication, and division.
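The upsample-then-fuse step described above can be sketched as follows. This is an illustration under stated assumptions, not the decoder submodel's actual operation: nearest-neighbour upsampling and channel concatenation stand in for whatever upsampling and fusion operations the model uses.

```python
import numpy as np

# Sketch: the deep feature map is upsampled back to the shallow feature's
# spatial size, then the two are superposed along the channel dimension.
def fuse_features(shallow, deep):
    """shallow: (Cs, H, W); deep: (Cd, H//k, W//k) with the same integer factor k."""
    k = shallow.shape[1] // deep.shape[1]
    up = deep.repeat(k, axis=1).repeat(k, axis=2)   # nearest-neighbour upsampling
    return np.concatenate([shallow, up], axis=0)    # channel-wise superposition
```

The fused map keeps the shallow map's resolution (detail) while carrying the deep map's semantic channels.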
Step 344: obtain a mask image corresponding to the first point image according to the feature information through the prediction submodel.
In some embodiments, two categories are set in advance in the model: the glue area is 1, and the other components or the background area is 0. When the prediction submodel processes the fused features, it judges the probability of each category for each pixel; when the probability that a pixel belongs to the glue area is greater than the probability that it belongs to the background area, the value of that pixel is determined as 1, otherwise as 0. A mask image may thus be obtained, in which different pixels have different values indicating the category to which each pixel belongs.
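The per-pixel decision just described reduces to a comparison of the two class probabilities. A minimal sketch (function name and array layout are assumptions):

```python
import numpy as np

# Sketch: each pixel gets probability scores for the two preset categories
# (background = 0, glue = 1); the mask takes the larger one at every pixel.
def make_mask(prob_background, prob_glue):
    """Both inputs are (H, W) probability arrays; returns an (H, W) 0/1 mask."""
    return (prob_glue > prob_background).astype(int)
```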
Step 346: determine the glue area where the glue in the first point image is located according to the region where the pixels whose values satisfy the condition are located in the mask image.
The condition is not limited in this application; the value may be required to equal a preset value, or to be greater than a threshold value. In some embodiments, the region where the pixels with the value 1 are located is determined as the glue area where the glue is located in the first point image.
In some embodiments, the region where the pixel point whose value satisfies the condition in the mask image is located is determined as a foreground region, and the foreground region includes at least one closed region. And determining the closed area with the largest area in the foreground area as the glue area where the glue is located.
Referring to fig. 8, a schematic diagram of a foreground region provided by an embodiment of the present application is shown. Optionally, the region where the pixels whose values in the mask image satisfy the condition are located is determined as the foreground region, such as the region 80, the region 81, and the region 82 in fig. 8. The foreground region comprises at least one closed region, and the closed region with the largest area, namely the region 81, is determined as the glue area where the glue is located. In some embodiments, the glue area in the foreground region may be determined according to a maximum connected component extraction algorithm.
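A maximum connected component extraction of the kind mentioned can be sketched with `scipy.ndimage`; this is an illustrative library choice, since the embodiment does not name one:

```python
import numpy as np
from scipy import ndimage

# Sketch: the foreground mask may contain several closed regions; keep only
# the one with the largest area as the glue region.
def largest_region(mask):
    """mask: (H, W) binary array. Returns a binary mask of the biggest component."""
    labelled, n = ndimage.label(mask)        # assign a label to each closed region
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labelled, range(1, n + 1))  # pixel count per region
    biggest = 1 + int(np.argmax(sizes))      # label id of the largest-area region
    return (labelled == biggest).astype(mask.dtype)
```

With default 4-connectivity, small spurious blobs (like regions 80 and 82 in fig. 8) are discarded and only the dominant region (81) survives.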
Step 360: determine the glue defect of the sample at the first point according to the distribution of the glue area.
In the above technical scheme, the semantic segmentation model is subdivided into the feature extraction submodel and the prediction submodel: the feature extraction submodel is used for extracting the feature information, and the prediction submodel is used for predicting the category to which each pixel belongs. This makes the principle of the model clearer, gives the model a definite division of labor, and allows the glue area to be determined better from the feature information.
In the above technical scheme, the shallow features and the deep features of the first point image are extracted through the feature extraction submodel, so the detail features and the semantic features can be combined. The accuracy of the model's prediction can therefore be higher, losing as little detail as possible and coming closer to the actual situation.
In the above technical scheme, the glue area where the glue is located is determined as the largest closed region in the foreground region, and removing the smaller closed regions can effectively reduce some errors. Of course, when more than one closed region is present, this indicates that the glue at this first point has certain defects, such as glue breakage; specific glue defects are identified in the embodiments below. Therefore, by extracting the closed region with the maximum area, errors can be reduced to a certain extent, which lays the groundwork for subsequent work and reduces the amount of data to be processed.
Please refer to fig. 9, which shows a flowchart of a method for determining a glue defect according to another embodiment of the present application. The execution subject of the steps of the method can be the terminal device 10 or the server 20 in the embodiment environment shown in fig. 1. In the following method embodiments, for convenience of description, only the execution subject of each step is referred to as "computer equipment". The method may include at least one of the following steps (320-366):
Step 320: acquire a first point image of the sample, wherein the first point image shows some of the components in the sample and the glue between those components, and the first point image is an image obtained by shooting the first point of the sample.
Step 340: determine the glue area where the glue in the first point image is located according to the feature information of the first point image extracted by the semantic segmentation model, wherein the feature information is the deep learning information extracted by the semantic segmentation model.
Step 362: determine a plurality of edge points of the glue area, the plurality of edge points forming the outline of the glue area.
In some embodiments, the mask image is scanned along its width, and the upper and lower endpoints of the glue in each column are taken as outline points of the glue area. In some embodiments, if the mask image is not horizontal, it needs to be rotation-corrected first.
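Assuming the lateral scan takes, in each column, the top-most and bottom-most glue pixels as outline points (one plausible reading of the scan described above), the step can be sketched as:

```python
import numpy as np

# Sketch: for each column of the binary mask, record the top-most and
# bottom-most glue pixels as contour (edge) points of the glue area.
def scan_edges(mask):
    """mask: (H, W) binary array. Returns lists of (row, col) top and bottom points."""
    top, bottom = [], []
    for col in range(mask.shape[1]):
        rows = np.flatnonzero(mask[:, col])   # row indices of glue pixels in this column
        if rows.size:                         # skip columns with no glue pixels
            top.append((int(rows[0]), col))
            bottom.append((int(rows[-1]), col))
    return top, bottom
```

The two point sequences trace the upper and lower outline of the glue region and feed the fitting step below.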
Step 364: predict a plurality of fitting edge points of the glue according to the plurality of edge points, wherein the plurality of fitting edge points are closer to the edge of the glue than the plurality of edge points.
In some embodiments, a number of abnormal points, which may also be understood as noise points, may occur among the plurality of edge points, making the edge points unreliable. In that case, the plurality of edge points need to be fitted, so that the fitting edge points are closer to the real situation.
In some embodiments, the fitting edge points are predicted using Huber regression. Huber regression is a variant of linear regression. In the related art, linear regression uses an MSE (Mean Square Error) loss function, which is not robust to abnormal points; when there are many noise points, the fitting result may deviate considerably. Therefore, the technical scheme provided by the present application adopts Huber regression, which uses an MAE (Mean Absolute Error)-like loss when the error between the predicted result (the fitting edge point) and the real point (the edge point) is large. This reduces the gradient signal contributed by abnormal points, brings the optimized result closer to the optimal value, and makes the predicted fitting edge points closer to the actual glue edge. The specific formula is as follows:
$$L_\delta(y, f(x)) = \begin{cases} \dfrac{1}{2}\,(y - f(x))^2, & |y - f(x)| \le \delta \\[4pt] \delta\,|y - f(x)| - \dfrac{1}{2}\,\delta^2, & \text{otherwise} \end{cases}$$
Here L represents the value of the loss function and δ (delta) is an adjustable parameter: the larger its value, the closer the loss is to the MSE loss function; the smaller its value, the closer it is to the MAE loss function. In practice its value depends on the proportion of abnormal points. f(x) represents the predicted fitting edge point and y represents the edge point. The linear regression method is not limited; any linear regression method capable of predicting the fitting edge points can be used in the technical scheme provided by the present application.
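A direct implementation of the Huber loss described above (an illustration, not the patent's code): quadratic and MSE-like for small residuals, linear and MAE-like once the residual exceeds δ.

```python
import numpy as np

# Huber loss: 0.5*r^2 for |r| <= delta, delta*|r| - 0.5*delta^2 otherwise,
# where r = y - f(x).  Large residuals (outliers) get only a linear penalty.
def huber_loss(y, f_x, delta):
    r = np.abs(y - f_x)
    quadratic = 0.5 * r ** 2
    linear = delta * r - 0.5 * delta ** 2
    return np.where(r <= delta, quadratic, linear)
```

For example, with δ = 2 a residual of 1 is penalized quadratically (0.5) while a residual of 5 is penalized only linearly (8 rather than MSE's 12.5), which is exactly the dampened gradient signal for abnormal points described above. In practice a ready-made robust fitter such as scikit-learn's `HuberRegressor` can play the same role.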
Step 366: determine the glue defect of the sample at the first point according to the distribution of the plurality of fitting edge points.
In some embodiments, the glue defect of the sample at the first point is determined according to the glue area surrounded by the fitting edge points.
If the ratio of the area of the region surrounded by the fitting edge points to the glue area of the genuine product at the first point is greater than a third value, it is determined that the sample has glue overflow at the first point; if that ratio is less than a fourth value, it is determined that the sample has less glue at the first point; the third value is greater than the fourth value. In some embodiments, the third value is 1.5; that is, if the ratio is greater than 1.5, the area of the region surrounded by the fitting edge points is at least 50% larger than the genuine glue area at the first point, indicating that the sample has glue overflow at the first point. In some embodiments, the fourth value is 0.5; that is, if the ratio is less than 0.5, the area of the region surrounded by the fitting edge points is at least 50% smaller than the genuine glue area at the first point, indicating that the sample has less glue at the first point. The present application does not limit the specific values of the third and fourth values.
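The area-ratio rule above can be condensed into one sketch; the function and its inputs are hypothetical, and 1.5 and 0.5 are the example third and fourth values from the text.

```python
# Sketch: compare the area enclosed by the fitting edge points against the
# genuine product's glue area at the same point location.
def area_defect(sample_area, genuine_area, third=1.5, fourth=0.5):
    ratio = sample_area / genuine_area
    if ratio > third:
        return "glue overflow"
    if ratio < fourth:
        return "less glue"
    return "no area defect"
```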
In some embodiments, the glue defect of the sample at the first point is determined based on the positions of the plurality of fitting edge points.
If the average width of the region surrounded by the fitting edge points is greater than the average width of the glue of the genuine product at the first point, it is determined that the sample has glue overflow at the first point; if it is less, it is determined that the sample has less glue at the first point. In some embodiments, the width between the fitting edge points of the glue in each column is measured, and the average width is determined from the widths of all the columns. In some embodiments, if the ratio of the average width of the region surrounded by the fitting edge points to the average width of the genuine product's glue at the first point is greater than a fifth value, it is determined that the sample has glue overflow at the first point. In some embodiments, if that ratio is less than a sixth value, it is determined that the sample has less glue at the first point; the fifth value is greater than the sixth value. In some embodiments, the fifth value is 1.3; that is, if the ratio is greater than 1.3, the average width of the region surrounded by the fitting edge points is at least 30% greater than the genuine average width at the first point, and it is determined that the sample has glue overflow at the first point.
In some embodiments, the sixth value is 0.7; that is, if the ratio of the average width of the region surrounded by the fitting edge points to the average width of the genuine product's glue at the first point is less than 0.7, the average width of the surrounded region is at least 30% smaller than the genuine average width at the first point, and it is determined that the sample has less glue at the first point.
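The column-width averaging and the two width-ratio thresholds of the last two paragraphs can be combined into one sketch (function and inputs hypothetical; 1.3 and 0.7 are the example fifth and sixth values):

```python
# Sketch: average the per-column widths between the fitted top and bottom
# edges, then compare against the genuine product's average width.
def width_defect(top_rows, bottom_rows, genuine_avg_width, fifth=1.3, sixth=0.7):
    widths = [b - t for t, b in zip(top_rows, bottom_rows)]   # width in each column
    avg = sum(widths) / len(widths)
    ratio = avg / genuine_avg_width
    if ratio > fifth:
        return "glue overflow"
    if ratio < sixth:
        return "less glue"
    return "no width defect"
```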
In some embodiments, abnormal points among the edge points are determined from the edge points and the fitting edge points. When the ratio of the number of abnormal points to the number of all edge points is greater than a seventh value, it is considered that there is less glue. Optionally, the seventh value is 0.2: when the ratio of the number of abnormal points to the number of all edge points is greater than 0.2, it is considered that there is less glue.
Referring to fig. 10, a schematic diagram of fitting edge points provided by an embodiment of the present application is shown. Sub-graph (a) in fig. 10 shows a case with many abnormal points, so it can be considered that part of the area has less glue.
In some embodiments, whether the sample has a glue defect at the first point is determined based on the symmetry of the plurality of fitting edge points: judge whether the glue of the sample at the first point is symmetrical according to the plurality of fitting edge points, and if it is not symmetrical, determine that the sample has less glue at the first point. In some embodiments, the widths of the abnormal points on the two sides of the neck glue are calculated separately: the plurality of fitting edge points are scanned from the two ends toward the middle to find the position of the first abnormal point, and the distance from that position to the next abnormal point is taken as the width of the abnormal points at that end. Assume the widths of the abnormal points at the two ends are a and b respectively, where a and b are positive, and require max(a, b)/min(a, b) < an eighth value; optionally, the eighth value is 2. If this range is exceeded, the two ends are asymmetric, and it is likely that one end has less glue. In sub-graph (b) of fig. 10, the width of the abnormal points on the right side is much larger than that on the left side, so it can be determined that there is less glue on the right side.
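The two-ended symmetry test reduces to a single ratio check; a minimal sketch (function hypothetical; the threshold 2 is the example eighth value from the text):

```python
# Sketch: a and b are the widths of the abnormal-point runs found by scanning
# from each end; the glue is considered symmetric if the larger run is less
# than `eighth` times the smaller one.
def symmetric(a, b, eighth=2.0):
    """a, b: positive abnormal-point widths at the two ends of the neck glue."""
    return max(a, b) / min(a, b) < eighth
```

An asymmetric result (e.g. the right-side run far wider than the left, as in fig. 10 sub-graph (b)) points to less glue on the wider-run side.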
In the above technical scheme, the fitting edge points are determined from the edge points. The fitting edge points can effectively avoid the noise points among some of the edge points, yielding a glue edge closer to the real situation and reducing the influence of noise points, so the scheme is more robust.
In the above technical scheme, the glue defect is determined according to the distribution of the fitting edge points. Since the fitting edge points are closer to the real glue edge, the accuracy of the glue defect determination can be improved, the detection effect is better, and the result better matches the actual situation.
In the technical scheme provided by the embodiments of the present application, different defects are judged according to the specific conditions of the fitting edge points, and the judgment results have high accuracy. The final glue defect is determined from the area enclosed by the fitting edge points, the positions of the fitting edge points, and the symmetry of the fitting edge points; different defect judgment criteria apply in different situations, making the defect judgment more standardized and more consistent with actual defect judgment.
In the technical scheme provided by the embodiments of the present application, the glue defects are treated as internal defects. For such defects, a complete glue area is first extracted by a deep image segmentation method (which can also be regarded as the semantic segmentation model), then the incomplete or overflowing part is detected by a geometric analysis method, and the degree of the defect is determined qualitatively according to the degree of distortion. The technical scheme makes full use of the shape characteristics of normal neck glue (genuine product) and can correctly analyze glue defects at the two ends or in the middle. First, the long side of the neck glue is modeled by Huber regression, and the abnormality of the neck glue is judged according to the distribution of abnormal points in the linear regression result. Therefore, a large number of sample images with glue defects is not needed for training, and a high detection rate can be achieved in practical use.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 11, a block diagram of a glue defect determination apparatus according to an embodiment of the present application is shown. The device has the functions of realizing the method examples, and the functions can be realized by hardware or by hardware executing corresponding software. The apparatus may be the computer device described above, or may be provided in a computer device. As shown in fig. 11, the apparatus 1400 may include: an image acquisition module 1410, a region determination module 1420, and a defect determination module 1430.
The image obtaining module 1410 is configured to obtain a first point image of a sample, where the first point image shows a part of components in the sample and glue between the part of components, and the first point image is an image obtained by shooting a first point of the sample.
The region determining module 1420 is configured to determine the glue area where the glue is located in the first point image according to feature information of the first point image extracted by a semantic segmentation model, where the feature information is deep learning information extracted by the semantic segmentation model.
The defect determining module 1430 is configured to determine, according to the distribution of the glue areas, a glue defect existing at the first point of the sample.
In some embodiments, the semantic segmentation model includes a feature extraction submodel and a prediction submodel.
In some embodiments, as shown in FIG. 12, the region determination module 1420 includes a feature acquisition unit 1422, an image determination unit 1424, and a region determination unit 1426.
The feature obtaining unit 1422 is configured to obtain the feature information of the first point image through the feature extraction submodel.
The image determining unit 1424 is configured to obtain, through the prediction submodel, a mask image corresponding to the first point image according to the feature information.
The region determining unit 1426 is configured to determine, according to the region where the pixels whose values satisfy the condition are located in the mask image, the glue area where the glue is located in the first point image.
In some embodiments, the feature obtaining unit 1422 is configured to extract, through the feature extraction submodel, shallow features and deep features of the first point image, where the shallow features are used to characterize the detail features of the first point image, and the deep features are used to characterize the semantic features of the first point image.
The feature obtaining unit 1422 is configured to fuse the shallow features and the deep features to obtain the feature information of the first point image.
In some embodiments, the region determining unit 1426 is configured to determine, as a foreground region, a region where a pixel point whose value in the mask image meets a condition is located, where the foreground region includes at least one closed region.
The region determining unit 1426 is configured to determine a closed region with the largest area in the foreground region as a glue region where the glue is located.
In some embodiments, as shown in fig. 12, the defect determination module 1430 includes an edge point determination unit 1432 and a defect determination unit 1434.
The edge point determining unit 1432 is configured to determine a plurality of edge points of the glue area, where the plurality of edge points form an outline of the glue area.
The edge point determining unit 1432 is configured to predict, according to the plurality of edge points, a plurality of fitting edge points of the glue, where the plurality of fitting edge points are closer to an edge of the glue than the plurality of edge points.
The defect determining unit 1434 is configured to determine, according to a distribution of the fitting edge points, a glue defect existing at the first point of the sample.
In some embodiments, the defect determining unit 1434 is configured to determine a glue defect existing at the first point of the sample according to a glue area surrounded by the plurality of fitting edge points; or, according to the position conditions of the plurality of fitting edge points, determining the glue defects of the sample at the first point; or, determining the glue defect of the sample at the first point according to the symmetry condition of the plurality of fitting edge points.
In some embodiments, the glue defects include at least one of a lack of glue and a spill of glue.
The defect determining unit 1434 is configured to determine that glue overflows exist in the first point of the sample if a ratio of an area of a region surrounded by the fitting edge points to a glue area of a genuine product at the first point is greater than a third value; if the ratio of the area of the region surrounded by the fitting edge points to the glue area of the certified product at the first point position is smaller than a fourth numerical value, determining that the sample has less glue at the first point position; wherein the third value is greater than the fourth value.
The defect determining unit 1434 is configured to determine that there is an overflow of the sample at the first point if an average width of a region surrounded by the plurality of fitting edge points is greater than an average width of glue of the genuine product at the first point; and if the average width of the area surrounded by the fitting edge points is smaller than the average width of the glue of the certified product at the first point position, determining that the sample has less glue at the first point position.
The defect determining unit 1434 is configured to determine whether the glue of the sample at the first point location is symmetrical according to the plurality of fitted edge points; and if the sample is not symmetrical, determining that the sample has less glue at the first point.
It should be noted that, when the apparatus provided in the foregoing embodiments implements its functions, the division into the functional modules above is merely illustrative; in practical applications, these functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments above belong to the same concept; for their specific implementation, refer to the method embodiments, which are not repeated here.
Fig. 13 shows a block diagram of a computer device according to an exemplary embodiment of the present application.
Generally, computer device 1500 includes: a processor 1501 and memory 1502.
The processor 1501 may include one or more processing cores, for example a 4-core processor or an 8-core processor. The processor 1501 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1501 may also include a main processor and a coprocessor: the main processor, also referred to as a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1501 may further include an AI processor for handling computing operations related to machine learning.
The memory 1502 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 1502 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1502 stores a computer program that is loaded and executed by the processor 1501 to implement the glue defect determining method provided by the above method embodiments.
Those skilled in the art will appreciate that the architecture illustrated in FIG. 13 does not constitute a limitation on the computer device 1500, which may include more or fewer components than illustrated, combine some components, or adopt a different component arrangement.
In an exemplary embodiment, a computer-readable storage medium is also provided, in which a computer program is stored; when executed by a processor, the computer program implements the glue defect determining method described above.
Optionally, the computer-readable storage medium may include: a ROM (Read-Only Memory), a RAM (Random Access Memory), an SSD (Solid State Drive), an optical disc, or the like. The random access memory may include a ReRAM (Resistive Random Access Memory) and a DRAM (Dynamic Random Access Memory).
In an exemplary embodiment, a computer program product is also provided, the computer program product comprising a computer program stored in a computer readable storage medium. The processor of the computer device reads the computer program from the computer readable storage medium, and executes the computer program, so that the computer device executes the method for determining the glue defect.
It should be understood that "a plurality of" herein means two or more. "And/or" describes an association between associated objects and indicates three possible relationships; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects. In addition, the step numbers used herein only show one exemplary execution order; in other embodiments, the steps may be performed out of that order, for example two differently numbered steps may be performed simultaneously, or in the reverse of the illustrated order, which is not limited in this application.
The above description is only exemplary of the application and should not be taken as limiting the application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the application should be included in the protection scope of the application.

Claims (10)

1. A method for determining glue defects, the method comprising:
acquiring a first point image of a sample, wherein the first point image shows some of the components in the sample and the glue between those components, and the first point image is an image obtained by photographing a first point of the sample;
determining a glue region where the glue is located in the first point image according to feature information of the first point image extracted by a semantic segmentation model, wherein the feature information is deep-learning information extracted by the semantic segmentation model;
and determining the glue defect of the sample at the first point according to the distribution of the glue region.
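The segmentation step of claim 1 can be sketched as follows, assuming the semantic segmentation model is any callable that maps a point image to per-pixel glue probabilities. The stand-in `fake_model` below is a brightness threshold used only to keep the sketch runnable; a real system would call the trained network here, and the 0.5 binarization threshold is an illustrative assumption.

```python
import numpy as np

def segment_glue(image: np.ndarray, model) -> np.ndarray:
    """Run a semantic segmentation model on a point image and binarize the
    per-pixel glue probabilities into a mask (1 = glue pixel)."""
    prob = model(image)                    # (H, W) glue probabilities
    return (prob > 0.5).astype(np.uint8)

# Stand-in "model": brightness thresholding, only so the sketch executes;
# a trained segmentation network would be called in its place.
fake_model = lambda img: img.mean(axis=-1) / 255.0

image = np.zeros((4, 4, 3), dtype=np.uint8)
image[1:3, 1:3] = 200                      # bright 2x2 patch standing in for glue
mask = segment_glue(image, fake_model)     # binary mask over the point image
```

The resulting mask then feeds the region-extraction and defect-determination steps of the later claims.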
2. The method of claim 1, wherein the semantic segmentation model comprises a feature extraction submodel and a prediction submodel;
the determining a glue region where the glue is located in the first point image according to the feature information of the first point image extracted by the semantic segmentation model includes:
acquiring the feature information of the first point image through the feature extraction submodel;
obtaining, through the prediction submodel, a mask image corresponding to the first point image according to the feature information;
and determining the glue region where the glue is located in the first point image according to a region where pixels whose values satisfy a condition are located in the mask image.
3. The method of claim 2, wherein the acquiring the feature information of the first point image through the feature extraction submodel comprises:
extracting a shallow feature and a deep feature of the first point image through the feature extraction submodel, wherein the shallow feature represents detail features of the first point image and the deep feature represents semantic features of the first point image;
and fusing the shallow feature and the deep feature to obtain the feature information of the first point image.
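The shallow/deep fusion of claim 3 can be sketched as upsampling the low-resolution deep (semantic) map to the shallow (detail) resolution and concatenating along the channel axis. Real networks typically use learned or bilinear upsampling; this NumPy sketch substitutes nearest-neighbour indexing, and the function name is illustrative.

```python
import numpy as np

def fuse_features(shallow: np.ndarray, deep: np.ndarray) -> np.ndarray:
    """Fuse a high-resolution shallow feature map (detail) with a
    low-resolution deep feature map (semantics): upsample the deep map to
    the shallow resolution by nearest-neighbour indexing, then concatenate
    along the channel axis. Shapes: shallow (H, W, C1), deep (h, w, C2)."""
    sh, sw = shallow.shape[:2]
    dh, dw = deep.shape[:2]
    rows = np.arange(sh) * dh // sh        # nearest-neighbour row indices
    cols = np.arange(sw) * dw // sw        # nearest-neighbour column indices
    deep_up = deep[rows][:, cols]          # (H, W, C2)
    return np.concatenate([shallow, deep_up], axis=-1)
```

The fused (H, W, C1 + C2) map is what the prediction submodel of claim 2 would consume to produce the mask image.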
4. The method according to claim 2, wherein the determining the glue region where the glue is located in the first point image according to the region where pixels whose values satisfy the condition are located in the mask image comprises:
determining a region where pixels whose values satisfy the condition are located in the mask image as a foreground region, wherein the foreground region comprises at least one closed region;
and determining the closed region with the largest area in the foreground region as the glue region where the glue is located.
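The "closed region with the largest area" step of claim 4 amounts to connected-component labelling of the mask foreground followed by an area argmax. A plain-NumPy BFS sketch is below; production code would normally call a library routine such as OpenCV's `connectedComponentsWithStats`, and 4-connectivity is an assumption here.

```python
import numpy as np
from collections import deque

def largest_foreground_region(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest 4-connected foreground region of a binary mask
    (the glue region, per the largest-closed-region step)."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    best_label, best_size, current = 0, 0, 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not labels[sy, sx]:
                current += 1
                size = 0
                queue = deque([(sy, sx)])
                labels[sy, sx] = current
                while queue:               # BFS flood fill of one region
                    y, x = queue.popleft()
                    size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not labels[ny, nx]:
                            labels[ny, nx] = current
                            queue.append((ny, nx))
                if size > best_size:
                    best_label, best_size = current, size
    if best_label == 0:                    # empty mask: no foreground at all
        return np.zeros_like(mask, dtype=np.uint8)
    return (labels == best_label).astype(np.uint8)
```

Keeping only the largest component discards small spurious foreground blobs left by the segmentation step.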
5. The method of claim 1, wherein the determining the glue defect of the sample at the first point according to the distribution of the glue region comprises:
determining a plurality of edge points of the glue region, wherein the plurality of edge points form the contour of the glue region;
predicting a plurality of fitted edge points of the glue according to the plurality of edge points, wherein the fitted edge points are closer to the true edge of the glue than the edge points;
and determining the glue defect of the sample at the first point according to the distribution of the plurality of fitted edge points.
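The two steps of claim 5 — contour extraction and edge-point fitting — can be sketched as follows. The moving-average "fit" is a deliberately crude stand-in for whatever fitting the patent's model performs (which is not specified here), and both function names are illustrative.

```python
import numpy as np

def edge_points(mask: np.ndarray) -> np.ndarray:
    """Return the (y, x) coordinates of foreground pixels that touch the
    background, i.e. the contour of the glue region."""
    padded = np.pad(mask, 1)               # zero border so edges count as background
    all_fg_neighbours = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                         padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere((mask == 1) & (all_fg_neighbours == 0))

def fit_edge_points(contour: np.ndarray, window: int = 3) -> np.ndarray:
    """Smooth an *ordered* closed contour with a circular moving average,
    a crude stand-in for the patent's edge-point fitting step."""
    n = len(contour)
    offsets = np.arange(-(window // 2), window // 2 + 1)
    idx = (np.arange(n)[:, None] + offsets) % n    # circular neighbourhoods
    return contour[idx].mean(axis=1)
```

The fitted points then feed the area, position, and symmetry checks of claims 6 and 7.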
6. The method according to claim 5, wherein the determining the glue defect of the sample at the first point according to the distribution of the plurality of fitted edge points comprises:
determining the glue defect of the sample at the first point according to the area of the region enclosed by the plurality of fitted edge points;
or,
determining the glue defect of the sample at the first point according to the positions of the plurality of fitted edge points;
or,
determining the glue defect of the sample at the first point according to the symmetry of the plurality of fitted edge points.
7. The method of claim 6, wherein the glue defect includes at least one of insufficient glue and glue overflow;
the determining the glue defect of the sample at the first point according to the area of the region enclosed by the plurality of fitted edge points comprises: determining that the sample has glue overflow at the first point if the ratio of the area of the region enclosed by the plurality of fitted edge points to the glue area of a genuine product at the first point is greater than a third value; and determining that the sample has insufficient glue at the first point if the ratio is smaller than a fourth value, wherein the third value is greater than the fourth value;
the determining the glue defect of the sample at the first point according to the positions of the plurality of fitted edge points comprises: determining that the sample has glue overflow at the first point if the average width of the region enclosed by the plurality of fitted edge points is greater than the average width of the glue of the genuine product at the first point; and determining that the sample has insufficient glue at the first point if the average width is smaller than that of the glue of the genuine product at the first point;
and the determining the glue defect of the sample at the first point according to the symmetry of the plurality of fitted edge points comprises: judging, according to the plurality of fitted edge points, whether the glue of the sample at the first point is symmetrical; and determining that the sample has insufficient glue at the first point if it is not.
8. An apparatus for determining a glue defect, the apparatus comprising:
an image acquisition module, configured to acquire a first point image of a sample, wherein the first point image shows some of the components in the sample and the glue between those components, and the first point image is an image obtained by photographing a first point of the sample;
a region determining module, configured to determine a glue region where the glue is located in the first point image according to feature information of the first point image extracted by a semantic segmentation model, wherein the feature information is deep-learning information extracted by the semantic segmentation model;
and a defect determining module, configured to determine the glue defect of the sample at the first point according to the distribution of the glue region.
9. A computer device, comprising a processor and a memory, wherein the memory stores a computer program that is loaded and executed by the processor to implement the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which is loaded and executed by a processor to implement the method according to any one of claims 1 to 7.
CN202210980120.7A 2022-08-16 2022-08-16 Glue defect determining method, device, equipment and storage medium Pending CN115358981A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210980120.7A CN115358981A (en) 2022-08-16 2022-08-16 Glue defect determining method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210980120.7A CN115358981A (en) 2022-08-16 2022-08-16 Glue defect determining method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115358981A true CN115358981A (en) 2022-11-18

Family

ID=84001303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210980120.7A Pending CN115358981A (en) 2022-08-16 2022-08-16 Glue defect determining method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115358981A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106251352A (en) * 2016-07-29 2016-12-21 武汉大学 A kind of cover defect inspection method based on image procossing
CN110503638A (en) * 2019-08-15 2019-11-26 上海理工大学 Spiral colloid amount online test method
CN111767875A (en) * 2020-07-06 2020-10-13 中兴飞流信息科技有限公司 Tunnel smoke detection method based on instance segmentation
CN111768415A (en) * 2020-06-15 2020-10-13 哈尔滨工程大学 Image instance segmentation method without quantization pooling
CN111862092A (en) * 2020-08-05 2020-10-30 复旦大学 Express delivery outer package defect detection method and device based on deep learning
CN113344901A (en) * 2021-06-25 2021-09-03 北京市商汤科技开发有限公司 Gluing defect detection method and device, storage medium and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116609344A (en) * 2023-07-17 2023-08-18 苏州思谋智能科技有限公司 Defect detection method, device and equipment for camera socket and storage medium
CN116609344B (en) * 2023-07-17 2023-11-03 苏州思谋智能科技有限公司 Defect detection method, device and equipment for camera socket and storage medium

Similar Documents

Publication Publication Date Title
CN110060237B (en) Fault detection method, device, equipment and system
CN110059741B (en) Image recognition method based on semantic capsule fusion network
CN110728209B (en) Gesture recognition method and device, electronic equipment and storage medium
CN111754396B (en) Face image processing method, device, computer equipment and storage medium
CN112052839A (en) Image data processing method, apparatus, device and medium
CN111160249A (en) Multi-class target detection method of optical remote sensing image based on cross-scale feature fusion
CN111445459A (en) Image defect detection method and system based on depth twin network
CN113920107A (en) Insulator damage detection method based on improved yolov5 algorithm
CN109740539B (en) 3D object identification method based on ultralimit learning machine and fusion convolution network
CN108447060B (en) Foreground and background separation method based on RGB-D image and foreground and background separation device thereof
CN112257665A (en) Image content recognition method, image recognition model training method, and medium
CN114331949A (en) Image data processing method, computer equipment and readable storage medium
CN112819008B (en) Method, device, medium and electronic equipment for optimizing instance detection network
CN115861210B (en) Transformer substation equipment abnormality detection method and system based on twin network
CN113516126A (en) Adaptive threshold scene text detection method based on attention feature fusion
CN115830004A (en) Surface defect detection method, device, computer equipment and storage medium
CN113516146A (en) Data classification method, computer and readable storage medium
CN114332473A (en) Object detection method, object detection device, computer equipment, storage medium and program product
CN111144425B (en) Method and device for detecting shot screen picture, electronic equipment and storage medium
CN114331946A (en) Image data processing method, device and medium
CN117011274A (en) Automatic glass bottle detection system and method thereof
CN115937626A (en) Automatic generation method of semi-virtual data set based on instance segmentation
CN117557784B (en) Target detection method, target detection device, electronic equipment and storage medium
CN117252815A (en) Industrial part defect detection method, system, equipment and storage medium based on 2D-3D multi-mode image
CN115358981A (en) Glue defect determining method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination