CN117853826B - Object surface precision identification method based on machine vision and related equipment - Google Patents


Info

Publication number
CN117853826B
Authority
CN
China
Prior art keywords
object surface
surface image
image
detection
detected
Prior art date
Legal status
Active
Application number
CN202410260380.6A
Other languages
Chinese (zh)
Other versions
CN117853826A (en)
Inventor
杨登丰
Current Assignee
Tgb Precision Technology Shenzhen Co ltd
Original Assignee
Tgb Precision Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Tgb Precision Technology Shenzhen Co ltd filed Critical Tgb Precision Technology Shenzhen Co ltd
Priority to CN202410260380.6A
Publication of CN117853826A
Application granted
Publication of CN117853826B

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an object surface precision identification method based on machine vision and related equipment. The method needs neither a separately debugged neural network for each object surface image type nor classification-based identification of the object surface image type; instead, it adopts an object surface image detection network capable of understanding image front-and-back autocorrelation (contextual) information. Because this network can understand, from the contextual information of images, the quality detection specifications corresponding to different object surface image types, where each quality detection specification indicates the characteristic information of an object surface image type, it can identify the object surface image type of the object surface image to be detected under different quality detection specifications. Establishing various quality detection specifications in advance broadens the applicability of a single object surface image detection network and increases its adaptability in use, thereby reducing network cost.

Description

Object surface precision identification method based on machine vision and related equipment
Technical Field
The application relates to the technical fields of data processing and computer vision, in particular to an object surface precision identification method based on machine vision and related equipment.
Background
With the continuous development of industrial manufacturing and automation technology, object surface quality detection has become an indispensable link in the production process. Conventional object surface quality inspection typically relies on manual visual inspection or simple machine vision systems, but these methods are limited by the subjectivity and fatigue of the human eye and the simplicity of the machine vision system, resulting in inefficient and error-prone inspection. Therefore, developing an efficient and accurate object surface image detection method is an urgent need in the industry. In recent years, deep learning technology has made remarkable breakthroughs in the field of image processing; in particular, models such as convolutional neural networks (CNNs) show excellent performance in tasks such as image classification and target detection. However, when existing deep learning models are applied to object surface image detection, they often face problems such as insufficient training data and weak model generalization capability. Furthermore, due to the variety and complexity of object surface defects, how to efficiently extract and utilize feature information in images is also a challenge. In the field of machine vision, determining whether an object surface has defects is generally achieved by image classification and recognition, that is, by assigning object surface images to corresponding categories and training a dedicated neural network algorithm for each image type.
However, because recognition specifications differ, the resulting classifications differ as well; for example, defect recognition specifications may cover roughness, waviness, shape, texture, color, and so on. In the prior art, a separate classification algorithm must be trained for each specification, so the training and inference cost of the algorithms is too high and their flexibility is insufficient.
Disclosure of Invention
In view of this, the embodiments of the present application at least provide a machine vision-based object surface accuracy recognition method and related apparatus.
The technical scheme of the embodiment of the application is realized as follows:
In one aspect, an embodiment of the present application provides a machine vision-based object surface accuracy recognition method, where the method includes: acquiring a surface image of an object to be detected; determining a semantic commonality measurement result between the object surface image to be detected and a first reference object defect image, and determining a first reference object defect image corresponding to the semantic commonality measurement result meeting a first setting requirement as a first matching reference object defect image, wherein the first reference object defect image and a quality detection specification comprise a first mapping relation, the quality detection specification is used for indicating characteristic information of an object surface image type corresponding to the object surface image, and the first reference object defect image is an object surface image meeting the quality detection specification comprising the first mapping relation; determining a target quality detection specification corresponding to the first matching reference object defect image based on the first mapping relation; and detecting the object surface image to be detected through an object surface image detection network according to the target quality detection specification to obtain the object surface image type of the object surface image to be detected, wherein the object surface image detection network is a neural network with image front-back autocorrelation information understanding performance.
In some embodiments, the method further comprises: determining a semantic commonality measurement result between the object surface image to be detected and a second reference object defect image, and determining a second reference object defect image corresponding to the semantic commonality measurement result meeting a second setting requirement as a second matching reference object defect image, wherein the second reference object defect image is an object surface image for generating object surface image type false detection; detecting the object surface image to be detected through an object surface image detection network according to the target quality detection specification to obtain an object surface image type of the object surface image to be detected, including: and detecting the object surface image to be detected through the object surface image detection network according to the target quality detection specification and the second matching reference object defect image to obtain the object surface image type of the object surface image to be detected.
In some embodiments, the quality inspection specification and inspection item comprise a second mapping relationship, the inspection item to indicate an object surface image type, the second reference object defect image and the inspection item comprising a third mapping relationship; if the number of the second matching reference object defect images is a plurality of, detecting the object surface image to be detected through the object surface image detection network according to the target quality detection specification and the second matching reference object defect images to obtain an object surface image type of the object surface image to be detected, wherein the method comprises the following steps: determining a target detection item corresponding to the target quality detection specification based on the second mapping relation; determining candidate detection items respectively corresponding to a plurality of second matching reference object defect images based on the third mapping relation; determining a target matching reference object defect image from a plurality of second matching reference object defect images based on the target detection item and a plurality of candidate detection items, wherein the candidate detection items corresponding to the target matching reference object defect image are identical to the target detection item; and detecting the object surface image to be detected through the object surface image detection network according to the target quality detection specification and the target matching reference object defect image to obtain the object surface image type of the object surface image to be detected.
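The selection logic of this embodiment can be sketched in a few lines. This is an illustrative, simplified model: the second and third mapping relationships are represented as plain dictionaries, and all identifiers (`spec_roughness`, `ref_img_a`, etc.) are hypothetical names, not taken from the patent.

```python
# Hypothetical sketch: when several second matching reference defect images
# exist, keep only those whose detection item (third mapping relationship)
# equals the target detection item derived from the target quality detection
# specification (second mapping relationship). All names are illustrative.

spec_to_item = {"spec_roughness": "item_roughness"}   # second mapping relationship
image_to_item = {                                      # third mapping relationship
    "ref_img_a": "item_roughness",
    "ref_img_b": "item_waviness",
}

def pick_target_matches(target_spec, second_matches):
    """Return the second matching images whose candidate detection item
    is identical to the target detection item of the target specification."""
    target_item = spec_to_item[target_spec]
    return [img for img in second_matches
            if image_to_item.get(img) == target_item]

print(pick_target_matches("spec_roughness", ["ref_img_a", "ref_img_b"]))
# ['ref_img_a']
```

Only `ref_img_a` survives because its candidate detection item matches the target item; `ref_img_b` indicates a different object surface image type and is discarded.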
In some embodiments, the quality inspection specification includes a second mapping relationship with an inspection item, the inspection item to indicate an object surface image type, the method further comprising: acquiring a target detection item corresponding to the surface image of the object to be detected; the determining, based on the first mapping relationship, a target quality detection specification corresponding to the first matching reference object defect image includes: determining a plurality of temporary quality detection specifications corresponding to the first matching reference object defect image based on the first mapping relation; and determining a target quality detection specification corresponding to the target detection item from a plurality of temporary quality detection specifications based on the second mapping relation.
In some embodiments, the quality inspection specification includes a second mapping relationship with an inspection item, the inspection item to indicate an object surface image type, the method further comprising: acquiring a target detection item corresponding to the surface image of the object to be detected; determining a plurality of temporary quality detection specifications corresponding to the target detection items based on the second mapping relation; the determining a semantic commonality measurement result between the object surface image to be detected and the first reference object defect image, and determining the first reference object defect image corresponding to the semantic commonality measurement result meeting the first setting requirement as a first matching reference object defect image, includes: determining a first temporary reference object defect image corresponding to the temporary quality detection specification based on the first mapping relation; determining a semantic commonality measurement result between the object surface image to be detected and the first temporary reference object defect image, and determining the first temporary reference object defect image corresponding to the semantic commonality measurement result meeting the first setting requirement as a first matching reference object defect image.
In some embodiments, the detecting the object surface image to be detected through an object surface image detection network according to the target quality detection specification, to obtain an object surface image type of the object surface image to be detected, includes: generating a first annotation proposal sample based on the object surface image to be detected and the target quality detection specification, wherein the first annotation proposal sample is used for indicating the object surface image detection network to generate a corresponding image position of an object surface image type for obtaining the object surface image to be detected; and detecting the object surface image to be detected through the object surface image detection network based on the first annotation proposal sample to obtain the object surface image type and the corresponding image position of the object surface image to be detected.
In some embodiments, if the number of the object surface images to be detected is greater than a set critical number, detecting, by an object surface image detection network, the object surface image to be detected according to the target quality detection specification, to obtain an object surface image type of the object surface image to be detected, including: generating a second annotation proposal sample based on the object surface image to be detected and the target quality detection specification, wherein the second annotation proposal sample is used for indicating a network positioning proposal area for detecting the object surface image, and the proposal area is used for indicating the position of an object surface image area meeting the target quality detection specification in the object surface image to be detected; and detecting the object surface image to be detected through the object surface image detection network based on the second annotation proposal sample to obtain the object surface image type and the proposal area of the object surface image to be detected.
In some embodiments, the method further comprises: acquiring a reference object defect image, wherein the reference object defect image is the first reference object defect image or the second reference object defect image; excavating characteristic information of the reference object defect image to obtain a reference characterization vector, and storing the reference characterization vector; the semantic commonality measurement result is determined in the following manner: excavating characteristic information of the surface image of the object to be detected to obtain a characterization vector to be detected; and determining a semantic commonality measurement result between the to-be-detected characterization vector and the reference characterization vector.
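A minimal sketch of this embodiment, under stated assumptions: the characterization vectors below are toy stand-ins for mined CNN features, and cosine similarity is assumed as the semantic commonality measure (the patent does not fix a particular metric). The point is the precompute-and-store pattern: reference vectors are mined once, and only the image under test is mined at detection time.

```python
import math

# Reference characterization vectors are mined once and stored; at detection
# time only the to-be-detected image is mined and compared against the store.
reference_store = {}  # reference defect image id -> stored characterization vector

def store_reference(ref_id, feature_vector):
    reference_store[ref_id] = feature_vector

def cosine_commonality(u, v):
    # Assumed semantic commonality measure: cosine similarity of two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

store_reference("crack_ref", [1.0, 0.0, 1.0])
probe = [0.9, 0.1, 1.1]  # characterization vector mined from the test image
score = cosine_commonality(probe, reference_store["crack_ref"])
```

Storing the reference vectors rather than the raw images means the expensive feature-mining step runs once per reference image, not once per comparison.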
In some embodiments, if the reference object defect image is the second reference object defect image and the reference characterization vector is the second reference characterization vector, mining feature information of the reference object defect image to obtain the reference characterization vector, including: grouping the plurality of second reference object defect images to obtain a plurality of reference object defect image sets, wherein the second reference object defect images included in each reference object defect image set represent the same quality detection specification; and excavating characteristic information of each reference object defect image set to obtain a plurality of second reference characterization vectors.
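The grouping step above can be sketched as follows. This is a hedged toy version: real characterization vectors would come from a feature-mining network, and the per-group aggregation shown here (an element-wise mean) is one plausible choice, not necessarily the patent's.

```python
from collections import defaultdict

# Second reference defect images representing the same quality detection
# specification are grouped, and one second reference characterization
# vector is produced per group (here: element-wise mean of toy vectors).

def group_and_characterize(images):
    """images: list of (spec_id, feature_vector) pairs."""
    groups = defaultdict(list)
    for spec_id, vec in images:
        groups[spec_id].append(vec)
    return {
        spec_id: [sum(col) / len(col) for col in zip(*vecs)]
        for spec_id, vecs in groups.items()
    }

vectors = group_and_characterize([
    ("spec_crack", [1.0, 0.0]),
    ("spec_crack", [0.0, 1.0]),
    ("spec_scratch", [2.0, 2.0]),
])
# vectors["spec_crack"] == [0.5, 0.5]
```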
In some embodiments, the training method of the object surface image detection network is as follows: acquiring an object surface image sample and a quality detection standard sample corresponding to the object surface image sample, wherein the object surface image sample comprises an priori mark for indicating the type of the object surface image corresponding to the object surface image sample; fusing the object surface image sample, the quality detection standard sample and the guide variable to obtain object surface image input data; detecting the object surface image sample through an initial object surface image detection network based on the object surface image input data to obtain a predicted object surface image type of the object surface image sample; and correcting the neural network parameters corresponding to the guide variables based on the errors between the type of the predicted object surface image and the prior marks of the object surface image sample to obtain the object surface image detection network.
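The training scheme can be illustrated with a deliberately tiny numerical toy, under strong simplifying assumptions: the sample, the quality detection standard sample, and the guide variable are fused by concatenation; a fixed linear scorer stands in for the frozen detection network; and only the guide-variable entries are corrected from the prediction error. All numbers and names are illustrative, not from the patent.

```python
# Toy sketch: correct only the parameters corresponding to the guide
# variable, based on the error between the predicted type score and the
# prior mark, while the rest of the network stays frozen.

sample = [0.8, 0.2]          # mined object-surface-image sample features (assumed)
spec = [1.0]                 # encoded quality detection standard sample (assumed)
guide = [0.0, 0.0]           # trainable guide variable
frozen_w = [0.5, -0.5, 1.0, 1.0, 1.0]  # frozen detection-network weights
label = 1.0                  # prior mark for the sample's image type

def predict(g):
    fused = sample + spec + g  # object surface image input data (concatenation)
    return sum(w * x for w, x in zip(frozen_w, fused))

for _ in range(50):
    error = predict(guide) - label
    # gradient step on the guide variable only; frozen_w is never touched
    guide = [g - 0.1 * error * w for g, w in zip(guide, frozen_w[3:])]
```

Because only `guide` is updated, the error shrinks geometrically toward zero while the frozen weights are preserved, which is the essence of tuning a guide variable against a fixed network.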
In some embodiments, the fusing the object surface image sample, the quality detection standard sample and the guide variable to obtain object surface image input data includes: combining the object surface image sample, the quality detection standard sample and the guide variable to obtain a combined object surface image; and obtaining object surface image input data based on the combined object surface image and the second reference object defect image, wherein the second reference object defect image is an object surface image which generates false detection of the object surface image type.
In another aspect, the application provides a computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, the processor implementing the steps in the method described above when the computer program is executed.
The beneficial effects of the application at least include the following. The machine vision-based object surface precision identification method and related equipment need neither a separately debugged neural network for each object surface image type nor classification-based identification of the object surface image; instead, they adopt an object surface image detection network capable of understanding image front-and-back autocorrelation information. Because this network can understand, from the contextual information of images, the quality detection specifications corresponding to different object surface image types, where each quality detection specification indicates the characteristic information of an object surface image type, it can identify the object surface image type of the object surface image to be detected under different quality detection specifications. To broaden the application of the object surface image detection network, various quality detection specifications may be established in advance. Because there are a large number of quality detection specifications, in order to increase the applicability of object surface image precision identification, the application matches each quality detection specification with a first reference object defect image, which is an object surface image satisfying the characteristic information of the object surface image type described by that quality detection specification.
After the object surface image to be detected is obtained, a semantic commonality measurement result between it and each first reference object defect image is computed, and the first reference object defect image whose semantic commonality measurement result meets the first setting requirement is determined as the first matching reference object defect image; the semantic commonality between this image and the object surface image to be detected is relatively high. The target quality detection specification corresponding to the first matching reference object defect image can then be determined from the first mapping relationship between first reference object defect images and quality detection specifications. This target quality detection specification is the quality detection specification applicable to the object surface image to be detected, and the object surface image detection network can identify the object surface image type corresponding to the image under this specification. In this way, a suitable target quality detection specification is automatically locked for the object surface image to be detected according to the semantic commonality measurement result, and the object surface image detection network, with its ability to understand image front-and-back autocorrelation information, identifies the object surface image type according to the target quality detection specification.
Establishing various quality detection specifications in advance increases the applicability of the object surface image detection network, so that a single object surface image detection network can identify object surface images of various object surface image types; this increases the adaptability of the network in use and reduces network cost.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic implementation flow chart of an object surface accuracy recognition method based on machine vision according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a hardware entity of a computer device according to an embodiment of the present application.
Detailed Description
The technical solution of the present application will be further elaborated with reference to the accompanying drawings and embodiments. The described embodiments should not be construed as limiting the application; all other embodiments obtained by one skilled in the art without inventive effort fall within the scope of protection of the present application.
In the following description, reference is made to "some embodiments", which describe a subset of all possible embodiments; it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict. The terms "first/second/third" merely distinguish similar objects and do not represent a particular ordering; it is understood that "first/second/third" may be interchanged in a particular order or sequence where permitted, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing the application only and is not intended to be limiting of the application.
The embodiment of the application provides a machine vision-based object surface precision identification method, which can be executed by a processor of computer equipment. The computer device may refer to a device with data processing capability such as a server, a notebook computer, a tablet computer, a desktop computer, a smart television, a mobile device (e.g., a mobile phone, a portable video player, a personal digital assistant, a dedicated messaging device, a portable game device), etc.
Fig. 1 is a schematic implementation flow chart of a machine vision-based object surface accuracy recognition method according to an embodiment of the present application, as shown in fig. 1, the method includes the following steps:
step S10: and acquiring a surface image of the object to be detected.
In actual operation, the computer device performs this step through the equipped machine vision system. Machine vision systems typically include an image acquisition device, such as a camera or scanner, for capturing an image of a surface of an object. These image acquisition devices are configured to operate at specific resolution and light conditions to ensure that the quality and accuracy of the acquired image meets the requirements of subsequent processing. For example, assume a computer device is equipped with a high resolution industrial camera that is mounted in a fixed location and provides uniform lighting conditions through a suitable lighting system. When an object to be detected is placed within the field of view of the camera, the camera is triggered for image capture. In this process, the computer device optimizes the capture of the image by controlling parameters of the camera (e.g., exposure time, focal length, etc.). Once the image is captured, it is transferred to the memory of the computer device and stored as an image of the surface of the object to be inspected. The image will contain various detailed information of the object surface, such as texture, color, shape, etc., which are critical for precision recognition and classification in subsequent steps.
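As an illustration of the exposure-control idea described above, a capture pipeline might gate frames on a crude quality check before storing them as the image to be detected. This check, its thresholds, and the 8-bit grayscale representation are assumptions for illustration, not part of the patent.

```python
# Illustrative pre-check (assumed, not from the patent): before a captured
# frame is stored as the object surface image to be detected, verify that
# its mean brightness falls inside a configured band, as a crude proxy for
# correct exposure. Thresholds 40/215 are hypothetical.

def exposure_ok(gray_image, lo=40, hi=215):
    """gray_image: 2-D list of 0-255 grayscale pixel values."""
    pixels = [p for row in gray_image for p in row]
    mean = sum(pixels) / len(pixels)
    return lo <= mean <= hi

frame = [[120, 130], [125, 135]]   # a small well-exposed toy frame
print(exposure_ok(frame))          # True
```

A frame that fails the check would trigger a re-capture with adjusted camera parameters (e.g., exposure time) rather than being passed on to detection.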
Step S20: determining a semantic commonality measurement result between an object surface image to be detected and a first reference object defect image, and determining the first reference object defect image corresponding to the semantic commonality measurement result meeting a first setting requirement as a first matching reference object defect image, wherein the first reference object defect image and a quality detection specification comprise a first mapping relation, the quality detection specification is used for indicating characteristic information of an object surface image type corresponding to the object surface image, and the first reference object defect image is an object surface image meeting the quality detection specification comprising the first mapping relation.
In step S20, the semantic commonality measurement result may be understood as the similarity or matching degree between the image to be detected and the reference image on the semantic level. The computer device extracts features of the images and compares them through image processing and machine learning techniques, such as Convolutional Neural Networks (CNNs) in deep learning. These features may include color, texture, shape, etc., which together form the semantic information of the image.
For example, assume that the object surface image to be detected is a photograph of an electronic product surface, and the first reference object defect image library includes multiple types of electronic product defect images, such as cracks, scratches, and discoloration. The computer device first performs feature extraction on the image to be detected and each reference image using a pre-trained convolutional neural network model. The extracted features may be texture features, color distribution features, and the like of the electronic product surface. Then, by calculating the similarity or distance between the features, a semantic commonality measurement result is obtained. Next, according to the first setting requirement, the computer device screens out the first reference object defect images whose semantic commonality measurement results satisfy the condition (i.e., the first setting requirement). These conditions may be similarity thresholds, rankings, and the like, depending on the application scenario and requirements. A first reference object defect image satisfying the condition is regarded as an image with high semantic commonality with the image to be detected, i.e., a first matching reference object defect image.
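The screening in step S20 can be sketched as follows. This is a hedged toy: the two-dimensional vectors stand in for CNN features, cosine similarity is assumed as the semantic commonality measure, and the threshold plays the role of the "first setting requirement".

```python
import math

# Compare the test image's feature vector against each first reference defect
# image and keep those whose semantic commonality (here: cosine similarity)
# meets a threshold, the assumed first setting requirement.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def first_matches(test_vec, references, threshold=0.8):
    return [name for name, vec in references.items()
            if cosine(test_vec, vec) >= threshold]

refs = {"crack": [1.0, 0.0], "discoloration": [0.0, 1.0]}
print(first_matches([0.95, 0.05], refs))   # ['crack']
```

A ranking-based requirement (e.g., top-k most similar references) would replace the threshold comparison with a sort, as the passage notes.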
In addition, step S20 involves the first mapping relationship between quality detection specifications and first reference object defect images. A quality detection specification is a standard or criterion describing the characteristic information of an object surface image type. It may be a schematic image (e.g., an image illustrating the corresponding detection criterion, such as texture), a textual description (e.g., a direct description of the corresponding inspection criterion), or another form of information such as a parameter value (e.g., the maximum allowable surface roughness). Each first reference object defect image is associated with a quality detection specification, and this association is established by the first mapping relationship. In practical applications, the mapping may be implemented in various ways, for example using tags, metadata, or a database. When the computer device finds the first matching reference object defect image, it can quickly determine the associated quality detection specification through this mapping relationship. This quality detection specification provides an important basis and guidance for subsequent object surface image type identification.
By determining the semantic commonality measurement result and the first matching reference object defect image, a solid foundation is laid for the subsequent object surface precision identification process. The method enables the computer equipment to accurately understand the similarity between the image to be detected and the reference image, and performs subsequent processing and identification operation according to the quality detection specification.
Step S30: and determining a target quality detection specification corresponding to the first matching reference object defect image based on the first mapping relation.
Specifically, when executing step S30, the computer device first accesses a data structure or database storing the first mapping relation. This data structure or database records the relationship between each reference object defect image and its corresponding quality inspection specifications. The quality inspection specifications are a detailed standard describing the specific conditions or characteristics that the surface of an object should meet, such as flatness, gloss, flawless, etc.
For example, if the first matching reference object defect image is a specific type of electronic product surface defect image, such as a crack, the target quality detection specification corresponding to the first matching reference object defect image may specify parameters such as a maximum width, a length, and a depth of the crack. These parameters will be used for subsequent object surface accuracy assessment.
In practice, the computer device will automatically identify and resolve features of the first matching reference object defect image using image processing techniques and machine learning algorithms. These features may be extracted by a deep learning model such as Convolutional Neural Network (CNN), such as texture, color, shape, etc. of the image. Then, based on these features and the first mapping relation, the computer device can accurately find a target quality detection specification corresponding thereto. This process is fully automated, without human intervention. The computer equipment not only can quickly and accurately determine the target quality detection specification, but also can provide reliable standards and references for subsequent object surface precision evaluation. Thus, the whole identification process is more efficient, accurate and reliable.
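The lookup in step S30 reduces to a direct query once the first mapping relationship is in place. In this minimal sketch the mapping is modeled as a plain dictionary from reference defect image id to its quality detection specification; all identifiers and parameter values are hypothetical.

```python
# Assumed model of the first mapping relationship: reference defect image
# id -> its quality detection specification (here, crack-size parameters).

first_mapping = {
    "crack_ref_01": {"max_width_mm": 0.05, "max_length_mm": 2.0},
    "scratch_ref_07": {"max_depth_um": 10},
}

def target_spec(first_matching_image_id):
    """Determine the target quality detection specification for the
    first matching reference object defect image."""
    return first_mapping[first_matching_image_id]

print(target_spec("crack_ref_01"))
# {'max_width_mm': 0.05, 'max_length_mm': 2.0}
```

In a production system this dictionary would typically be a database table or image metadata, as the passage on the first mapping relationship notes.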
Step S40: according to the target quality detection specification, detecting the object surface image to be detected through an object surface image detection network to obtain the object surface image type of the object surface image to be detected, wherein the object surface image detection network is a neural network with image front-back autocorrelation information understanding performance.
Specifically, the computer device invokes a pre-trained object surface image detection network. The network is a neural network capable of understanding the front-back autocorrelation information (namely, the context information) of images, and can capture details and spatial relations in the image so as to accurately identify the surface of an object. This design enables the network to process complex image information and extract features that are critical to determining the type of object surface. Specifically, the object surface image detection network may adopt a convolutional neural network (CNN) structure, combined with techniques such as an attention mechanism, a recurrent neural network (RNN) or a long short-term memory (LSTM) network to enhance its context understanding capability. Such a network structure can effectively process sequence information and spatial dependence in the image, thereby improving the accuracy of identifying the object surface type.
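The two ingredients named above, local convolutional features plus a context-mixing attention step, can be illustrated with a toy numpy sketch. This is not the patent's network; shapes, kernel, and input row are invented for illustration.

```python
import numpy as np

# Toy sketch: a convolution captures local spatial detail, then a
# self-attention step mixes in context from the whole feature row
# (the "front-back autocorrelation" / context understanding idea).
def conv1d_valid(signal, kernel):
    """'Valid' 1-D convolution (no padding) over a feature row."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def self_attention(features):
    """Each position re-weights every position by dot-product similarity,
    so the output at one location depends on the entire row."""
    scores = np.outer(features, features)            # pairwise similarity
    scores = scores - scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # softmax per position
    return weights @ features

row = np.array([0.0, 0.1, 0.9, 1.0, 0.9, 0.1, 0.0])    # bright defect blob
local = conv1d_valid(row, np.array([1.0, 0.0, -1.0]))  # edge-like local feature
context = self_attention(local)                        # context-aware features
print(context.shape)
```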
For example, if the object surface image to be detected is a photograph of the surface of an electronic product, the object surface image detection network will first pre-process the image, such as scaling, cropping or normalizing, to adapt it to the input requirements of the network. The network then extracts features of the image, such as texture, color and shape, layer by layer and performs a comprehensive analysis of these features in combination with the context information. Finally, the network outputs a recognition result indicating the type of the surface image of the electronic product, such as a smooth surface, a rough surface or a defective surface. This recognition result is derived based on the target quality detection specification, which ensures that the accuracy assessment of the object surface is consistent with established quality standards. Through step S40, the computer device can automatically complete accurate identification of a large number of object surface images, and provide powerful data support for subsequent quality control and improvement.
As an implementation manner, the method provided by the embodiment of the application further includes the steps of: determining a semantic commonality measurement result between the object surface image to be detected and a second reference object defect image, and determining the second reference object defect image corresponding to the semantic commonality measurement result meeting a second setting requirement as a second matching reference object defect image, wherein the second reference object defect image is an object surface image for generating false detection of the object surface image type.
Then, based on this, step S40, according to the target quality detection specification, detects the object surface image to be detected through the object surface image detection network, to obtain the object surface image type of the object surface image to be detected, which specifically may include: and detecting the object surface image to be detected through an object surface image detection network according to the target quality detection specification and the second matching reference object defect image to obtain the object surface image type of the object surface image to be detected.
As an implementation manner, the method provided by the embodiment of the application extends the original method by adding the determination of the semantic commonality measurement result between the object surface image to be detected and the second reference object defect image, and by selecting a suitable second reference object defect image according to the second setting requirement, thereby further improving the accuracy of object surface image type detection. The second reference object defect image refers to an object surface image that is prone to false detection in actual detection; by introducing such images, the method can better adapt to complex and changeable actual detection scenes.
In particular, in determining the semantic commonality metric between the object surface image to be detected and the second reference object defect image, the computer device may utilize a machine learning algorithm, such as a deep neural network, to extract feature vectors of the object surface image to be detected and the second reference image. These feature vectors contain key information in the images, such as texture, shape, color, etc., which are used to calculate semantic similarity between the images. The semantic commonality measurement reflects the similarity of the image to be detected and the second reference image at the semantic level, which may be a value or vector, depending on the measurement method used.
For example, the computer device may use cosine similarity to calculate the angle between feature vectors, resulting in a semantic commonality measurement. The cosine similarity ranges from -1 to 1, with a value closer to 1 indicating that the two images are more semantically similar. By setting a suitable threshold, the computer device may screen out second reference images that are semantically sufficiently similar to the image to be detected as candidate matching images.
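The cosine-similarity screening just described can be sketched as follows. The feature vectors, reference names, and threshold are all invented for illustration; in practice the vectors would come from the deep feature extractor.

```python
import numpy as np

# Sketch of the semantic commonality screening: cosine similarity between
# feature vectors, with a threshold standing in for the "second setting
# requirement". All vectors and the threshold are hypothetical.
def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

to_detect = np.array([0.9, 0.1, 0.4])         # features of image to detect
references = {
    "ref_glare": np.array([0.8, 0.2, 0.5]),   # image prone to false detection
    "ref_shadow": np.array([-0.7, 0.6, 0.1]),
}

threshold = 0.9  # hypothetical second setting requirement
candidates = {name: cosine_similarity(to_detect, vec)
              for name, vec in references.items()}
matches = [name for name, sim in candidates.items() if sim >= threshold]
print(matches)
```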
Next, according to a second setting requirement, the computer device selects a most suitable second matching reference object defect image from the candidate matching images. The second setting requirement may be a series of rules or conditions for ensuring that the selected second matching image is semantically similar to the image to be detected, and can reflect a false detection situation possibly occurring in actual detection. These rules or conditions may be formulated based on a priori knowledge, expert experience, or actual data distribution. After the second matching reference object defect image is determined, the execution of step S40 will be different. In particular, the computer device will no longer detect the object surface image to be detected solely in accordance with the target quality detection specification, but will take into account the information of the second matching reference image at the same time. This means that the features or context information associated with the second matching image may be introduced at the input or during processing of the object surface image detection network, thereby enhancing the network's ability to identify and process false detection situations.
For example, in Convolutional Neural Networks (CNNs), which are commonly used in the computer vision field, the information of the second matching image may be fused by adding additional input channels or feature maps. These additional channels or feature maps may contain feature vectors, semantic tags, or other relevant information for the second matching image. In this way, the CNN can learn the feature representations of the image to be detected and the second matching image at the same time, and comprehensively consider the information of the two images in the decision stage, so that the type of the object surface image to be detected can be more accurately identified.
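The channel-fusion idea above, stacking reference-image information as extra input channels, can be shown in a couple of lines. The array sizes are illustrative only; a real network would consume full-resolution multi-channel tensors.

```python
import numpy as np

# Sketch of channel fusion: the image to be detected and a feature map
# derived from the second matching reference image are concatenated along
# the channel axis before entering the CNN. Sizes are made up.
h, w = 4, 4
image_to_detect = np.random.rand(1, h, w)        # 1 channel: the image itself
reference_feature_map = np.random.rand(1, h, w)  # 1 channel: reference info

fused_input = np.concatenate([image_to_detect, reference_feature_map], axis=0)
print(fused_input.shape)  # the network now sees both sources at once
```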
As one embodiment, the quality detection specification and the detection item (i.e. an identification task) include a second mapping relationship, the detection item is used for indicating the type of the image on the surface of the object, and the second reference object defect image and the detection item include a third mapping relationship;
If the number of the second matching reference object defect images is plural, detecting the object surface image to be detected through the object surface image detection network according to the target quality detection specification and the second matching reference object defect images to obtain an object surface image type of the object surface image to be detected, which specifically includes:
step S41: determining a target detection item corresponding to the target quality detection specification based on the second mapping relation;
Step S42: determining candidate detection items corresponding to the plurality of second matching reference object defect images respectively based on the third mapping relation;
Step S43: determining a target matching reference object defect image from a plurality of second matching reference object defect images based on the target detection item and the plurality of candidate detection items, wherein the candidate detection items corresponding to the target matching reference object defect image are identical to the target detection item;
step S44: and detecting the object surface image to be detected through an object surface image detection network according to the target quality detection specification and the target matching reference object defect image to obtain the object surface image type of the object surface image to be detected.
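Steps S41 to S43 amount to two dictionary lookups followed by a filter, which can be sketched as below. The mapping contents and image names are hypothetical; the selected images are the ones that would accompany the detection in step S44.

```python
# Sketch of steps S41-S43: use the second and third mapping relations to
# keep, among several second matching reference images, those whose
# detection item equals the target detection item. All data are made up.
second_mapping = {   # quality detection specification -> detection item
    "spec_scratch_depth": "identify deep scratches",
    "spec_flatness": "identify uneven surfaces",
}
third_mapping = {    # reference defect image -> detection item
    "ref_A": "identify deep scratches",
    "ref_B": "identify uneven surfaces",
    "ref_C": "identify deep scratches",
}

def select_target_matches(target_spec, second_matches):
    target_item = second_mapping[target_spec]                        # S41
    candidate_items = {m: third_mapping[m] for m in second_matches}  # S42
    return [m for m, item in candidate_items.items()                 # S43
            if item == target_item]

targets = select_target_matches("spec_scratch_depth",
                                ["ref_A", "ref_B", "ref_C"])
print(targets)
```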
The second and third mapping relations are key relationships established when implementing the quality detection scheme, and they help the computer device understand and handle complex detection tasks. The second mapping relationship refers to the correspondence between the quality detection specification and the detection item (i.e., the recognition task). A quality detection specification is a set of rules or guidelines defining the quality criteria that the surface of an object should meet. A detection item is a specific detection task to be performed, generally used for identifying a specific type or feature of the object surface. For example, in the electronics manufacturing industry, quality detection specifications may include requirements on surface flatness, flaw size, color uniformity, and the like. The corresponding detection items may be "identify electronic products with substandard surface flatness", "detect defects on the surface of electronic products", "evaluate the color consistency of electronic products", and so on. The second mapping relationship is established so that each quality detection specification can find one or more detection items corresponding to it. Thus, when quality detection is required, the computer device can quickly determine the detection tasks to be executed according to the established specification.
The third mapping relationship is a corresponding relationship between the detection item and the second reference object defect image. The second reference object defect image refers to an image for assisting in identifying the object surface defect in the actual inspection process. These images typically contain defect types that are difficult to identify or prone to false positives in normal detection. Continuing with electronic product processing as an example, certain types of electronic product flaws may be easily ignored or misinterpreted in conventional inspection. In order to improve the accuracy of the detection, these difficult-to-identify flaw images may be collected as the second reference object flaw image. These images are then associated with a specific detection item by a third mapping relationship. For example, one test item may be "identify light flaws on the surface of an electronic product". The corresponding second reference object defect image may be an image of the surface of the electronic product containing various light defects. When the computer device performs this inspection item, it refers to the images to enhance the ability to identify light flaws.
By establishing the third mapping relation, the computer equipment can fully utilize the information provided by the second reference object defect image when executing the detection task, thereby improving the accuracy and reliability of detection. The mapping relation is also beneficial to reducing the situations of misjudgment and missed detection, and improves the overall detection efficiency.
As an embodiment, a second mapping relationship exists between the quality inspection specification and the inspection item (i.e., one recognition task), while a third mapping relationship exists between the inspection item and the second reference object defect image. The mapping relations provide clear guidance for computer equipment in quality detection, and accuracy and high efficiency of a detection process are ensured. The quality inspection specifications are criteria for evaluating the accuracy of the surface of an object, and the inspection items specifically indicate the type of image of the surface of the object that needs to be identified. The second mapping relationship associates quality inspection specifications with corresponding inspection items, ensuring that each inspection has a clear purpose and standard. For example, a quality inspection specification may require inspection of all electronic product surface imperfections, and the corresponding inspection item is an image type identifying electronic product surface imperfections. The second reference object defect image is an image which is easy to generate false detection in actual detection, and the third mapping relation between the second reference object defect image and the detection item provides additional reference information for the computer equipment. When there are a plurality of second matching reference object defect images, the computer device needs to further filter according to these mapping relationships to determine the most relevant reference image.
Specifically, the computer device first determines a target detection item corresponding to the target quality detection specification based on the second mapping relationship. For example, if the target quality inspection specification is related to the inspection of surface defects of electronic products, the target inspection item is to identify the image type of the surface defects of the electronic products. Next, the computer device determines candidate detection items respectively corresponding to the plurality of second matching reference object defect images based on the third mapping relationship. These candidate detection items are potential identification tasks associated with the second matching reference image. For example, some of the second matching reference images may contain different types of electronic product surface flaws, and each flaw type corresponds to a candidate inspection item. The computer device then determines a target matching reference object defect image from the plurality of second matching reference object defect images by comparing the target detection item with the plurality of candidate detection items. The target matching reference image is the image most relevant to the target detection item, i.e. its corresponding candidate detection item is identical to the target detection item. In this way, the computer device can ensure that the most representative reference image is used in the subsequent detection process.
Finally, the computer equipment detects the object surface image to be detected through the object surface image detection network according to the target quality detection specification and the target matching reference object defect image. The network is a neural network with image understanding capability, and can accurately identify the type of the object surface image to be detected according to the input target quality detection specification and the target matching reference image. In this way, the computer device can effectively utilize the mapping relationship and the reference image information to improve the accuracy of object surface accuracy identification.
In another embodiment, the quality detection specification and the detection item include a second mapping relationship, and the detection item is used for indicating the type of the image on the surface of the object, where the method provided by the embodiment of the application further includes: and acquiring a target detection item corresponding to the surface image of the object to be detected. At this time, step S30, based on the first mapping relationship, determines a target quality detection specification corresponding to the first matching reference object defect image, which may specifically include:
Step S31: determining a plurality of temporary quality detection specifications corresponding to the first matching reference object defect image based on the first mapping relation;
step S32: and determining a target quality detection specification corresponding to the target detection item from the plurality of temporary quality detection specifications based on the second mapping relation.
In this further embodiment, a second mapping relationship exists between the quality inspection specifications and the inspection items, such that each inspection item corresponds to a particular quality inspection specification.
Step S31 requires the computer device to determine a plurality of temporary quality inspection specifications corresponding to the first matching reference object defect image based on the first mapping relationship. The first mapping relationship is a correspondence relationship between the reference object defect image and the quality detection specification. By this relationship, the computer device is able to find all quality detection specifications associated with the first matching reference object defect image, which are considered temporary at this time, as they also need to be subjected to further screening and validation.
For example, assume that the first matching reference object defect image is an electronic product surface image that includes scratches. Based on the first mapping relationship, the computer device may find a plurality of quality detection specifications related to it, for example, "the depth of a scratch on the electronic product surface must not exceed 0.5 mm", "the width of a scratch on the electronic product surface must not exceed 2 mm", and so on. These specifications are considered temporary quality detection specifications because they still require further screening in the subsequent steps. Next, step S32 is performed, which requires the computer device to determine the target quality detection specification corresponding to the target detection item from the plurality of temporary quality detection specifications based on the second mapping relation. The second mapping relationship is the correspondence between the quality detection specification and the detection item as described above. Using this relationship, the computer device can determine which temporary quality detection specification matches the target detection item.
Continuing with the example above, if the target detection item is "identify scratches having a depth of more than 0.5 mm on the electronic product surface", the computer device will find the temporary quality detection specification corresponding to it through the second mapping relationship, namely "the depth of a scratch on the electronic product surface must not exceed 0.5 mm". This specification is then determined to be the target quality detection specification because it matches the target detection item exactly.
By such steps, the computer device can accurately determine the target quality detection specification corresponding to the surface image of the object to be detected, thereby ensuring that the subsequent detection process can be performed according to the correct specification. The method not only improves the accuracy of detection, but also is helpful to improve the efficiency and reliability of the whole quality detection flow.
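The scratch example above (steps S31 and S32) can be condensed into a short sketch: gather the temporary specifications mapped to the matched reference image, then keep the one whose detection item matches the target. All strings and mappings are hypothetical.

```python
# Sketch of steps S31-S32 on the scratch example. Data are made up.
first_mapping = {   # reference defect image -> temporary specifications
    "scratch_ref": ["depth <= 0.5 mm", "width <= 2 mm"],
}
second_mapping = {  # quality detection specification -> detection item
    "depth <= 0.5 mm": "identify scratches deeper than 0.5 mm",
    "width <= 2 mm": "identify scratches wider than 2 mm",
}

def resolve_target_spec(matched_ref, target_item):
    temporary_specs = first_mapping[matched_ref]   # S31: all candidate specs
    for spec in temporary_specs:                   # S32: filter by item
        if second_mapping[spec] == target_item:
            return spec
    return None

spec = resolve_target_spec("scratch_ref",
                           "identify scratches deeper than 0.5 mm")
print(spec)
```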
In another embodiment, the quality detection specification and the detection item include a second mapping relationship, the detection item is used for indicating the type of the image on the surface of the object, and the method provided by the embodiment of the application further includes: acquiring a target detection item corresponding to a surface image of an object to be detected; and determining a plurality of temporary quality detection specifications corresponding to the target detection items based on the second mapping relation.
Then, based on this, step S20, determining a semantic commonality measurement result between the object surface image to be detected and the first reference object defect image, and determining the first reference object defect image corresponding to the semantic commonality measurement result satisfying the first setting requirement as the first matching reference object defect image may specifically include:
Step S21: determining a first temporary reference object defect image corresponding to the temporary quality detection specification based on the first mapping relation;
Step S22: determining a semantic commonality measurement result between the object surface image to be detected and the first temporary reference object defect image, and determining the first temporary reference object defect image corresponding to the semantic commonality measurement result meeting the first setting requirement as a first matching reference object defect image.
In this embodiment, the quality inspection flow involves a second mapping between quality inspection specifications and inspection items. This mapping allows the detection item to indicate a particular object surface image type, providing explicit guidance for subsequent image matching and detection. On the basis, when the computer equipment performs quality detection, target detection items corresponding to the surface images of the objects to be detected are required to be acquired first, and a plurality of temporary quality detection specifications corresponding to the items are determined through a second mapping relation. These temporary quality inspection specifications serve as reference standards during subsequent image matching processes, helping computer equipment to more accurately identify and evaluate defects on the surface of an object.
Step S20 involves determining a semantic commonality measure between the surface image of the object to be inspected and the first reference object defect image. The measurement result reflects the similarity or commonality of the two semantically, and is an important basis for matching subsequent images. In step S21, the computer apparatus needs to determine a first temporary reference object defect image corresponding to the temporary quality inspection specification based on the first mapping relation. The first mapping relationship is a correspondence relationship between the reference object defect image and the quality detection specification. Through this mapping relationship, the computer device is able to find reference object defect images associated with the temporary quality inspection specifications, which images are referred to as first temporary reference object defect images. These images serve as a basis for comparison in subsequent semantic commonality measures.
Next, step S22 requires the computer device to determine the semantic commonality measurement result between the object surface image to be detected and the first temporary reference object defect image. This metric may be obtained by comparing the distance, similarity score, or other measures of the two in the feature space. For example, a deep learning model (e.g., a convolutional neural network) may be used to extract feature vectors of an image, and a similarity score between the feature vectors may be calculated as the semantic commonality metric. The first temporary reference object defect image whose semantic commonality measurement satisfies the first setting requirement is determined as the first matching reference object defect image. This first matching reference object defect image has the highest semantic commonality with the object surface image to be detected and will therefore play an important role in the subsequent object surface image detection.
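Steps S21 and S22 can be sketched as a best-match search over the first temporary reference defect images. The feature vectors and the threshold standing in for the "first setting requirement" are invented for illustration.

```python
import numpy as np

# Sketch of steps S21-S22: among the first temporary reference defect
# images (found via the first mapping relation), pick the one with the
# highest similarity to the image to be detected, provided it clears the
# first setting requirement. Vectors and threshold are hypothetical.
def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

to_detect = np.array([1.0, 0.0, 0.5])
temporary_refs = {
    "tmp_ref_1": np.array([0.9, 0.1, 0.6]),
    "tmp_ref_2": np.array([0.0, 1.0, 0.0]),
}
first_requirement = 0.8  # hypothetical first setting requirement

scores = {name: cosine(to_detect, v) for name, v in temporary_refs.items()}
best = max(scores, key=scores.get)
first_match = best if scores[best] >= first_requirement else None
print(first_match)
```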
Based on this, the computer device can accurately determine the first reference object defect image matched with the surface image of the object to be detected, and provides powerful support for the subsequent detection process. The method not only improves the accuracy of detection, but also is helpful to improve the efficiency and reliability of the whole quality detection flow. Meanwhile, by introducing concepts such as mapping relation and semantic commonality measurement, the embodiment also provides a general solution for similar quality detection tasks.
As an embodiment, step S40, according to the target quality detection specification, detects, through the object surface image detection network, an object surface image to be detected, to obtain an object surface image type of the object surface image to be detected, which may specifically include:
step S40a: generating a first annotation proposal sample based on the object surface image to be detected and the target quality detection specification, wherein the first annotation proposal sample is used for instructing the object surface image detection network to generate the image positions corresponding to the object surface image type of the object surface image to be detected;
step S40b: and detecting the object surface image to be detected through an object surface image detection network based on the first annotation proposal sample to obtain the object surface image type and the corresponding image position of the object surface image to be detected.
In step S40a, the computer device generates a first annotation proposal sample based on the surface image of the object to be detected and the target quality detection specification. This sample provides guidance and reference for subsequent image detection. Specifically, the computer device analyzes the characteristics of the surface image of the object to be detected and, in combination with the requirements of the target quality detection specification, generates an annotation proposal sample. The purpose of this sample is to indicate to the object surface image detection network which locations of the image should be attended to when generating the object surface image type. Taking the surface image of an electronic product as an example, if the target quality detection specification focuses on scratch defects, the first annotation proposal sample may mark the areas of suspected scratches in the image so that the detection network can more accurately identify these areas.
In step S40b, the computer device detects an object surface image to be detected through the object surface image detection network based on the first annotation proposal sample. This detection network may be a deep learning model, such as a Convolutional Neural Network (CNN), which has been trained to identify and classify different types of object surface images. By inputting the image to be detected and the annotation suggestion sample, the detection network can more accurately locate the key region in the image and generate the corresponding object surface image type and the image position thereof. Taking the example of continuing the above image of the surface of the electronic product, the detection network may output an image with a bounding box, where the bounding box marks the location of the scratch and is accompanied by a scratch type tag.
Thus, by the cooperation of step S40a and step S40b, the computer apparatus is able to determine not only the object surface image type of the object surface image to be detected, but also accurately indicate the specific positions of these types in the image. This is of great significance for subsequent quality assessment, defect localization and other tasks. At the same time, this embodiment also demonstrates how advanced machine learning techniques can be combined with quality detection procedures to improve the accuracy and efficiency of the detection.
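The data flow of steps S40a and S40b can be sketched as below. The detector here is a stub standing in for the trained object surface image detection network; the region coordinates, labels, and function names are all hypothetical.

```python
# Sketch of steps S40a-S40b as a data flow: the annotation proposal sample
# couples the image with spec-driven regions of interest, and the detection
# step returns a type plus an image position per region. All data are
# invented; in practice the detector is the trained neural network.
def make_annotation_proposal(image_id, spec):
    # S40a: hypothetical proposal listing regions the spec says to inspect
    return {"image": image_id,
            "regions": [(10, 20, 50, 40)],   # (x, y, width, height)
            "spec": spec}

def detect_with_proposal(proposal):
    # S40b: stub detection returning type and position per proposed region
    return [{"type": "scratch", "bbox": region}
            for region in proposal["regions"]]

proposal = make_annotation_proposal("board_01", "scratch depth <= 0.5 mm")
results = detect_with_proposal(proposal)
print(results[0]["type"], results[0]["bbox"])
```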
In some embodiments, if the number of the object surface images to be detected is greater than the set critical number, step S40, according to the target quality detection specification, detects the object surface images to be detected through the object surface image detection network to obtain the object surface image type of the object surface images to be detected, may specifically include:
Step S401: generating a second annotation proposal sample based on the object surface image to be detected and the target quality detection specification, wherein the second annotation proposal sample is used for indicating an object surface image detection network positioning proposal area, and the proposal area is used for indicating the position of an object surface image area meeting the target quality detection specification in the object surface image to be detected;
Step S402: and detecting the object surface image to be detected through the object surface image detection network based on the second annotation proposal sample to obtain the object surface image type and the proposal area of the object surface image to be detected.
In some embodiments, the execution manner of step S40 may be different when the number of the surface images of the object to be detected exceeds a set threshold number. This is because a larger number of images require more efficient and accurate processing methods. In this case, in step S401, the computer device generates a second annotation proposal sample based on the surface image of the object to be detected and the target quality detection specification. Unlike the previous annotation proposal sample, the second annotation proposal sample herein is mainly used to instruct the object surface image detection network to locate a so-called "proposal area". These suggested regions are actually the locations of specific regions in the image that may meet the target quality detection specification. For example, if the target quality detection criteria is to find a knot defect on the surface of the electronic product, the suggested area may be a location in the image where a knot may occur.
To achieve this, the computer device may employ various image analysis and processing techniques, such as edge detection, texture analysis, etc., to identify regions in the image that may meet the target quality detection specification. These regions are then labeled to form a second labeled suggestion sample. In step S402, the computer apparatus detects an object surface image to be detected using the object surface image detection network. Unlike before, this detection is of particular concern to the suggested area determined in step S401. The detection network may perform more detailed scanning and analysis in these areas to find the exact object surface image type. Finally, the detection network will not only output the object surface image types of the image, but also indicate the specific locations of these types in the proposed area.
For example, assuming a large number of images of the surface of an electronic product to be inspected, the target quality inspection specification is to identify knots and scratches on the surface of the electronic product. In step S401, the computer device may initially determine an area that may contain knots and scratches by analyzing characteristics such as texture and color of the image, and generate a second annotation proposal sample. Then in step S402, the object surface image detection network focuses on these suggested areas and outputs the specific positions and types of knots and scratches in each area.
Through such a procedure, the computer device can maintain high efficiency and accuracy in processing a large number of images while ensuring that the detection result meets the requirements of the target quality detection specification. The method has high practicability and flexibility in practical application, and can be adjusted and optimized according to different detection requirements and image characteristics.
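The region-proposal idea of step S401 can be illustrated with a crude brightness threshold: pixels that might violate the specification are masked, and their bounding box becomes the suggested region. A real system would use edge or texture analysis as described above; the synthetic image and threshold here are for illustration only.

```python
import numpy as np

# Sketch of step S401's region proposal on a synthetic 8x8 image: threshold
# the brightness, then take the bounding box of the flagged pixels as the
# suggested region handed to the detection network in step S402.
image = np.zeros((8, 8))
image[2:5, 3:6] = 1.0            # a bright patch standing in for a defect

mask = image > 0.5
ys, xs = np.nonzero(mask)
suggested_region = (int(xs.min()), int(ys.min()),
                    int(xs.max()), int(ys.max()))  # (x0, y0, x1, y1)
print(suggested_region)
```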
The embodiment of the present application does not particularly limit the manner in which the object surface image detection network identifies the type of the object surface image to be detected according to the target quality detection specification and the second matching reference object defect image; the case where there are a plurality of second matching reference object defect images is taken as an example. Since the quality detection specifications represented by the second reference object defect images may differ across detection items, the present application may construct in advance a mapping relationship between second reference object defect images and detection items, that is, a third mapping relationship. Based on the third mapping relationship, the candidate detection items corresponding to the plurality of second matching reference object defect images can be obtained, yielding a plurality of candidate detection items; the target detection item corresponding to the target quality detection specification is then determined according to the second mapping relationship constructed between quality detection specifications and detection items.
Because the object surface image to be detected is highly likely to be applicable to the target quality detection specification, its detection item is highly likely to be the target detection item corresponding to that specification. The detection items identical to the target detection item can therefore be identified among the plurality of candidate detection items, so as to determine the target matching reference object defect images among the plurality of second matching reference object defect images; the candidate detection item corresponding to each target matching reference object defect image is identical to the target detection item. The object surface image to be detected is then detected through the object surface image detection network according to the target quality detection specification and the target matching reference object defect image, obtaining the object surface image type of the object surface image to be detected.
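The mapping-based filtering described above can be sketched with plain dictionaries standing in for the second and third mapping relationships. All identifiers below are illustrative, not from the patent:

```python
# Third mapping relationship: second reference defect image id -> detection item.
third_mapping = {"ref_a": "surface_knot", "ref_b": "scratch", "ref_c": "surface_knot"}
# Second mapping relationship: quality detection specification -> detection item.
second_mapping = {"spec_knot": "surface_knot", "spec_scratch": "scratch"}

def target_matching_refs(second_matching_refs, target_spec):
    """Keep only the second matching reference defect images whose
    candidate detection item equals the target detection item."""
    target_item = second_mapping[target_spec]        # target detection item
    return [ref for ref in second_matching_refs
            if third_mapping[ref] == target_item]    # target matching refs

print(target_matching_refs(["ref_a", "ref_b", "ref_c"], "spec_knot"))
# → ['ref_a', 'ref_c']
```

The surviving images (`ref_a`, `ref_c` here) are the "target matching reference object defect images" that share the detection item of the target quality detection specification.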
In this way, the object surface image detection network can not only learn from the second reference object defect image on the basis of the target quality detection specification, improving its recognition of object surface image types it previously identified poorly, but can also treat the second matching reference object defect images belonging to the same detection item, namely the target matching reference object defect images, as a refinement of the target quality detection specification, thereby further improving accuracy under that detection item.
Optionally, in order to increase the generalization of the object surface image detection network and improve its accuracy in identifying object surface image types, the quality detection specifications may be continuously expanded, with the numbers of quality detection specifications, first reference object defect images, and second reference object defect images growing accordingly. Since the expressive power of a reference characterization vector has an upper limit, a single vector cannot capture the content of all reference object defect images. The embodiment of the present application therefore expresses each reference object defect image as its own reference characterization vector; that is, the feature information of the first reference object defect image and/or the second reference object defect image is mined in advance. For example, a reference object defect image is obtained (either a first or a second reference object defect image), its feature information is mined to obtain a reference characterization vector, and the reference characterization vector is saved. Correspondingly, the semantic commonality measurement result is determined by mining the feature information of the object surface image to be detected to obtain a characterization vector to be detected, and then determining the semantic commonality measurement result between the characterization vector to be detected and the reference characterization vector, which facilitates subsequent computation.
For example, for the first reference object defect image: after it is obtained, its feature information is mined to obtain a first reference characterization vector, which is stored so that it can be applied directly when the semantic commonality measurement result is needed. That is, after the object surface image to be detected is obtained, its feature information is mined to obtain the characterization vector to be detected, and the semantic commonality measurement result between the characterization vector to be detected and the first reference characterization vector is determined. The second reference object defect image is handled in the same way: after it is obtained, its feature information is mined to obtain a second reference characterization vector, which is stored; after the object surface image to be detected is obtained, its feature information is mined to obtain the characterization vector to be detected, and the semantic commonality measurement result between the characterization vector to be detected and the second reference characterization vector is determined.
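The caching of reference characterization vectors and the semantic commonality measurement can be sketched as follows. The patent does not fix a particular feature miner or similarity measure, so this sketch assumes a toy histogram feature and cosine similarity; the dictionary cache and all names are illustrative:

```python
import numpy as np

vector_store = {}  # hypothetical cache of precomputed reference vectors

def mine_features(image):
    """Toy stand-in for the feature-mining network: an L2-normalized
    intensity histogram serves as the characterization vector."""
    hist, _ = np.histogram(image, bins=8, range=(0, 256))
    v = hist.astype(float)
    return v / (np.linalg.norm(v) + 1e-12)

def cache_reference(ref_id, ref_image):
    """Mine and save a reference characterization vector in advance."""
    vector_store[ref_id] = mine_features(ref_image)

def commonality(query_image, ref_id):
    """Cosine similarity as one possible semantic commonality measurement
    between the characterization vector to be detected and a cached one."""
    return float(mine_features(query_image) @ vector_store[ref_id])

ref = np.full((4, 4), 40)            # reference defect image
cache_reference("ref_knot", ref)
same = commonality(np.full((4, 4), 45), "ref_knot")    # similar intensity
diff = commonality(np.full((4, 4), 200), "ref_knot")   # dissimilar intensity
print(same > diff)  # → True
```

Because the reference vectors are computed once and stored, only the image to be detected needs mining at query time, which is exactly the efficiency gain the paragraph above describes.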
In one embodiment, if the reference object defect image is a second reference object defect image and the reference characterization vector is a second reference characterization vector, then mining feature information of the reference object defect image to obtain the reference characterization vector, including: grouping a plurality of second reference object defect images to obtain a plurality of reference object defect image sets, wherein the second reference object defect images included in each reference object defect image set represent the same quality detection specification; and excavating characteristic information of each reference object defect image set to obtain a plurality of second reference characterization vectors.
In this embodiment, the plurality of second reference object defect images are grouped (i.e., clustered) with the purpose of merging images having similar features, or following the same quality detection specification, into the same group, forming a plurality of reference object defect image sets. The images within each set are highly consistent visually or in terms of quality detection criteria. For example, in an electronic product quality inspection scenario, all images showing a knot defect may be grouped into one set, while images showing a scratch defect may be grouped into another. After clustering is completed, the computer can mine feature information for each reference object defect image set. This step typically involves a combination of image processing and machine learning techniques to extract key features of the image set, such as color, texture, and shape. These features are then encoded into a digital form, forming a second reference characterization vector. Each characterization vector is a high-level abstraction and generalization of the features of the corresponding image set, and these vectors form the basis for subsequent quality detection and analysis. For example, if a reference object defect image set includes a series of images showing knots on the surface of an electronic product, then by feature mining the computer may extract features common to those images, such as shade of color and roughness of texture, and convert them into a specific second reference characterization vector. This vector will serve as an important reference in the subsequent detection process to determine whether a new image contains a similar knot defect. By combining clustering and feature mining, key feature information is effectively extracted from a large number of second reference object defect images, providing strong support for subsequent automatic quality detection.
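The grouping-then-mining step can be sketched minimally as grouping images by the specification they represent and taking one mean feature vector per set. The two-number feature (mean intensity, variance) is a deliberately tiny stand-in for real feature mining; all names are illustrative:

```python
import numpy as np

def group_and_characterize(defect_images, spec_labels):
    """Group second reference defect images by the quality detection
    specification they represent, then mine one second reference
    characterization vector (here: the mean feature vector) per set."""
    def features(img):  # toy feature miner: mean intensity and variance
        return np.array([img.mean(), img.var()])
    sets = {}
    for img, label in zip(defect_images, spec_labels):
        sets.setdefault(label, []).append(features(img))
    return {label: np.mean(vecs, axis=0) for label, vecs in sets.items()}

imgs = [np.full((2, 2), v) for v in (10.0, 12.0, 200.0)]
vecs = group_and_characterize(imgs, ["knot", "knot", "scratch"])
print(sorted(vecs))     # → ['knot', 'scratch']
print(vecs["knot"][0])  # → 11.0
```

Each resulting vector summarizes one reference object defect image set, so a new image only needs to be compared against one vector per defect type rather than against every stored image.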
The training method of the object surface image detection network is introduced as follows, and specifically comprises the following steps:
Step S1: an object surface image sample and a quality detection standard sample corresponding to the object surface image sample are obtained, wherein the object surface image sample comprises a priori mark for indicating the type of the object surface image corresponding to the object surface image sample.
Step S1 involves acquiring critical training data, namely an object surface image sample and a quality detection specification sample corresponding thereto. These data not only provide the basis for training, but also ensure that the final trained network can accurately identify the image type of the object surface according to established quality detection criteria.
In particular, the computer device collects a large number of object surface image samples from various sources. These samples may come from real-time capture on a production line, historically archived image data, or synthetic images generated specifically for this training purpose. Importantly, each image sample must be accompanied by an a priori mark that explicitly indicates the type of object surface image it corresponds to. For example, in the electronics processing industry, a priori marks may include "no defects", "knots", "scratches", "color irregularities", etc., which are defined based on quality detection criteria recognized by the electronics industry.
At the same time, the computer device also needs to acquire quality detection specification samples corresponding to the image samples. These canonical examples may be text descriptions, numerical parameters, schematic images, etc., which together define what object surface images are considered to be quality compliant or non-compliant. During the training process, these canonical examples will be used to instruct the network to learn how to identify different types of object surface images according to established criteria.
For example, if an image sample reveals a surface of an electronic product with significant knots, then its a priori signature may be "knots". The corresponding quality inspection standard examples may detail the size, shape, color, etc. characteristics of the knots and their acceptable level on the surface of the electronic product. This information will be encoded in the form of numbers or vectors so that the neural network can process and learn.
Step S1 provides a comprehensive, accurate and marked clear data set for training of an object surface image detection network. Through careful preparation in this step, the trained network can be ensured to have strong generalization capability and high accuracy, so that the image types of various object surfaces can be reliably identified in practical application.
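One possible in-memory shape for a single training record from step S1 is shown below. The field names and the spec-vector encoding are illustrative assumptions, not taken from the patent:

```python
import numpy as np

# Hypothetical structure of one training record: an object surface image
# sample, its a priori mark, and the corresponding quality detection
# specification sample encoded as a numeric vector.
sample = {
    "image": np.zeros((32, 32)),               # object surface image sample
    "prior_mark": "knots",                     # a priori mark (class label)
    "spec_vector": np.array([3.0, 1.5, 0.2]),  # e.g. max size / shape / contrast
}

label_set = ["no defects", "knots", "scratches", "color irregularities"]
print(sample["prior_mark"] in label_set)  # → True
```

Encoding the quality detection specification numerically, as the paragraph above notes, is what lets the neural network process and learn from it alongside the image pixels.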
Step S2: and fusing the object surface image sample, the quality detection standard sample and the guide variable to obtain object surface image input data.
Step S2 is responsible for fusing various information sources into a unified input for the neural network to learn in the training of the object surface image detection network.
Specifically, the computer device fuses the object surface image sample, the quality detection specification sample, and the guide variable. Object surface image samples provide visual representations of target data that the network needs to learn, including various types of surface defects or features. The quality detection standard sample can be an image or other standard description which indicates the detection standard, and provides a basis for judging the image quality for the network. The guiding variable is equivalent to a template, and guides the network how to generate the output conforming to the specific format and structure.
In the fusion process, the computer equipment can adopt image processing technology, such as image superposition, feature fusion or feature extraction based on deep learning, and the like. For example, a Convolutional Neural Network (CNN) may be used to extract features of an object surface image sample and compare and combine these features with standard features in a quality detection specification sample. Meanwhile, the guiding variable can be subjected to pixel-by-pixel operation or feature space transformation with the object surface image sample so as to ensure that the output image not only retains original information, but also meets specific output requirements. As a specific example, assume that the object surface image sample is a photograph of the surface of an electronic product with scratches and the quality inspection specification sample is a text or image standard describing the length and width limitations of the scratches. The guiding variable may be a binary image that marks the scratch area (i.e., the scratch area is white and the rest is black). At the time of fusion, the computer device first extracts scratch features, such as length, width, color, etc., in the surface photograph of the electronic product using CNN. Then, the characteristics are compared with standard characteristics in a quality detection standard sample, so that the extracted scratch characteristics are ensured to meet the standard requirements. Finally, the network generates an image which contains the original scratch information and meets the requirement of the output format through the guidance of the guiding variable, and the image is used as the input of the next training. And step S2, rich and accurate learning materials are provided for the object surface image detection network by fusing various information sources, so that the network is helped to better understand and simulate the quality detection process.
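The fusion of step S2 can be sketched minimally as channel-wise stacking of the three information sources. This is one possible realization under stated assumptions (a scalar-coded specification and a binary guide mask); real systems might instead use learned feature fusion, and all shapes and names here are illustrative:

```python
import numpy as np

def fuse_inputs(image, spec_code, guide_mask):
    """Fuse an object surface image sample, a quality detection
    specification sample (encoded as a scalar code), and a guide
    variable (a binary mask) into one multi-channel input tensor."""
    h, w = image.shape
    spec_plane = np.full((h, w), spec_code, dtype=float)  # broadcast the spec
    return np.stack([image, spec_plane, guide_mask], axis=0)  # (3, h, w)

img = np.ones((4, 4))                            # toy surface photo
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0    # marks the scratch area
x = fuse_inputs(img, 0.5, mask)
print(x.shape)  # → (3, 4, 4)
```

Stacking keeps every source recoverable in the fused tensor: channel 0 retains the original image, channel 1 carries the specification, and channel 2 carries the guide, matching the requirement that the output "not only retains original information, but also meets specific output requirements".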
Step S3: and detecting the object surface image sample through an initial object surface image detection network based on the object surface image input data to obtain the predicted object surface image type of the object surface image sample.
In step S3, the computer device uses the already constructed initial object surface image detection network, which may be a deep learning model such as a Convolutional Neural Network (CNN), a deeper architecture such as a residual network (ResNet), or a target detection algorithm such as YOLO (You Only Look Once). These models are designed to receive an image as input and extract features from it through a series of convolution, pooling, and activation operations. Specifically, the computer device passes the object surface image input data generated in step S2 to the initial object surface image detection network. The network processes these images layer by layer, starting from low-level pixel features and progressively extracting higher-level, more abstract feature representations. These features may include edges, textures, color distributions, etc., which together characterize the object surfaces in the image.
At the last layers of the network, these features are combined and converted into a prediction output, i.e. the type of object surface image. This type may be one of several predefined discrete categories, such as "no defect", "scratch", "stain", etc., or may be determined based on the position of the feature vector in a continuous space.
To illustrate this process more specifically, it may be assumed that the initial object surface image detection network is a CNN-based classifier. When an image of the surface of an electronic product is input, the CNN extracts edge and texture features in the image through the convolution layer, and then reduces the dimension and complexity of the features through the pooling layer. These features are then passed to the full connection layer, which ultimately outputs a probability distribution representing the likelihood that the image belongs to each of the predefined categories.
It should be noted that the initial object surface image detection network used in step S3 is not optimized at the beginning of training, and its prediction result may not be accurate. However, the purpose of this step is not to obtain perfect predictions, but to provide a starting point for subsequent network parameter corrections. By comparing the differences between the predicted results and the true labels of the network, a loss function (e.g., cross entropy loss) can be calculated, thereby guiding the training and optimization process of the network.
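The forward pass of steps S3 (conv → activation → flatten → fully connected → probability distribution) can be sketched in miniature. This is a toy, randomly initialized stand-in for the CNN classifier described above, not the patent's actual network; it only demonstrates that the output is a valid probability distribution over the predefined categories:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(x, k):
    """Valid 2-D correlation of image x with a single kernel k."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(image, kernel, w_fc):
    feat = np.maximum(conv2d(image, kernel), 0)  # convolution + ReLU
    flat = feat.reshape(-1)                       # flatten to a vector
    return softmax(w_fc @ flat)                   # class probabilities

classes = ["no defect", "scratch", "stain"]
img = rng.random((6, 6))                 # toy object surface image
kernel = rng.standard_normal((3, 3))     # untrained conv filter
w_fc = rng.standard_normal((3, 16))      # untrained fully connected layer
probs = predict(img, kernel, w_fc)
print(np.isclose(probs.sum(), 1.0))  # → True
```

As the paragraph above notes, these untrained predictions are not expected to be accurate; they exist to provide the starting point whose error drives the corrections in step S4.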
Step S4: and correcting the neural network parameters corresponding to the guide variables based on the errors between the predicted object surface image type and the prior marks of the object surface image samples to obtain an object surface image detection network.
Specifically, the computer device first calculates the error between the predicted object surface image type obtained in step S3 and the actual a priori mark. This error can be measured in a number of ways, such as using a cross entropy loss function to compute the difference between the predicted probability distribution and the actual marker distribution, or using a mean square error to measure the deviation between predicted and actual values. The magnitude of the error reflects the accuracy of the network's current prediction: the larger the error, the worse the network's predictive capability and the more optimization it needs. Once the error is calculated, the computer device uses this error information to correct the neural network parameters corresponding to the guide variable. This process is typically implemented by a back-propagation algorithm, which propagates the error information back layer by layer according to its size and direction and adjusts the parameters of each layer accordingly. The adjustment may use gradient descent, stochastic gradient descent, Adam, or other optimization algorithms, with the aim of gradually reducing the error over iterations so that the network's predictions come closer to the actual marks.
For example, if the initial object surface image detection network erroneously classifies an electronic product surface image with scratches as "defect-free" when predicting it, then in step S4, the computer calculates the error of this erroneous classification and adjusts the parameters of the network by a back-propagation algorithm. The result of the adjustment may be to enable the network to more accurately classify similar scratched images as "scratches" the next time they are encountered. It should be noted that the parameter correction in step S4 is an iterative process, and needs to be repeated multiple times until the predicted performance of the network reaches a certain requirement or no longer has a significant improvement. Each iteration will fine tune the network parameters according to the current error information, thereby gradually optimizing the performance of the network.
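The cross-entropy loss and one correction step of step S4 can be shown concretely. The sketch below uses the known fact that for a softmax classifier the gradient of cross-entropy with respect to the logits is (predicted probabilities − one-hot label); the class order and learning rate are illustrative:

```python
import numpy as np

def cross_entropy(probs, true_idx):
    """Cross-entropy loss against a one-hot a priori mark."""
    return -np.log(probs[true_idx] + 1e-12)

# The network wrongly favors "no defect" (index 0) for a scratched image.
probs = np.array([0.7, 0.2, 0.1])  # over ["no defect", "scratch", "stain"]
true_idx = 1                        # a priori mark: "scratch"
onehot = np.eye(3)[true_idx]

# One gradient-descent correction applied at the logit level.
logits = np.log(probs)
logit_grad = probs - onehot         # d(cross-entropy)/d(logits) for softmax
new_logits = logits - 1.0 * logit_grad
new_probs = np.exp(new_logits) / np.exp(new_logits).sum()
print(new_probs[1] > probs[1])  # → True
```

A single step already shifts probability mass toward "scratch"; as the paragraph above notes, the real procedure iterates this correction many times until the prediction performance stops improving.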
In the prior art, the network parameters of an object surface image detection network with auto-correlation semantic understanding performance are usually of a large magnitude, which entails a high training cost. To reduce the number of neural network parameters that must be adjusted and thereby increase training speed, the embodiment of the present application optimizes only part of the parameters of the object surface image detection network; in other words, it fine-tunes a pre-trained network (the object surface image detection network). Specifically: an object surface image sample and a quality detection standard sample corresponding to the object surface image sample are acquired; the object surface image sample, the quality detection standard sample, and the guide variable are combined to obtain object surface image input data; the object surface image sample is detected through an initial object surface image detection network based on the object surface image input data to obtain the predicted object surface image type of the object surface image sample; and the neural network parameters corresponding to the guide variable are corrected based on the error between the predicted object surface image type and the prior mark of the object surface image sample to obtain the object surface image detection network.
The initial object surface image detection network is a neural network whose parameters still need to be debugged, and the object surface image detection network is the neural network obtained after that debugging is completed. The object surface image samples are object surface images used to debug the initial object surface image detection network; each object surface image sample carries a priori mark indicating the type of object surface image it corresponds to, together with a corresponding quality detection standard sample, where the quality detection standard sample is the quality detection specification applicable to that object surface image sample. A guiding variable is then constructed. As mentioned above, the guiding variable is used to guide the initial object surface image detection network to generate output conforming to a specific format and structure, so that the corresponding network parameters can be optimized in a directed manner. The guiding variable can draw on the concept of prefix tuning: in a specific implementation, the guiding variable may be a feature map (or feature vector), and a learnable parameter or module is introduced in the training stage of the initial object surface image detection network to simulate the effect of a prefix. One possible application is to introduce a learnable parameter in the feature extraction stage of the initial object surface image detection network. In particular, a small neural network or convolution layer may be used to generate a task-dependent feature vector, which is then fused with the intermediate-layer features of the initial object surface image detection network. In this way, the feature vector acts like a prefix, directing the model to focus on task-relevant information. During training, the parameters of this feature vector are optimized by a back-propagation algorithm to better adapt to downstream tasks.
Another possible application is to introduce a learnable parameter at the output stage of the initial object surface image detection network. In particular, a full join layer or convolution layer may be added after the last convolution layer of the model to generate task-dependent outputs. The weights of this full-join or convolutional layer can be regarded as prefix-like parameters, which can be optimized by training to enable the model to adapt better to downstream tasks. Unlike the former method, this method introduces a learnable parameter directly at the output stage of the model, and can control the output of the model more directly.
Therefore, the object surface image sample, the quality detection standard sample and the guide variable are fused, the obtained object surface image input data is loaded to an initial object surface image detection network, and the initial object surface image detection network can be an improved convolutional neural network, for example, parameters corresponding to the guide variable are added to a feature extraction layer of the convolutional neural network. And correcting partial parameters (parameters corresponding to the guiding variables) of the initial object surface image detection network. According to the input data of the object surface image, outputting the type of the predicted object surface image through an initial object surface image detection network, and correcting the neural network parameters corresponding to the guiding variables according to the type of the predicted object surface image and the errors of the corresponding prior marks to obtain the object surface image detection network.
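The prefix-tuning-style scheme described above, in which the pre-trained weights stay frozen and only the small guide variable is optimized, can be sketched as follows. The frozen extractor, loss, and learning rate are illustrative assumptions; only the principle (gradient updates touch the guide vector alone) mirrors the text:

```python
import numpy as np

rng = np.random.default_rng(2)

# Frozen pre-trained feature extractor: its weights are NOT updated.
W_frozen = rng.standard_normal((8, 8))
def extract(x):
    return np.tanh(W_frozen @ x)

# Learnable guide variable fused with the intermediate features;
# only this small vector is optimized, mimicking prefix tuning.
guide = np.zeros(8)

def forward(x):
    return extract(x) + guide  # feature-level fusion with the guide

x = rng.random(8)
target = np.ones(8)            # toy task-specific target output
for _ in range(200):           # optimize the guide variable only
    err = forward(x) - target
    guide -= 0.1 * err         # gradient of 0.5*||err||^2 w.r.t. guide

loss = float(np.sum((forward(x) - target) ** 2))
print(loss < 1e-6)  # → True
```

The task is learned to near-zero loss without touching `W_frozen`, which is the training-cost saving the embodiment claims: only the parameters corresponding to the guide variable are corrected in step S4.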
Of course, in other possible implementations, the training samples may be obtained without adding the guiding variables, that is, the object surface image input data is data only including the fused object surface image samples and the quality detection standard samples, and the network is debugged by loading the object surface image input data into the initial object surface image detection network.
As an embodiment, step S2, fusing the object surface image sample, the quality detection standard sample, and the guiding variable to obtain object surface image input data, may specifically include:
step S21: combining the object surface image sample, the quality detection standard sample and the guide variable to obtain a combined object surface image;
step S22: obtaining object surface image input data based on the combined object surface image and a second reference object defect image, the second reference object defect image being an object surface image that has produced a false detection of the object surface image type.
In step S21, the computer apparatus combines the object surface image sample, the quality detection specification sample, and the guide variable to obtain a combined object surface image. The combination here is not a simple image superposition, but rather the three types of image information are effectively fused together by specific image processing techniques, such as image stitching, feature fusion, etc. The object surface image sample provides original object surface image data, the quality detection standard sample provides standard or standard for the object surface quality in the image, and the guiding variable plays a role of a template or guide to help the network to better understand and identify key information in the image.
In particular, the computer device may first pre-process the object surface image samples, such as resizing the image, normalizing pixel values, etc., to match other sources of image information. Then, standard or normative information in the quality detection normative sample is merged into the object surface image sample in the modes of image marking, feature extraction and the like. Finally, the computer equipment can further adjust and optimize the fused image through the guidance of the guiding variable, so that the generated surface image of the combined object not only contains the information of the original image, but also meets the requirements of quality detection standards.
Next, step S22 obtains object surface image input data based on the combined object surface image and the second reference object defect image. The second reference object defect image refers to those object surface images that are prone to false detection during the web training process. By introducing these challenging images, step S22 aims to enhance the robustness and generalization capability of the network. In this step, the computer device may transform the composite object surface image using image enhancement techniques, such as rotation, scaling, translation, etc., to simulate various conditions that may occur during actual inspection. At the same time, it will also incorporate the features or information of the second reference object defect image into the composite object surface image in some way, for example by means of feature fusion, image blending, etc. The input data of the object surface image obtained in this way not only contains the information of the original image and the requirements of quality detection standards, but also integrates the characteristics or modes possibly causing false detection, thereby helping the network to learn and adapt to various complex detection scenes better. By effectively fusing multiple image information sources and introducing a challenging second reference object defect image, a more rich and diverse input data is provided for training of the object surface image detection network. The method is not only helpful for improving the performance and accuracy of the network, but also can better cope with various complex detection tasks in practical application.
It should be noted that, in the embodiment of the present application, if the above-mentioned object surface accuracy recognition method based on machine vision is implemented in the form of a software function module and sold or used as a separate product, it may also be stored in a computer readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application, or the part thereof contributing to the related art, may essentially be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, an optical disk, or other media capable of storing program code. Thus, embodiments of the application are not limited to any specific hardware, software, or firmware, or any combination of hardware, software, and firmware.
The embodiment of the application provides a computer device, which comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor realizes part or all of the steps in the method when executing the program.
Embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs some or all of the steps of the above-described method. The computer readable storage medium may be transitory or non-transitory.
Embodiments of the present application provide a computer program comprising computer readable code which, when run in a computer device, causes a processor in the computer device to perform some or all of the steps for carrying out the above method.
Embodiments of the present application provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program which, when read and executed by a computer, performs some or all of the steps of the above-described method. The computer program product may be realized in particular by means of hardware, software or a combination thereof. In some embodiments, the computer program product is embodied as a computer storage medium, and in other embodiments, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It should be noted here that: the above description of various embodiments is intended to emphasize the differences between the various embodiments, the same or similar features being referred to each other. The above description of apparatus, storage medium, computer program and computer program product embodiments is similar to that of method embodiments described above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus, the storage medium, the computer program and the computer program product of the present application, reference should be made to the description of the embodiments of the method of the present application.
Fig. 2 is a schematic diagram of a hardware entity of a computer device according to an embodiment of the present application, as shown in fig. 2, the hardware entity of the computer device 1000 includes: a processor 1001 and a memory 1002, wherein the memory 1002 stores a computer program executable on the processor 1001, the processor 1001 implementing the steps in the method of any of the embodiments described above when the program is executed.
The memory 1002 is configured to store instructions and applications executable by the processor 1001, and may also cache data to be processed or already processed by the modules in the processor 1001 and the computer device 1000 (e.g., image data, audio data, voice communication data, and video communication data); it may be implemented by a flash memory (FLASH) or a Random Access Memory (RAM).
The processor 1001 performs the steps of the machine vision-based object surface accuracy recognition method of any of the above embodiments when executing a program. The processor 1001 generally controls the overall operation of the computer device 1000.
Embodiments of the present application provide a computer storage medium storing one or more programs executable by one or more processors to implement the steps of the machine vision-based object surface accuracy recognition method of any of the embodiments above.
It should be noted here that: the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments described above, with similar benefits as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the apparatus of the present application, please refer to the description of the method embodiments of the present application. The processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It will be appreciated that the electronic device implementing the above processor function may be another device, and the embodiments of the present application are not specifically limited thereto.
The computer storage medium/memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); it may also be any of various terminals that include one or any combination of the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the steps/processes described above do not imply an order of execution; the execution order of the steps/processes should be determined by their functions and inherent logic, and the sequence numbers should not constitute any limitation on the implementation of the embodiments of the present application. The foregoing embodiment numbers of the present application are merely for description and do not represent the relative merits of the embodiments. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be completed by hardware associated with program instructions. The foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a Read-Only Memory (ROM), a magnetic disk, or an optical disc.
Alternatively, if the above-described integrated units of the present application are implemented in the form of software functional modules and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the related art, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc.
The foregoing is merely an embodiment of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application.

Claims (10)

1. A machine vision-based object surface accuracy recognition method, the method comprising:
Acquiring a surface image of an object to be detected;
Determining a semantic commonality measurement result between the object surface image to be detected and a first reference object defect image, and determining a first reference object defect image whose semantic commonality measurement result meets a first setting requirement as a first matching reference object defect image, wherein a first mapping relationship exists between the first reference object defect image and a quality detection specification, the quality detection specification is used for indicating characteristic information of an object surface image type corresponding to the object surface image, and the first reference object defect image is an object surface image that meets the quality detection specification with which it has the first mapping relationship;
Determining a target quality detection specification corresponding to the first matching reference object defect image based on the first mapping relation;
Detecting the object surface image to be detected through an object surface image detection network according to the target quality detection specification to obtain an object surface image type of the object surface image to be detected, wherein the object surface image detection network is a neural network with image front-back autocorrelation information understanding performance;
the method further comprises the steps of:
Determining a semantic commonality measurement result between the object surface image to be detected and a second reference object defect image, and determining a second reference object defect image whose semantic commonality measurement result meets a second setting requirement as a second matching reference object defect image, wherein the second reference object defect image is an object surface image that gives rise to false detection of the object surface image type;
Detecting the object surface image to be detected through an object surface image detection network according to the target quality detection specification to obtain an object surface image type of the object surface image to be detected, including:
Detecting the object surface image to be detected through the object surface image detection network according to the target quality detection specification and the second matching reference object defect image to obtain an object surface image type of the object surface image to be detected;
A second mapping relationship exists between the quality detection specification and a detection item, the detection item is used for indicating the object surface image type, and a third mapping relationship exists between the second reference object defect image and the detection item;
if the number of the second matching reference object defect images is a plurality of, detecting the object surface image to be detected through the object surface image detection network according to the target quality detection specification and the second matching reference object defect images to obtain an object surface image type of the object surface image to be detected, wherein the method comprises the following steps:
Determining a target detection item corresponding to the target quality detection specification based on the second mapping relation;
Determining candidate detection items respectively corresponding to a plurality of second matching reference object defect images based on the third mapping relation;
Determining a target matching reference object defect image from a plurality of second matching reference object defect images based on the target detection item and a plurality of candidate detection items, wherein the candidate detection items corresponding to the target matching reference object defect image are identical to the target detection item;
And detecting the object surface image to be detected through the object surface image detection network according to the target quality detection specification and the target matching reference object defect image to obtain the object surface image type of the object surface image to be detected.
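The matching flow of claim 1 above can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the cosine-similarity measure, the dictionary-based mapping relationship, and all function and variable names (`semantic_commonality`, `identify_surface_type`, `toy_detect`, the `spec_*` identifiers) are hypothetical stand-ins for the characterization vectors, mapping relationships, and detection network described in the claim.

```python
# Hedged sketch of the claim-1 matching flow; all names and data are
# illustrative stand-ins, not the patented implementation.

def semantic_commonality(vec_a, vec_b):
    """Cosine similarity between two characterization vectors."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    na = sum(a * a for a in vec_a) ** 0.5
    nb = sum(b * b for b in vec_b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def identify_surface_type(image_vec, first_refs, spec_of_ref, detect):
    # 1) measure semantic commonality against each first reference defect image
    best_ref = max(first_refs,
                   key=lambda r: semantic_commonality(image_vec, first_refs[r]))
    # 2) look up the target quality detection specification (first mapping)
    target_spec = spec_of_ref[best_ref]
    # 3) detect under that specification (stand-in for the neural network)
    return detect(image_vec, target_spec)

# toy data: two reference defect images with known specifications
first_refs = {"scratch_ref": [1.0, 0.0], "dent_ref": [0.0, 1.0]}
spec_of_ref = {"scratch_ref": "spec_scratch", "dent_ref": "spec_dent"}

def toy_detect(vec, spec):
    return spec.replace("spec_", "type_")

print(identify_surface_type([0.9, 0.1], first_refs, spec_of_ref, toy_detect))
```

In this sketch the first mapping relationship is a plain dictionary from reference image to quality detection specification; in practice it would be whatever lookup structure the system maintains.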
2. The method of claim 1, wherein a second mapping relationship exists between the quality detection specification and a detection item, the detection item being used to indicate an object surface image type, the method further comprising:
Acquiring a target detection item corresponding to the surface image of the object to be detected; the determining, based on the first mapping relationship, a target quality detection specification corresponding to the first matching reference object defect image includes:
Determining a plurality of temporary quality detection specifications corresponding to the first matching reference object defect image based on the first mapping relation;
and determining a target quality detection specification corresponding to the target detection item from a plurality of temporary quality detection specifications based on the second mapping relation.
3. The method of claim 1, wherein a second mapping relationship exists between the quality detection specification and a detection item, the detection item being used to indicate an object surface image type, the method further comprising:
acquiring a target detection item corresponding to the surface image of the object to be detected;
determining a plurality of temporary quality detection specifications corresponding to the target detection items based on the second mapping relation;
the determining a semantic commonality measurement result between the object surface image to be detected and the first reference object defect image, and determining the first reference object defect image corresponding to the semantic commonality measurement result meeting the first setting requirement as a first matching reference object defect image, includes:
Determining a first temporary reference object defect image corresponding to the temporary quality detection specification based on the first mapping relation;
Determining a semantic commonality measurement result between the object surface image to be detected and the first temporary reference object defect image, and determining the first temporary reference object defect image corresponding to the semantic commonality measurement result meeting the first setting requirement as a first matching reference object defect image.
4. The method according to claim 1, wherein the detecting the object surface image to be detected through the object surface image detection network according to the target quality detection specification, to obtain an object surface image type of the object surface image to be detected, includes:
Generating a first annotation proposal sample based on the object surface image to be detected and the target quality detection specification, wherein the first annotation proposal sample is used for instructing the object surface image detection network to obtain the object surface image type of the object surface image to be detected and the corresponding image position;
and detecting the object surface image to be detected through the object surface image detection network based on the first annotation proposal sample to obtain the object surface image type and the corresponding image position of the object surface image to be detected.
5. The method according to claim 1, wherein, if the number of object surface images to be detected is greater than a set critical number, the detecting the object surface image to be detected through the object surface image detection network according to the target quality detection specification to obtain an object surface image type of the object surface image to be detected comprises:
Generating a second annotation proposal sample based on the object surface image to be detected and the target quality detection specification, wherein the second annotation proposal sample is used for instructing the object surface image detection network to locate a proposal area, and the proposal area is used for indicating the position, in the object surface image to be detected, of an object surface image area meeting the target quality detection specification;
and detecting the object surface image to be detected through the object surface image detection network based on the second annotation proposal sample to obtain the object surface image type and the proposal area of the object surface image to be detected.
6. The method according to claim 1, wherein the method further comprises:
acquiring a reference object defect image, wherein the reference object defect image is the first reference object defect image or the second reference object defect image;
Mining feature information of the reference object defect image to obtain a reference characterization vector, and storing the reference characterization vector;
The semantic commonality measurement result is determined in the following manner:
mining feature information of the object surface image to be detected to obtain a characterization vector to be detected;
and determining a semantic commonality measurement result between the to-be-detected characterization vector and the reference characterization vector.
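Claim 6 above separates offline feature mining of reference images from online measurement: each reference characterization vector is mined once and stored, so that at detection time only the image under test is embedded. A minimal sketch, assuming a toy histogram feature miner and cosine similarity (both hypothetical stand-ins for the real feature extractor and semantic commonality measure):

```python
# Illustrative sketch of claim 6. The histogram "miner" is a hypothetical
# stand-in for the real feature-information mining step.

def mine_characterization_vector(pixels, bins=4):
    """Toy feature miner: normalized intensity histogram (0..255 input)."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels) or 1
    return [h / total for h in hist]

def commonality(u, v):
    """Cosine similarity between two characterization vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

# offline: mine and store reference characterization vectors once
reference_store = {
    "dark_defect_ref": mine_characterization_vector([10, 20, 30, 40]),
    "bright_defect_ref": mine_characterization_vector([200, 220, 240, 250]),
}

# online: mine the image to be detected and measure semantic commonality
probe = mine_characterization_vector([15, 25, 35, 45])
scores = {name: commonality(probe, vec) for name, vec in reference_store.items()}
best = max(scores, key=scores.get)
print(best)  # → dark_defect_ref
```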
7. The method of claim 6, wherein if the reference object defect image is the second reference object defect image and the reference characterization vector is a second reference characterization vector, mining feature information of the reference object defect image to obtain a reference characterization vector, comprising:
grouping the plurality of second reference object defect images to obtain a plurality of reference object defect image sets, wherein the second reference object defect images included in each reference object defect image set represent the same quality detection specification;
And mining feature information of each reference object defect image set to obtain a plurality of second reference characterization vectors.
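The grouping step of claim 7 can be illustrated as follows; the choice of the element-wise mean as the per-set second reference characterization vector is an assumption for illustration, not something the claim specifies:

```python
# Sketch of claim 7: second reference defect images are grouped so that
# each set represents one quality detection specification, and feature
# mining yields one second reference characterization vector per set
# (here: the element-wise mean of the set, a hypothetical choice).
from collections import defaultdict

def group_and_mine(second_refs):
    """second_refs: list of (spec_id, characterization_vector) pairs."""
    groups = defaultdict(list)
    for spec_id, vec in second_refs:
        groups[spec_id].append(vec)
    mined = {}
    for spec_id, vecs in groups.items():
        dim = len(vecs[0])
        mined[spec_id] = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    return mined

refs = [
    ("spec_a", [1.0, 0.0]),
    ("spec_a", [0.0, 1.0]),
    ("spec_b", [2.0, 2.0]),
]
print(group_and_mine(refs))
```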
8. The method according to any one of claims 1 to 7, wherein the training method of the object surface image detection network is as follows:
acquiring an object surface image sample and a quality detection specification sample corresponding to the object surface image sample, wherein the object surface image sample comprises a priori mark for indicating the object surface image type corresponding to the object surface image sample;
Fusing the object surface image sample, the quality detection specification sample and the guide variable to obtain object surface image input data;
Detecting the object surface image sample through an initial object surface image detection network based on the object surface image input data to obtain a predicted object surface image type of the object surface image sample;
And correcting the neural network parameters corresponding to the guide variables based on the errors between the type of the predicted object surface image and the prior marks of the object surface image sample to obtain the object surface image detection network.
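The training method of claim 8 resembles prompt/prefix tuning: the network backbone stays fixed, and only the neural network parameters behind the guide variable are corrected from the error between the predicted type and the priori mark. A toy numeric sketch under that reading (the linear scorer, fusion by concatenation, and all numbers are illustrative assumptions, not the patented training procedure):

```python
# Hedged sketch of the claim-8 training idea: fuse the image sample, the
# quality detection specification sample, and a learnable guide variable,
# then correct only the guide-variable parameters (backbone frozen).

def fuse(image_feat, spec_feat, guide):
    return image_feat + spec_feat + guide  # concatenation

def forward(x, frozen_w):
    # frozen linear scorer standing in for the detection network
    return sum(wi * xi for wi, xi in zip(frozen_w, x))

def train_guide(samples, spec_feat, guide, frozen_w, lr=0.1, epochs=200):
    g_dim = len(guide)
    for _ in range(epochs):
        for image_feat, prior_mark in samples:
            x = fuse(image_feat, spec_feat, guide)
            err = forward(x, frozen_w) - prior_mark
            # gradient of the squared error w.r.t. the guide slots only;
            # frozen_w itself is never updated
            offset = len(x) - g_dim
            for i in range(g_dim):
                guide[i] -= lr * 2 * err * frozen_w[offset + i]
    return guide

samples = [([1.0], 2.0), ([0.0], 1.0)]  # (image feature, priori mark)
spec_feat = [0.0]
frozen_w = [1.0, 0.0, 1.0]              # frozen backbone weights
guide = train_guide(samples, spec_feat, [0.0], frozen_w)
print(round(guide[0], 3))  # → 1.0
```

With these toy numbers the fused input is scored as image feature plus guide value, so the guide variable converges to the constant offset (1.0) that explains both priori marks while the backbone weights never change.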
9. The method of claim 8, wherein fusing the object surface image sample, the quality detection specification sample, and the guide variable to obtain object surface image input data comprises:
Combining the object surface image sample, the quality detection specification sample and the guide variable to obtain a combined object surface image;
And obtaining object surface image input data based on the combined object surface image and the second reference object defect image, wherein the second reference object defect image is an object surface image which generates false detection of the object surface image type.
10. A computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the processor implements the steps of the method of any of claims 1 to 9 when the computer program is executed.
CN202410260380.6A 2024-03-07 2024-03-07 Object surface precision identification method based on machine vision and related equipment Active CN117853826B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410260380.6A CN117853826B (en) 2024-03-07 2024-03-07 Object surface precision identification method based on machine vision and related equipment

Publications (2)

Publication Number Publication Date
CN117853826A CN117853826A (en) 2024-04-09
CN117853826B true CN117853826B (en) 2024-05-10

Family

ID=90534985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410260380.6A Active CN117853826B (en) 2024-03-07 2024-03-07 Object surface precision identification method based on machine vision and related equipment

Country Status (1)

Country Link
CN (1) CN117853826B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179251A (en) * 2019-12-30 2020-05-19 上海交通大学 Defect detection system and method based on twin neural network and by utilizing template comparison
CN112304960A (en) * 2020-12-30 2021-02-02 中国人民解放军国防科技大学 High-resolution image object surface defect detection method based on deep learning
CN113902940A (en) * 2021-07-26 2022-01-07 惠州学院 Neural network-based multi-class article visual identification method and metering equipment
CN116977257A (en) * 2023-03-06 2023-10-31 腾讯科技(深圳)有限公司 Defect detection method, device, electronic apparatus, storage medium, and program product
CN117173172A (en) * 2023-11-02 2023-12-05 深圳市富邦新材科技有限公司 Machine vision-based silica gel molding effect detection method and system


Similar Documents

Publication Publication Date Title
CN111179251B (en) Defect detection system and method based on twin neural network and by utilizing template comparison
CN107358596B (en) Vehicle loss assessment method and device based on image, electronic equipment and system
JP5546317B2 (en) Visual inspection device, visual inspection discriminator generation device, visual inspection discriminator generation method, and visual inspection discriminator generation computer program
US20190139212A1 (en) Inspection apparatus, data generation apparatus, data generation method, and data generation program
KR102103853B1 (en) Defect inspection device and defect inspection method
TW201419169A (en) Object discrimination device, object discrimination method, and program
JP2013167596A (en) Defect inspection device, defect inspection method, and program
CN111583180B (en) Image tampering identification method and device, computer equipment and storage medium
CN111242899B (en) Image-based flaw detection method and computer-readable storage medium
CN117173172B (en) Machine vision-based silica gel molding effect detection method and system
CN111325265B (en) Detection method and device for tampered image
CN114494780A (en) Semi-supervised industrial defect detection method and system based on feature comparison
CN111758117A (en) Inspection system, recognition system, and learning data generation device
CN115131596A (en) Defect classification device, method, and program
CN112171057A (en) Quality detection method and device based on laser welding and storage medium
CN116596875A (en) Wafer defect detection method and device, electronic equipment and storage medium
CN114299040A (en) Ceramic tile flaw detection method and device and electronic equipment
US11727052B2 (en) Inspection systems and methods including image retrieval module
JP2006292615A (en) Visual examination apparatus, visual inspection method, program for making computer function as visual inspection apparatus, and recording medium
CN117853826B (en) Object surface precision identification method based on machine vision and related equipment
KR20190119801A (en) Vehicle Headlight Alignment Calibration and Classification, Inspection of Vehicle Headlight Defects
JP2023104424A (en) Estimation device, estimation method, and program
CN111046878B (en) Data processing method and device, computer storage medium and computer
JP6175904B2 (en) Verification target extraction system, verification target extraction method, verification target extraction program
CN118096747B (en) Automatic PCBA (printed circuit board assembly) board detection method and system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant