CN114627092A - Defect detection method and device, electronic equipment and readable storage medium

Info

Publication number
CN114627092A
Authority
CN
China
Prior art keywords
image
detected
defect
area
contour
Prior art date
Legal status
Pending
Application number
CN202210291632.2A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee
Guangdong Lyric Robot Automation Co Ltd
Original Assignee
Guangdong Lyric Robot Intelligent Automation Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Lyric Robot Intelligent Automation Co Ltd
Priority to CN202210291632.2A
Publication of CN114627092A
Priority to PCT/CN2022/140036 (published as WO2023179122A1)
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/70: Determining position or orientation of objects or cameras
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a defect detection method, a defect detection device, an electronic device and a readable storage medium. The defect detection method comprises the following steps: extracting the contour of the object to be detected from the image to be processed to obtain a defect map of the object to be detected; extracting the defect contour from the defect map to obtain the defect area of the object to be detected; and comparing the defect area with a first area threshold, and determining that the object to be detected has a defect when the defect area is larger than the first area threshold. By performing contour extraction on the image of the object to be detected, the application obtains the body contour and the defect contour of the object to be detected; contour extraction is then further performed on the defect contour to obtain the contour area of the defect, and after the contour area of the defect is compared with the set area threshold, it can be determined whether the object to be detected has a defect. Defect detection of the target object is thereby achieved and the labor intensity of manual inspection is reduced; at the same time, unlike manual inspection, the detection is not affected by subjective emotion, so the detection accuracy is also improved.

Description

Defect detection method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of quality inspection, and in particular, to a defect detection method, apparatus, electronic device, and readable storage medium.
Background
With the development of intelligent and electronic technologies, the application of die-cutting materials has also developed rapidly. Given the rapid growth of die-cutting modules and the important role of die-cutting materials in many fields, inspection of die-cutting materials has become very important, and the accuracy requirements for such inspection are increasingly strict. However, die-cutting materials are currently inspected mainly by hand, which is strongly influenced by subjective factors.
Disclosure of Invention
In view of the above, an object of the embodiments of the present application is to provide a defect detection method, a defect detection apparatus, an electronic device and a readable storage medium, which can process the image to be processed so as to detect die-cut material.
In a first aspect, an embodiment of the present application provides a defect detection method, including: extracting the outline of the object to be detected from the image to be processed to obtain a defect map of the object to be detected; extracting a defect outline of the defect map to obtain the defect area of the object to be detected; and comparing the defect area with a first area threshold value, and determining that the object to be detected has defects when the defect area is larger than the first area threshold value.
In the implementation process, the contours of the body and of the defects of the object to be detected can be obtained by extracting contours from the image to be processed, from which the defect map is obtained. Contour extraction is then further performed on the defects in the defect map to obtain the contour area of each defect; the contour area of the defect is compared with the first area threshold, and when it is larger than the first area threshold it is determined that the object to be detected has a defect. Because the judgment is made by comparing the defect contour area with a set area threshold, large defects can be detected effectively on the one hand, while negligibly small defects can be ignored on the other hand, so defect identification is accurate and effective. Moreover, the whole defect identification process is performed on images of the object to be detected, so detection is automated, detection efficiency is improved, and the workload of workers is reduced.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation manner of the first aspect, where: the method for extracting the outline of the object to be detected from the image to be processed to obtain the defect map comprises the following steps: carrying out binarization processing on the image to be processed to obtain a binarized image; obtaining an effective contour of an object to be detected in the binary image, wherein the effective contour of the object to be detected is a contour which meets a preset condition in the contour of the object to be detected; extracting a target image from the effective contour; and performing convolution calculation on the target image according to a set algorithm to obtain a defect map of the object to be detected.
In the implementation process, binarizing the image to be processed yields a binarized image that still reflects the overall and local characteristics of the image, and multi-level pixel values are no longer involved, so the processing becomes simple. Meanwhile, conditions are set for the contours and only the contours meeting the conditions are selected as effective contours, which eliminates interfering contours and guarantees the accuracy of contour acquisition. The target image extracted from the effective contour then yields a more accurate defect map after the convolution calculation, which improves the accuracy of defect acquisition.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present application provides a second possible implementation manner of the first aspect, where: the acquiring of the effective contour of the object to be detected in the binary image comprises the following steps: extracting a contour set of an object to be detected in the binarized image, wherein the contour set of the object to be detected comprises one or more contours of the object to be detected; and rejecting the contour of the object to be detected which does not meet the preset condition in the contour set of the object to be detected to obtain the effective contour of the object to be detected.
In the implementation process, some contours that do not belong to the body of the object to be detected may also be extracted during actual contour extraction. By comparing the area of each contour in the image with the preset condition, a contour whose area is smaller than the preset value is regarded as an interfering contour (i.e., a contour that does not belong to the body of the object to be detected). Removing the interfering contours leaves the effective contours of the actual object to be detected, so that further processing can be carried out in a more targeted way, the objects to be processed are simplified, and working efficiency is improved.
With reference to the second possible implementation manner of the first aspect, an embodiment of the present application provides a third possible implementation manner of the first aspect, where the intercepting a target image according to an effective contour of the object to be detected includes: fitting the effective outline of the object to be detected into a first target graph; mapping the coordinates of the first target graph with the coordinates of the image to be processed to obtain a second target graph; and performing screenshot processing on the second target graph to obtain a target image.
In the implementation process, the effective contour is fitted and mapped onto the coordinates of the image to be processed, so that the fitted region and position points of the effective contour better match the actual position of the object to be detected; the target image therefore better matches the actual situation, and the accuracy of the target image is improved.
With reference to the third possible implementation manner of the first aspect, an embodiment of the present application provides a fourth possible implementation manner of the first aspect, where the obtaining of the effective contour of the object to be detected in the binarized image includes: extracting the contour of the binary image to obtain the overall contour of the object to be detected; acquiring a coarse area image of an image to be processed according to the overall contour of the object to be detected; and acquiring the effective contour of the object to be detected in the coarse area image of the image to be processed.
With reference to the fourth possible implementation manner of the first aspect, an embodiment of the present application provides a fifth possible implementation manner of the first aspect, where the binarized image includes a first target region binarized image, the overall contour of the object to be detected includes a first target region overall contour, the coarse region image of the image to be processed includes a coarse region image of the first target region, and the obtaining of the coarse region image of the image to be processed according to the overall contour of the object to be detected includes: carrying out contour extraction on the first target area binary image to obtain the whole contour of the first target area; fitting the overall outline of the first target area into a third target graph; expanding the third target graph according to a set rule to obtain a coarse area binary image of the first target area; and intercepting a coarse area image of the first target area according to the coarse area binary image of the first target area.
With reference to the fifth possible implementation manner of the first aspect, an embodiment of the present application provides a sixth possible implementation manner of the first aspect, where the binarized image includes a second target region binarized image, the overall contour of the object to be detected includes an overall contour of the second target region, the coarse region image of the image to be processed includes a coarse region image of the second target region, and the obtaining the coarse region image of the image to be processed according to the overall contour of the object to be detected includes: and carrying out XOR processing on the coarse area binary image of the first target area and the overall image of the object to be detected to obtain a second target area coarse area image.
With reference to the sixth possible implementation manner of the first aspect, an embodiment of the present application provides a seventh possible implementation manner of the first aspect, where the defect areas include one or more first defect areas located in a first target region of the image to be processed and one or more second defect areas located in a second target region of the image to be processed, and the comparing the defect area with an area threshold and, when the defect area is greater than the first area threshold, determining that the object to be detected has a defect includes: comparing one or more of the first defect areas with the first area threshold; comparing one or more of the second defect areas with the first area threshold; and if a first defect area is larger than the first area threshold or a second defect area is larger than the first area threshold, indicating that the object to be detected has a defect.
In the implementation process, the defect areas of different regions of the object to be detected are respectively compared with the area threshold, and when any defect area is larger than the area threshold, the object to be detected can be determined to have a defect. Through comparison of all aspects, comprehensive defect detection is realized, and the accuracy of defect detection is improved.
In a second aspect, an embodiment of the present application further provides a defect detection apparatus, including: a first contour extraction module, configured to extract the contour of the object to be detected from the image to be processed to obtain a defect map of the object to be detected; a second contour extraction module, configured to perform defect contour extraction on the defect map to obtain the defect area of the object to be detected; and a comparison module, configured to compare the defect area with a first area threshold and, when the defect area is larger than the first area threshold, determine that the object to be detected has a defect.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory storing machine-readable instructions executable by the processor, the machine-readable instructions, when executed by the processor, performing the steps of the method of the first aspect described above, or any possible implementation of the first aspect, when the electronic device is run.
In a fourth aspect, this application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to perform the steps of the defect detection method in the first aspect or any one of the possible implementations of the first aspect.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be regarded as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of a defect detection method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an outline of an image to be processed according to an embodiment of the present disclosure;
fig. 4 is a flowchart of step 201 in a defect detection method according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a partial composition of a contour of an image to be processed according to an embodiment of the present disclosure;
fig. 6 is a schematic functional block diagram of a defect detection apparatus according to an embodiment of the present disclosure.
Detailed Description
The technical solution in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not construed as indicating or implying relative importance.
At present, with the rapid progress of intelligent and industrial technologies, industries such as electronic equipment, electrical equipment, precision instruments and electronic communication have also entered a stage of rapid development. Die-cutting materials are widely used in these industries, gradually play an important role in each of them, and their market is developing rapidly.
Since die-cutting materials play a very important role in various industries, their quality affects the quality of the entire device; for this reason, inspection of die-cutting materials has also become important. However, inspection of die-cutting materials currently still relies on manual inspection, which is influenced by subjective emotion; human attention is limited, long working hours may lead to fatigue and reduced detection efficiency, and neither the accuracy nor the efficiency of detection can be guaranteed.
In the course of inspecting die-cut materials, the inventor of the present application found that: to detect whether a die-cut material has defects, an image of the die-cut material can be acquired, the defect contour and the body contour of the die-cut material can be extracted on the basis of that image, and whether the die-cut material has defects can be determined from the contour area. In view of this, the inventor proposes a defect detection method that processes the image of the object to be detected in order to detect it, replacing manual inspection, so that detection is more objective, more accurate and more efficient.
The detection method disclosed in the embodiments of the present application can be used for, but is not limited to, detecting die-cutting materials, chips, electronic screens and glass materials. By setting the thresholds and the detection logic, the detection apparatus can achieve defect detection for a variety of objects to be detected.
To facilitate understanding of the present embodiment, an electronic device for performing the defect detection method disclosed in the embodiments of the present application will be described in detail first.
Fig. 1 is a block diagram of an electronic device. The electronic device 100 may include a memory 111, a processor 113 and an input/output unit 115. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is merely exemplary and does not limit the structure of the electronic device 100. For example, the electronic device 100 may also include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
The aforementioned components of the memory 111, the processor 113 and the input/output unit 115 are directly or indirectly electrically connected to each other to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The processor 113 is used to execute the executable modules stored in the memory.
The memory 111 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 111 is configured to store a program, and the processor 113 executes the program after receiving an execution instruction; the method performed by the electronic device 100, as defined by the processes disclosed in any embodiment of the present application, may be applied to the processor 113 or implemented by the processor 113.
The memory 111 may be used to store information such as the image to be processed, the first area threshold, the second area threshold, and the like. The memory 111 may be used for temporarily storing intermediate information such as a target image, a binarized image, and a coarse area image.
The processor 113 may be an integrated circuit chip having signal processing capability. The processor 113 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The processor 113 is configured to perform a series of operations such as contour extraction of an image to be processed, binarization processing of the image to be processed, contour screening, comparison between a contour area and an area threshold, fitting of an effective contour, and mapping of a target graph.
The input/output unit 115 is used for a user to provide input data. The input/output unit 115 may be, but is not limited to, a mouse, a keyboard, an interactive interface, and the like.
The input/output unit 115 described above can be used to input data such as the first area threshold value, the second area threshold value, and the like. The data may be set according to the different objects to be detected, and when the objects to be detected are changed, the first area threshold and the preset condition may be modified correspondingly through the input/output unit 115.
The electronic device 100 in this embodiment may be configured to perform each step in each method provided in this embodiment. The implementation of the defect detection method is described in detail below by means of several embodiments.
Please refer to fig. 2, which is a flowchart illustrating a defect detection method according to an embodiment of the present disclosure. The specific process shown in fig. 2 will be described in detail below.
Step 201, extracting the outline of the object to be detected from the image to be processed to obtain a defect map of the object to be detected.
Optionally, the image to be processed is an acquired image of the object to be detected. The image to be processed may be obtained by an acquisition device.
Optionally, after contour extraction is performed on the image to be processed, the contour of the first target region, the contour of the second target region, the contour of the defect, the contour of irrelevant regions, and the like of the object to be detected may be obtained. An irrelevant contour region is a region other than the body of the object to be detected, for example the contour of the platform on which the object to be detected is placed, or the contour of impurities around the object to be detected.
For example, as shown in fig. 3, a dashed circle in the figure can be regarded as an irrelevant outline, a solid square is an outline of the body of the object to be detected, and a filled circle is an outline of a defect.
Optionally, before contour extraction of the object to be detected is performed on the image to be processed, the image of the object to be detected may be scaled to obtain the image to be processed. The scaling may use a bilinear interpolation algorithm, a nearest-neighbor interpolation algorithm, or the like. By scaling the image of the object to be detected, the image can be processed quickly with fewer resources.
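Purely as an illustration of this optional scaling step, a minimal sketch assuming OpenCV is used for image handling (the function and parameter names below are not from the patent):

    import cv2

    def scale_to_working_size(raw_image, scale=0.5, use_bilinear=True):
        # Shrink the captured image so later contour processing uses fewer resources.
        interpolation = cv2.INTER_LINEAR if use_bilinear else cv2.INTER_NEAREST
        height, width = raw_image.shape[:2]
        new_size = (int(width * scale), int(height * scale))
        return cv2.resize(raw_image, new_size, interpolation=interpolation)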
Step 202, extracting a defect outline of the defect map to obtain a defect area of the object to be detected.
Optionally, the defect map may include a defect contour and an unrelated contour, and the defect map may also include only a defect contour. If the defect map includes a defect contour and an extraneous contour, the defect contour and the extraneous contour need to be extracted. If the defect map only includes a defect contour, the defect contour needs to be extracted.
Alternatively, the area of the defect may be calculated according to the length and width of the defect outline, and the area of the defect may also be calculated according to the radius of the defect outline.
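As a hedged sketch of the two area calculations mentioned above (a hypothetical OpenCV helper, not the patent's own code):

    import math
    import cv2

    def defect_area_from_contour(defect_contour, use_radius=False):
        if use_radius:
            # Area from the radius of the minimum enclosing circle of the defect contour.
            _, radius = cv2.minEnclosingCircle(defect_contour)
            return math.pi * radius * radius
        # Area from the length and width of the bounding rectangle of the defect contour.
        _, _, width, height = cv2.boundingRect(defect_contour)
        return width * height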
And step 203, comparing the defect area with a first area threshold value, and determining that the object to be detected has defects when the defect area is larger than the first area threshold value.
Optionally, the first area threshold is a maximum allowable area value of the defect, and the first area threshold may be adjusted according to different objects to be detected.
Optionally, after the defect contour extraction is performed on the defect map, the length and the width of the defect can be directly obtained, the length is compared with a length threshold, the width is compared with a width threshold, and when one of the length and the width is greater than the corresponding threshold, it can be determined that the object to be detected has the defect.
In one possible implementation, as shown in fig. 4, step 201 includes steps 2011 to 2014.
And step 2011, performing binarization processing on the image to be processed to obtain a binarized image.
Binarizing an image means setting the gray value of each point on the image to 0 or 255, so that the whole image presents an obvious black-and-white effect; in other words, a gray image with many brightness levels is converted, by choosing a suitable threshold, into a binarized image that can still reflect the overall and local features of the image. After the binarized image is obtained, further processing of the image only depends on the positions of points whose pixel value is 0 or 255 and no longer involves multi-level pixel values, so the processing becomes simple.
Illustratively, assume that the current threshold is 188: if the value of the current point is greater than 188, the point is set to 255; if it is not greater than 188, the point is set to 0. Each pixel of the image to be processed is divided in this way, yielding the binarized image.
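A minimal sketch of this binarization, assuming OpenCV and the example threshold of 188 (the helper name is illustrative):

    import cv2

    def binarize(image_to_process, threshold=188):
        # Points above the threshold become 255, the rest become 0, giving a
        # black-and-white image that still reflects overall and local features.
        gray = cv2.cvtColor(image_to_process, cv2.COLOR_BGR2GRAY)
        _, binarized_image = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        return binarized_image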
Step 2012, an effective contour of the object to be detected in the binarized image is obtained.
The effective contour of the object to be detected is a contour which meets a preset condition in the contour of the object to be detected.
Optionally, the effective profile may include a profile of the body of the object to be inspected and a profile of the defect. The effective profile may also comprise only the profile of the body of the object to be detected. And eliminating the invalid contour by setting a preset condition so as to obtain the valid contour.
Illustratively, suppose the preset condition is that a contour with a contour area larger than 5 is an effective contour, and five contours A, B, C, D and E exist in the binarized image. Comparing the areas of the five contours with the preset value 5 gives the following result: the areas of contour A and contour E are less than 5, and the areas of contour B, contour C and contour D are greater than 5. Contour A and contour E can therefore be determined to be invalid contours, while contour B, contour C and contour D are effective contours. Contour A and contour E can then be removed, leaving the effective contours B, C and D.
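A sketch of this screening step, under the assumption that OpenCV 4 contours are used and the preset condition is a minimum contour area (the example value 5 above):

    import cv2

    def effective_contours(binarized_image, min_area=5):
        # Extract all contours, then keep only those whose area meets the preset condition.
        contours, _ = cv2.findContours(binarized_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        return [c for c in contours if cv2.contourArea(c) > min_area]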
Step 2013, extracting the target image from the effective contour.
Alternatively, the target image may be the processed overall image of the object to be detected, or one or more processed partial images of the object to be detected.
Alternatively, the target image may be extracted directly from the effective contour; or fitting the effective contour, and extracting an image from the fitted effective contour; the effective contour can also be mapped, and an image is extracted from the mapped effective contour.
Step 2014, performing convolution calculation on the target image according to a set algorithm to obtain a defect map of the object to be detected.
Alternatively, the convolution calculation of the target image according to the set algorithm may be implemented based on TensorFlow, PyTorch, or the like.
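The patent does not name the set algorithm; purely as an illustration of running a convolution over the target image in PyTorch, a placeholder single-layer sketch follows (a real implementation would use a trained defect-detection model):

    import torch
    import torch.nn as nn

    def defect_map_from_target(target_image_tensor):
        # target_image_tensor: float tensor of shape (1, channels, height, width).
        # One untrained 3x3 convolution stands in for whatever model the method uses.
        conv = nn.Conv2d(in_channels=target_image_tensor.shape[1], out_channels=1,
                         kernel_size=3, padding=1)
        with torch.no_grad():
            return conv(target_image_tensor)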
Alternatively, the defect map may be an overall defect map of the object to be detected, and the defect map may also be a local defect map of the object to be detected.
For example, if the object to be detected is a battery, the battery is the whole object to be detected, and the tab and the cell main body are local parts of the object to be detected. The defect map may include a defect map of the entire battery, the defect map may include a tab defect map, the defect map may include a cell body defect map, and the defect map may further include a tab defect map and a cell body defect map.
Exemplarily, if the object to be detected is a chip, the chip is the whole object to be detected, and the main board and the pins are parts of the object to be detected. The defect map may include a defect map of the entire chip, the defect map may include a defect map of the motherboard, the defect map may include a defect map of the leads, and the defect map may further include a defect map of the motherboard and a defect map of the leads.
In one possible implementation, step 2012 includes: extracting a contour set of the object to be detected in the binary image, and eliminating the contour of the object to be detected which does not meet preset conditions in the contour set of the object to be detected to obtain the effective contour of the object to be detected.
Optionally, the set of contours of the object to be detected comprises one or more contours of the object to be detected. For example, the contour set of the object to be detected may include a body contour of the object to be detected, a local contour of the object to be detected, an internal defect contour of the object to be detected, or other interference contours.
Optionally, the preset condition may be that the profile area is greater than a second area threshold, the profile area is smaller than the second area threshold, or the profile area does not belong to a range of the second area threshold, the second area threshold may be used to reject the interference profile, the second area threshold may also be used to reject the defect profile, and the second area threshold may be used to reject the interference profile and the defect profile at the same time.
Illustratively, as shown in fig. 3, if the area of the dashed circle is 3 and the area of the solid square is 10, the filled circle areas are 5 and 7. At this time, the second area threshold is used for rejecting the interference contour, the preset condition may be set that the contour area is greater than 4, and the dotted circle contour may be rejected according to the preset condition.
Illustratively, as shown in fig. 3, if the area of the dashed circle is 3 and the area of the solid square is 10, the filled circle areas are 5 and 7. At this time, the second area threshold is used for simultaneously rejecting the interference contour and the defect contour, the preset condition can be set to that the contour area is larger than 8, and the dotted-line circle contour and the filled circle contour can be rejected according to the preset condition.
Optionally, the preset condition may also be whether the contour position is at a target position, and the target position may be a position of the contour of the body of the object to be detected. And when the contour position is within the target position range, indicating that the contour position meets the preset condition. And when the contour position is not in the target position range, indicating that the contour position does not meet the preset condition.
For example, as shown in fig. 3, if the solid square contour is the position of the contour of the body of the object to be detected, it may be determined that the dotted circle contour does not satisfy the preset condition, and then the dotted circle contour may be removed.
In one possible implementation, step 2013 includes: fitting the effective contour of the object to be detected into a first target graph; mapping the coordinates of the first target graph with the coordinates of the image to be processed to obtain a second target graph; and performing screenshot processing on the second target graph to obtain a target image.
Alternatively, the first target figure may be a rectangle, the first target figure may be a circle, the first target figure may be a triangle, or the like. Accordingly, the second target figure may be a rectangle, the second target figure may be a circle, the second target figure may also be a triangle, etc.
Optionally, fitting the contour to the first target graph may be performed with the boundingRect or minAreaRect function of OpenCV.
Optionally, mapping the coordinates of the first target graph with the coordinates of the image to be processed may scale the coordinates back up; the mapping can also be realized by a corresponding model set up for this processing.
Illustratively, the mapping may be implemented with the scaling factor between the original size and the scaled size: the first edge of the second target graph equals the first edge of the first target graph multiplied by the scaling factor, the second edge of the second target graph equals the second edge of the first target graph multiplied by the scaling factor, and the third edge of the second target graph equals the third edge of the first target graph multiplied by the scaling factor.
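Combining the rectangle fit with the scale-back mapping and the screenshot step, a hedged sketch assuming OpenCV's boundingRect and a uniform scale factor between the processed image and the original (names are illustrative):

    import cv2

    def map_contour_to_original(effective_contour, scale_factor):
        # Fit the effective contour to a rectangle (the first target graph), then map
        # each edge back to the coordinates of the image to be processed by multiplying
        # with the scale factor, giving the second target graph.
        x, y, w, h = cv2.boundingRect(effective_contour)
        return (int(x * scale_factor), int(y * scale_factor),
                int(w * scale_factor), int(h * scale_factor))

    def crop_target_image(image_to_process, mapped_rect):
        # Screenshot (crop) the second target graph to obtain the target image.
        x, y, w, h = mapped_rect
        return image_to_process[y:y + h, x:x + w]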
In one possible implementation, step 2012 includes: extracting the contour of the binary image to obtain the overall contour of the object to be detected; acquiring a coarse area image of an image to be processed according to the overall contour of the object to be detected; and acquiring the effective contour of the object to be detected in the coarse area image of the image to be processed.
Alternatively, the object to be detected may be considered to be composed of a plurality of partial components. The binarized image includes a first target area binarized image, a second target area binarized image, a third target area binarized image, etc. The overall contour of the object to be detected comprises a first target area overall contour, a second target area overall contour, a third target area overall contour and the like. The coarse area image of the image to be processed comprises a coarse area image of the first target area, a coarse area image of the second target area, a coarse area image of the third target area and the like. The defect areas include one or more first defect areas of the first target region and one or more second defect areas of the second target region.
Exemplarily, as shown in fig. 5, the object to be detected can be regarded as being composed of a first target region 41 and a second target region 42. The first defect areas include 411, 412 and 413, and the second defect areas include 420, 421, 422 and 423.
Optionally, the obtaining of the effective contour of the object to be detected in the coarse region image of the image to be processed may include: and performing threshold segmentation on the coarse area image of the image to be processed to obtain a binary image of the coarse area of the image to be processed. And carrying out contour extraction on the binary image of the coarse area to obtain a contour set of the object to be detected. And eliminating the contour of the object to be detected which does not meet the preset conditions in the contour set of the object to be detected to obtain the effective contour of the object to be detected.
In a possible implementation manner, acquiring a coarse region image of an image to be processed according to an overall contour of an object to be detected includes: and carrying out contour extraction on the first target area binary image to obtain the whole contour of the first target area. And fitting the whole outline of the first target area into a third target graph. Expanding the third target graph according to a set rule to obtain a coarse area binary image of the first target area; and intercepting a coarse area image of the first target area according to the coarse area binary image of the first target area.
For example, if the object to be detected is a battery, the first target region may be a tab. Obtaining the rough region image of the image to be processed according to the overall contour of the object to be detected may include: and extracting the contour of the binary image of the tab to obtain the overall contour of the tab. And fitting the integral contour of the lug into a third target pattern. Expanding the third target graph according to a set rule to obtain a coarse area binary image of the tab; and intercepting a coarse area image of the tab according to the coarse area binary image of the tab.
For example, if the object to be detected is a chip, the first target region may be a pin. Obtaining the rough region image of the image to be processed according to the overall contour of the object to be detected may include: and extracting the outline of the pin binary image to obtain the overall outline of the pin. And fitting the overall outline of the pin into a third target pattern. Expanding the third target graph according to a set rule to obtain a coarse area binary image of the pin; and intercepting a coarse area image of the pin according to the coarse area binary image of the pin.
Alternatively, the third target figure may be a rectangle, the third target figure may be a circle, the third target figure may be a triangle, or the like.
Alternatively, the set rule may specify expansion in a given direction; for example, the given direction may be along the Y axis of the coordinate system, or along the X axis.
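As one possible reading of such a rule, a small sketch that expands a fitted rectangle along the Y axis by a fixed margin (the margin value and the clipping to the image bounds are assumptions):

    def expand_rect_along_y(rect, margin, image_height):
        # rect is (x, y, w, h); grow it along the Y axis while staying inside the image.
        x, y, w, h = rect
        new_y = max(0, y - margin)
        new_h = min(image_height - new_y, h + 2 * margin)
        return (x, new_y, w, new_h)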
In one possible implementation manner, obtaining a coarse area image of an image to be processed according to an overall contour of an object to be detected includes: and carrying out XOR processing on the coarse area binary image of the first target area and the overall image of the object to be detected to obtain a second target area coarse area image.
Optionally, the performing an exclusive or process on the coarse region binary image of the first target region and the whole image of the object to be detected to obtain the second target region coarse region image may include: and eliminating the coarse area binary image of the first target area from the overall image of the object to be detected to obtain a second target area coarse area image.
For example, if the object to be detected is a battery, the second target region may be a cell body. The performing xor processing on the coarse region binary image of the first target region and the overall image of the object to be detected to obtain the second target region coarse region image may include: and eliminating the coarse area binary image of the lug in the battery overall image to obtain the battery core body coarse area image.
For example, if the object to be detected is a chip, the second target area may be a main board. The performing xor processing on the coarse region binary image of the first target region and the overall image of the object to be detected to obtain the second target region coarse region image may include: and eliminating the binaryzation image of the coarse area of the pin in the whole image of the chip to obtain the image of the coarse area of the mainboard.
Optionally, if the object to be detected further includes a third target area, acquiring a rough area image of the image to be processed according to the overall contour of the object to be detected, including: and carrying out XOR processing on the coarse area binary images of the first target area and the third target area and the overall image of the object to be detected to obtain a second target area coarse area image.
Optionally, if the object to be detected further includes a third target region and a fourth target region, acquiring a coarse region image of the image to be processed according to the overall contour of the object to be detected includes: and carrying out XOR processing on the coarse area binary images of the first target area, the third target area and the fourth target area and the overall image of the object to be detected to obtain a second target area coarse area image.
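A sketch of the XOR step, assuming the coarse regions are held as binary masks of the same size and that each already-located region lies inside the whole-object mask (OpenCV bitwise operations; helper names are illustrative):

    import cv2

    def second_region_coarse_mask(whole_object_mask, *located_region_masks):
        # XOR the coarse binary masks of the already-located regions (e.g. tab, pins)
        # out of the whole-object mask; what remains is the coarse region of the
        # second target area (e.g. the cell body or the main board).
        result = whole_object_mask
        for mask in located_region_masks:
            result = cv2.bitwise_xor(result, mask)
        return result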
In one possible implementation, comparing the defect area with an area threshold, and determining that the object to be detected has a defect when the defect area is greater than the first area threshold includes: the one or more first defect areas are compared to a first area threshold. The one or more second defect areas are compared to a first area threshold. And if the first defect area is larger than the first area threshold value or the second defect area is larger than the first area threshold value, indicating that the object to be detected has defects.
Illustratively, as shown in fig. 5, three first defect areas and four second defect areas are shown. And comparing the three first defect areas and the four second defect areas with a first area threshold respectively, and if at least one of the first defect areas or the second defect areas is larger than the first area threshold, judging that the object to be detected has defects.
Further, for better understanding, it is assumed that the first defect areas 411, 412 and 413 in fig. 5 have areas of 8 square millimeters, 6 square millimeters and 3 square millimeters, respectively, and the second defect areas 420, 421, 422 and 423 have areas of 4 square millimeters, 7 square millimeters, 10 square millimeters and 13 square millimeters, respectively. If the first area threshold is 15 square millimeters, all of the first and second defect areas are smaller than the first area threshold, and it can be determined that the object to be detected has no defect.
With the same assumed areas, if the first area threshold is instead 10 square millimeters, the second defect area 423 is larger than the first area threshold, and it can be determined that the object to be detected has a defect.
Optionally, if the object to be detected further includes a third target region and a fourth target region, the object to be detected is determined to have a defect if the first defect area is greater than the first area threshold, or the second defect area is greater than the first area threshold, or the third defect area is greater than the first area threshold, or the fourth defect area is greater than the first area threshold.
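Expressed as code, the final decision is a simple threshold check over all defect areas; a minimal sketch reusing the numbers from the worked example above (the helper is hypothetical):

    def has_defect(first_defect_areas, second_defect_areas, first_area_threshold):
        # The object is defective if any defect area in any region exceeds the threshold.
        all_areas = list(first_defect_areas) + list(second_defect_areas)
        return any(area > first_area_threshold for area in all_areas)

    print(has_defect([8, 6, 3], [4, 7, 10, 13], 15))  # False: no defect area exceeds 15
    print(has_defect([8, 6, 3], [4, 7, 10, 13], 10))  # True: the 13 mm^2 defect exceeds 10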
In the present application, the image of the object to be detected is first subjected to a series of processing steps to obtain the image to be processed, and contour extraction, contour screening, defect identification and other processing are then performed on the image to be processed, so that defects in the object to be detected are detected. Because processing and detection are based on images, defects in the object to be detected can be identified more objectively. Meanwhile, the processing consists of a series of image operations carried out by automated equipment such as a computer, which greatly reduces the workload of workers and improves detection efficiency.
Based on the same application concept, a defect detection apparatus corresponding to the defect detection method is further provided in the embodiments of the present application, and since the principle of solving the problem of the apparatus in the embodiments of the present application is similar to that in the embodiments of the defect detection method, the implementation of the apparatus in the embodiments of the present application may refer to the description in the embodiments of the method, and repeated details are not repeated.
Please refer to fig. 6, which is a schematic diagram of the functional modules of a defect detection apparatus according to an embodiment of the present disclosure. Each module in the defect detection apparatus in this embodiment is configured to perform each step in the above-described method embodiment. The defect detection apparatus comprises a first contour extraction module 301, a second contour extraction module 302 and a comparison module 303, wherein:
the first contour extraction module 301 is configured to extract a contour of an object to be detected from an image to be processed, so as to obtain a defect map of the object to be detected.
The second contour extraction module 302 is configured to perform defect contour extraction on the defect map to obtain a defect area of the object to be detected.
The comparing module 303 is configured to compare the defect area with a first area threshold, and determine that the object to be detected has a defect when the defect area is larger than the first area threshold.
In a possible implementation, the first contour extraction module 301 is further configured to: carry out binarization processing on the image to be processed to obtain a binarized image; obtain an effective contour of the object to be detected in the binarized image, wherein the effective contour of the object to be detected is a contour which meets a preset condition among the contours of the object to be detected; extract a target image from the effective contour; and perform convolution calculation on the target image according to a set algorithm to obtain a defect map of the object to be detected.
In a possible implementation, the first contour extraction module 301 is specifically configured to: extract a contour set of the object to be detected in the binarized image, wherein the contour set of the object to be detected comprises one or more contours of the object to be detected; and reject the contours of the object to be detected which do not meet the preset condition in the contour set of the object to be detected to obtain the effective contour of the object to be detected.
In a possible implementation, the first contour extraction module 301 is specifically configured to: fit the effective contour of the object to be detected into a first target graph; map the coordinates of the first target graph with the coordinates of the image to be processed to obtain a second target graph; and perform screenshot processing on the second target graph to obtain a target image.
In a possible implementation, the first contour extraction module 301 is specifically configured to: extract the contour of the binarized image to obtain the overall contour of the object to be detected; acquire a coarse area image of the image to be processed according to the overall contour of the object to be detected; and acquire the effective contour of the object to be detected in the coarse area image of the image to be processed.
In a possible implementation, the first contour extraction module 301 is specifically configured to: carry out contour extraction on the first target area binarized image to obtain the overall contour of the first target area; fit the overall contour of the first target area into a third target graph; expand the third target graph according to a set rule to obtain a coarse area binarized image of the first target area; and intercept a coarse area image of the first target area according to the coarse area binarized image of the first target area.
In a possible implementation, the first contour extraction module 301 is specifically configured to: carry out XOR processing on the coarse area binarized image of the first target area and the overall image of the object to be detected to obtain a coarse area image of the second target area.
In a possible implementation, the comparing module 303 is further configured to: compare the first defect area with the first area threshold; compare the second defect area with the first area threshold; and if the first defect area is larger than the first area threshold or the second defect area is larger than the first area threshold, indicate that the object to be detected has a defect.
In addition, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the defect detection method described in the above method embodiment.
The computer program product of the defect detection method provided in the embodiment of the present application includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the steps of the defect detection method described in the above method embodiment, which may be referred to specifically for the above method embodiment, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A method of defect detection, comprising:
extracting the contour of the object to be detected from the image to be processed to obtain a defect map of the object to be detected;
extracting a defect contour from the defect map to obtain a defect area of the object to be detected;
and comparing the defect area with a first area threshold, and determining that the object to be detected has a defect when the defect area is larger than the first area threshold.
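By way of a non-limiting illustration of claim 1, the following Python/OpenCV sketch walks through the three claimed steps. The Otsu binarization, the Canny edge step and the default threshold of 50 pixels are assumptions made for this example, not parameters taken from the application.
```python
import cv2

def detect_defect(image_path: str, first_area_threshold: float = 50.0) -> bool:
    """Return True when any defect area exceeds the first area threshold."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(image_path)

    # Step 1: contour extraction on the image to be processed; the masked view
    # of the object region stands in for the defect map.
    _, binarized = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    defect_map = cv2.bitwise_and(image, image, mask=binarized)

    # Step 2: defect contour extraction on the defect map to obtain defect areas.
    edges = cv2.Canny(defect_map, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    defect_areas = [cv2.contourArea(c) for c in contours]

    # Step 3: compare each defect area with the first area threshold.
    return any(area > first_area_threshold for area in defect_areas)
```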
2. The method according to claim 1, wherein the extracting the contour of the object to be detected from the image to be processed to obtain the defect map of the object to be detected comprises:
carrying out binarization processing on the image to be processed to obtain a binarized image;
obtaining an effective contour of the object to be detected in the binarized image, wherein the effective contour of the object to be detected is a contour of the object to be detected that meets a preset condition;
extracting a target image from the effective contour;
and performing a convolution calculation on the target image according to a set algorithm to obtain the defect map of the object to be detected.
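A minimal sketch of how the steps of claim 2 might be realized with OpenCV follows; the Otsu threshold, the minimum-area preset condition and the Laplacian-style set kernel are illustrative assumptions, and the helper name defect_map_from_image is hypothetical.
```python
import cv2
import numpy as np

def defect_map_from_image(image_to_process, min_contour_area=1000.0):
    # Binarize the image to be processed (Otsu keeps the sketch parameter-free).
    _, binarized = cv2.threshold(image_to_process, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Keep only the contours that satisfy the (assumed) preset condition.
    contours, _ = cv2.findContours(binarized, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    effective = [c for c in contours if cv2.contourArea(c) >= min_contour_area]
    if not effective:
        raise ValueError("no contour satisfies the preset condition")

    # Extract a target image around the effective contour(s).
    x, y, w, h = cv2.boundingRect(np.vstack(effective))
    target = image_to_process[y:y + h, x:x + w]

    # Convolve the target image with a set kernel (a Laplacian-style kernel is
    # assumed) so that defects stand out in the resulting defect map.
    kernel = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float32)
    return cv2.filter2D(target, cv2.CV_8U, kernel)
```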
3. The method according to claim 2, wherein the obtaining the effective contour of the object to be detected in the binarized image comprises:
extracting a contour set of an object to be detected in the binarized image, wherein the contour set of the object to be detected comprises one or more contours of the object to be detected;
and removing, from the contour set of the object to be detected, the contours of the object to be detected that do not meet the preset condition, to obtain the effective contour of the object to be detected.
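One plausible reading of claim 3 treats the preset condition as an area window; the bounds below are placeholders, not values from the application.
```python
import cv2

def effective_contours(binarized, min_area=1000.0, max_area=1e7):
    # Extract the contour set of the object to be detected and keep only the
    # contours whose area lies inside the assumed preset window.
    contours, _ = cv2.findContours(binarized, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if min_area <= cv2.contourArea(c) <= max_area]
```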
4. The method of claim 2, wherein the extracting a target image from the effective contour comprises:
fitting the effective contour of the object to be detected into a first target graph;
mapping the coordinates of the first target graph to the coordinates of the image to be processed to obtain a second target graph;
and cropping the second target graph from the image to be processed to obtain the target image.
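The sketch below shows one way the fitting, coordinate mapping and cropping of claim 4 could be arranged, assuming the first target graph is a rotated minimum-area rectangle; the offset parameter that models the coordinate mapping back into the image to be processed is an assumption of the example.
```python
import cv2
import numpy as np

def crop_target_image(image_to_process, effective_contour, offset=(0, 0)):
    # Fit the effective contour into a first target graph; a rotated
    # minimum-area rectangle is assumed as the fitted shape.
    rect = cv2.minAreaRect(effective_contour)
    box = cv2.boxPoints(rect).astype(np.int32)   # first target graph (4 corners)

    # Map the graph's coordinates into the coordinate frame of the image to be
    # processed (offset models any earlier cropping), giving a second target graph.
    box += np.array(offset, dtype=np.int32)

    # Crop the second target graph from the image to be processed.
    x, y, w, h = cv2.boundingRect(box)
    return image_to_process[y:y + h, x:x + w]
```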
5. The method according to claim 2, wherein the obtaining the effective contour of the object to be detected in the binarized image comprises:
performing contour extraction on the binarized image to obtain the overall contour of the object to be detected;
acquiring a coarse area image of the image to be processed according to the overall contour of the object to be detected;
and acquiring the effective contour of the object to be detected in the coarse area image of the image to be processed.
6. The method according to claim 5, wherein the binarized image comprises a binarized image of a first target area, the overall contour of the object to be detected comprises an overall contour of the first target area, the coarse area image of the image to be processed comprises a coarse area image of the first target area, and the acquiring the coarse area image of the image to be processed according to the overall contour of the object to be detected comprises:
performing contour extraction on the binarized image of the first target area to obtain the overall contour of the first target area;
fitting the overall contour of the first target area into a third target graph;
expanding the third target graph according to a set rule to obtain a coarse area binarized image of the first target area;
and cropping the coarse area image of the first target area according to the coarse area binarized image of the first target area.
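A hedged sketch of claim 6 (which also illustrates the coarse-area step of claim 5), assuming the third target graph is an upright bounding rectangle and the set rule is a fixed morphological dilation; both choices, and the 15-pixel default, are illustrative only.
```python
import cv2
import numpy as np

def coarse_area_of_first_target(image_to_process, first_target_binarized, dilate_px=15):
    # Contour extraction on the binarized image of the first target area; the
    # largest contour stands in for the overall contour.
    contours, _ = cv2.findContours(first_target_binarized,
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    overall_contour = max(contours, key=cv2.contourArea)

    # Fit the overall contour into a third target graph (an upright bounding
    # rectangle is assumed) and draw it as a filled mask.
    x, y, w, h = cv2.boundingRect(overall_contour)
    mask = np.zeros_like(first_target_binarized)
    cv2.rectangle(mask, (x, y), (x + w, y + h), 255, thickness=-1)

    # Expand the graph according to a set rule (a fixed dilation is assumed)
    # to obtain the coarse area binarized image of the first target area.
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    coarse_binarized = cv2.dilate(mask, kernel)

    # Crop the coarse area image of the first target area from the image to be
    # processed (assumed to share the binarized image's coordinate frame).
    ys, xs = np.nonzero(coarse_binarized)
    coarse_image = image_to_process[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return coarse_binarized, coarse_image
```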
7. The method according to claim 6, wherein the binarized image comprises a binarized image of a second target area, the overall contour of the object to be detected comprises an overall contour of the second target area, the coarse area image of the image to be processed comprises a coarse area image of the second target area, and the acquiring the coarse area image of the image to be processed according to the overall contour of the object to be detected comprises:
performing XOR processing on the coarse area binarized image of the first target area and an overall image of the object to be detected to obtain the coarse area image of the second target area.
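Claim 7's XOR step might look as follows when both operands are binarized masks of the same size; the claim itself refers to the overall image of the object to be detected, so treating that image as a mask is an assumption of this sketch.
```python
import cv2

def coarse_area_binarized_of_second_target(overall_object_binarized,
                                           first_target_coarse_binarized):
    # Pixels set in exactly one of the two masks remain after the XOR, which
    # leaves the coarse area of the second target area once the first target's
    # coarse area is removed from the object's overall mask.
    return cv2.bitwise_xor(overall_object_binarized, first_target_coarse_binarized)
```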
8. The method according to claim 1, wherein the defect area comprises one or more first defect areas located in a first target area of the image to be processed and one or more second defect areas located in a second target area of the image to be processed, and the comparing the defect area with the first area threshold and determining that the object to be detected has a defect when the defect area is larger than the first area threshold comprises:
comparing one or more of the first defect areas to the first area threshold;
comparing one or more of the second defect areas to the first area threshold;
and determining that the object to be detected has a defect when any of the one or more first defect areas is larger than the first area threshold or any of the one or more second defect areas is larger than the first area threshold.
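The decision logic of claim 8 reduces to a pair of any-comparisons, for example (the function name is hypothetical):
```python
def is_defective(first_defect_areas, second_defect_areas, first_area_threshold):
    """Defective if any defect area in either target area exceeds the threshold."""
    return (any(a > first_area_threshold for a in first_defect_areas) or
            any(a > first_area_threshold for a in second_defect_areas))
```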
9. A defect detection apparatus, comprising:
a first contour extraction module, configured to extract the contour of the object to be detected from the image to be processed to obtain a defect map of the object to be detected;
a second contour extraction module, configured to perform defect contour extraction on the defect map to obtain a defect area of the object to be detected;
and a comparison module, configured to compare the defect area with a first area threshold and determine that the object to be detected has a defect when the defect area is larger than the first area threshold.
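For orientation only, the three modules of claim 9 can be pictured as methods on a single class; the method bodies reuse the illustrative choices from the claim-1 sketch above and are not the application's implementation.
```python
import cv2

class DefectDetectionApparatus:
    """Sketch of the claimed three-module structure; internals are assumptions."""

    def __init__(self, first_area_threshold: float = 50.0):
        self.first_area_threshold = first_area_threshold

    def first_contour_extraction(self, image_to_process):
        # Extract the contour of the object to be detected to obtain a defect map.
        _, binarized = cv2.threshold(image_to_process, 0, 255,
                                     cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return cv2.bitwise_and(image_to_process, image_to_process, mask=binarized)

    def second_contour_extraction(self, defect_map):
        # Extract defect contours from the defect map to obtain defect areas.
        edges = cv2.Canny(defect_map, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.contourArea(c) for c in contours]

    def comparison(self, defect_areas):
        # Compare each defect area with the first area threshold.
        return any(a > self.first_area_threshold for a in defect_areas)
```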
10. An electronic device, comprising: a processor and a memory storing machine-readable instructions executable by the processor, wherein the machine-readable instructions, when executed by the processor while the electronic device is running, perform the steps of the method according to any one of claims 1 to 8.
11. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202210291632.2A 2022-03-23 2022-03-23 Defect detection method and device, electronic equipment and readable storage medium Pending CN114627092A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210291632.2A CN114627092A (en) 2022-03-23 2022-03-23 Defect detection method and device, electronic equipment and readable storage medium
PCT/CN2022/140036 WO2023179122A1 (en) 2022-03-23 2022-12-19 Defect detection method and apparatus, electronic device, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210291632.2A CN114627092A (en) 2022-03-23 2022-03-23 Defect detection method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114627092A true CN114627092A (en) 2022-06-14

Family

ID=81903121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210291632.2A Pending CN114627092A (en) 2022-03-23 2022-03-23 Defect detection method and device, electronic equipment and readable storage medium

Country Status (2)

Country Link
CN (1) CN114627092A (en)
WO (1) WO2023179122A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115829921A (en) * 2022-09-16 2023-03-21 宁德时代新能源科技股份有限公司 Method and device for detecting battery cell defects and computer-readable storage medium
CN115953373A (en) * 2022-12-22 2023-04-11 青岛创新奇智科技集团股份有限公司 Glass defect detection method and device, electronic equipment and storage medium
CN116758031A (en) * 2023-06-16 2023-09-15 上海感图网络科技有限公司 Golden finger defect rechecking method, device, equipment and storage medium
WO2023179122A1 (en) * 2022-03-23 2023-09-28 广东利元亨智能装备股份有限公司 Defect detection method and apparatus, electronic device, and readable storage medium
CN117611799A (en) * 2023-11-28 2024-02-27 杭州深度视觉科技有限公司 Penicillin bottle defect detection method and device based on image recognition
CN117870564A (en) * 2024-03-11 2024-04-12 宁德时代新能源科技股份有限公司 Detection method and system for cell Mylar film

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058155B (en) * 2023-10-13 2024-03-12 西安空天机电智能制造有限公司 3DP metal printing powder spreading defect detection method, device, equipment and medium
CN117115157B (en) * 2023-10-23 2024-02-06 湖南隆深氢能科技有限公司 Defect detection method, system, terminal equipment and medium based on PEM (PEM) electrolytic cell
CN117173185B (en) * 2023-11-03 2024-01-19 东北大学 Method and device for detecting area of rolled plate, storage medium and computer equipment
CN117474908B (en) * 2023-12-26 2024-05-28 宁德时代新能源科技股份有限公司 Labeling method, labeling device, labeling equipment and computer-readable storage medium
CN117723491A (en) * 2024-02-07 2024-03-19 宁德时代新能源科技股份有限公司 Detection system and detection method for battery cell explosion-proof valve

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3964267B2 (en) * 2002-06-04 2007-08-22 大日本スクリーン製造株式会社 Defect detection apparatus, defect detection method, and program
CN108896278B (en) * 2018-05-23 2019-12-31 精锐视觉智能科技(深圳)有限公司 Optical filter silk-screen defect detection method and device and terminal equipment
CN113436131A (en) * 2020-03-04 2021-09-24 上海微创卜算子医疗科技有限公司 Defect detection method, defect detection device, electronic equipment and storage medium
CN112200805A (en) * 2020-11-11 2021-01-08 北京平恒智能科技有限公司 Industrial product image target extraction and defect judgment method
CN112700440B (en) * 2021-01-18 2022-11-04 上海闻泰信息技术有限公司 Object defect detection method and device, computer equipment and storage medium
CN114627092A (en) * 2022-03-23 2022-06-14 广东利元亨智能装备股份有限公司 Defect detection method and device, electronic equipment and readable storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023179122A1 (en) * 2022-03-23 2023-09-28 广东利元亨智能装备股份有限公司 Defect detection method and apparatus, electronic device, and readable storage medium
CN115829921A (en) * 2022-09-16 2023-03-21 宁德时代新能源科技股份有限公司 Method and device for detecting battery cell defects and computer-readable storage medium
CN115829921B (en) * 2022-09-16 2024-01-05 宁德时代新能源科技股份有限公司 Method, apparatus and computer readable storage medium for detecting cell defects
WO2024055569A1 (en) * 2022-09-16 2024-03-21 宁德时代新能源科技股份有限公司 Battery cell defect detection method and apparatus, and computer-readable storage medium
CN115953373A (en) * 2022-12-22 2023-04-11 青岛创新奇智科技集团股份有限公司 Glass defect detection method and device, electronic equipment and storage medium
CN115953373B (en) * 2022-12-22 2023-12-15 青岛创新奇智科技集团股份有限公司 Glass defect detection method, device, electronic equipment and storage medium
CN116758031A (en) * 2023-06-16 2023-09-15 上海感图网络科技有限公司 Golden finger defect rechecking method, device, equipment and storage medium
CN116758031B (en) * 2023-06-16 2024-03-29 上海感图网络科技有限公司 Golden finger defect rechecking method, device, equipment and storage medium
CN117611799A (en) * 2023-11-28 2024-02-27 杭州深度视觉科技有限公司 Penicillin bottle defect detection method and device based on image recognition
CN117870564A (en) * 2024-03-11 2024-04-12 宁德时代新能源科技股份有限公司 Detection method and system for cell Mylar film

Also Published As

Publication number Publication date
WO2023179122A1 (en) 2023-09-28

Similar Documents

Publication Publication Date Title
CN114627092A (en) Defect detection method and device, electronic equipment and readable storage medium
KR101934313B1 (en) System, method and computer program product for detection of defects within inspection images
TW201407154A (en) Integration of automatic and manual defect classification
CN110660072B (en) Method and device for identifying straight line edge, storage medium and electronic equipment
CN112700440B (en) Object defect detection method and device, computer equipment and storage medium
CN114495098B (en) Diaxing algae cell statistical method and system based on microscope image
CN110570442A (en) Contour detection method under complex background, terminal device and storage medium
Deng et al. Defect detection of bearing surfaces based on machine vision technique
CN111242899A (en) Image-based flaw detection method and computer-readable storage medium
CN111912846A (en) Machine vision-based surface defect and edge burr detection method
CN115526885A (en) Product image defect detection method, system, device and medium
CN115690670A (en) Intelligent identification method and system for wafer defects
CN113537414A (en) Lithium battery defect detection method, device, equipment and storage medium
CN117392042A (en) Defect detection method, defect detection apparatus, and storage medium
CN115471476A (en) Method, device, equipment and medium for detecting component defects
CN114677348A (en) IC chip defect detection method and system based on vision and storage medium
CN115564790A (en) Target object detection method, electronic device and storage medium
JP5067677B2 (en) Defect detection method, defect detection apparatus, and program
CN117173090A (en) Welding defect type identification method and device, storage medium and electronic equipment
CN114937037B (en) Product defect detection method, device and equipment and readable storage medium
CN113537253B (en) Infrared image target detection method, device, computing equipment and storage medium
CN115239663A (en) Method and system for detecting defects of contact lens, electronic device and storage medium
CN111932515B (en) Short circuit detection method and system for product residual defects and defect classification system
JP2001099625A (en) Device and method for pattern inspection
TWI816150B (en) Method for detecting a target object, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination