CN114359178A - Panel detection method and device, electronic equipment and storage medium - Google Patents

Panel detection method and device, electronic equipment and storage medium

Info

Publication number
CN114359178A
CN114359178A
Authority
CN
China
Prior art keywords
region
detection
area
mask
image
Prior art date
Legal status
Pending
Application number
CN202111547492.2A
Other languages
Chinese (zh)
Inventor
张鑫
朱小明
殷亚男
匡梦良
许超
Current Assignee
Suzhou Mega Technology Co Ltd
Original Assignee
Suzhou Mega Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Mega Technology Co Ltd filed Critical Suzhou Mega Technology Co Ltd
Priority to CN202111547492.2A
Publication of CN114359178A
Legal status: Pending


Landscapes

  • Image Processing (AREA)

Abstract

An embodiment of the invention provides a panel detection method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring an image of a panel to be detected; determining a detection start position and a detection end position of the panel to be detected on the image, and taking the region between the two positions as a region of interest; performing image segmentation on the region of interest to automatically generate a mask region; and performing panel detection on the region of the image outside the mask region. Mask regions can thereby be generated in batches, lowering the technical demands on users and saving their time and effort; moreover, the generated mask region covers the area intended to be shielded more completely and contains few blind spots, so the accuracy of the panel detection result is significantly improved.

Description

Panel detection method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of panel detection technologies, and in particular, to a panel detection method, a panel detection apparatus, an electronic device, and a storage medium.
Background
Chip on Glass (COG) is a technology in which a driver circuit chip is bonded directly onto a glass substrate; it is widely used in display products such as liquid-crystal and electroluminescent displays. In the COG process, the conductive pins of the driver circuit are aligned with the electrodes (bumps) on the glass substrate, an anisotropic conductive film (ACF) is used as the bonding dielectric material, and the pins are pressed onto and electrically connected with the electrodes under high temperature and pressure for a certain period of time. Similarly, FPC on Glass (FOG) is a technique in which a flexible printed circuit (FPC) is bonded directly onto a glass substrate, with a process similar to COG. Likewise, Chip on Film (COF) is a technology in which a semiconductor chip is first packaged on a flexible substrate and the packaged product is then bonded to the glass substrate, again with a manufacturing process similar to COG. Panel inspection techniques can be used to evaluate the quality of these panel bonds.
Detection is usually performed on visually acquired images, and the bonding quality of the panel is assessed by processing these images. To improve detection efficiency and accuracy, it is desirable to shield part of the image, i.e., to generate a mask region, so that only the target region outside the mask region is inspected.
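The masking idea can be sketched minimally as follows (the function name and the array shapes are illustrative, not from the patent): only pixels outside the mask region take part in detection.

```python
import numpy as np

def pixels_to_inspect(image, mask):
    """Return coordinates of pixels that take part in detection.

    `mask` is a boolean array of the same shape as `image`;
    True marks pixels inside a shielded (mask) region.
    """
    ys, xs = np.nonzero(~mask)          # keep only unmasked pixels
    return list(zip(ys.tolist(), xs.tolist()))

image = np.array([[10, 200],
                  [30,  40]], dtype=np.uint8)
mask = np.array([[False, True],
                 [False, False]])       # shield the bright (0, 1) pixel
print(pixels_to_inspect(image, mask))   # the masked (0, 1) pixel is skipped
```

Only the unmasked coordinates are handed to the downstream detection step, which is what reduces both the data volume and the influence of the shielded area on the result.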
In the prior art, mask regions are usually drawn manually by the user, one by one. This is slow, cumbersome, cannot be done in batches, and places high demands on the user; moreover, a hand-drawn mask region is usually oversized and contains many blind spots, which seriously degrades the accuracy of the panel detection result.
Disclosure of Invention
The present invention has been made in view of the above problems. According to an aspect of the present invention, there is provided a panel inspection method including: acquiring an image of a panel to be detected; determining a detection starting position and a detection ending position of a panel to be detected on the image so as to take an area between the detection starting position and the detection ending position as an interested area; performing image segmentation on the region of interest to automatically generate a mask region; in the image, panel detection is performed in the region other than the masked region.
Illustratively, determining the detection start position and the detection end position on the image includes: providing a user interface, wherein the user interface includes a first operable control; and receiving information set on the image by the user about a detection starting position and a detection ending position in response to the operation of the first operable control by the user.
Illustratively, image segmentation of the region of interest to automatically generate the mask region comprises: carrying out image segmentation on the region of interest by utilizing a gray threshold value so as to segment the region of interest into different regions; and determining the mask area based on areas meeting preset conditions in different areas, wherein the preset conditions comprise that the pixel gray scale is greater than a gray scale threshold value.
Illustratively, image segmentation is performed on the region of interest to automatically generate the mask region, further comprising: calculating the rectangularity of different areas; the preset conditions further include: the rectangle degree of the area is within the preset rectangle degree range.
Illustratively, image segmentation is performed on the region of interest to automatically generate the mask region, further comprising: respectively calculating the areas of different areas; the preset conditions further include: the area of the region is within a preset area range.
Illustratively, determining the mask area based on the areas meeting the preset condition in the different areas comprises: generating a minimum envelope parallelogram aiming at areas meeting preset conditions in different areas; based on the minimum enveloping parallelogram, the mask area is determined.
Illustratively, determining the mask region based on the minimum enveloping parallelogram comprises: respectively expanding the left and right sides of the minimum enveloping parallelogram by a first preset size, and/or respectively expanding the top and bottom edges of the minimum enveloping parallelogram by a second preset size; and determining the area inside the expanded parallelogram as the mask area.
According to another aspect of the present invention, there is also provided a panel detecting apparatus including: the image acquisition module is used for acquiring an image of the panel to be detected; the ROI determining module is used for determining a detection starting position and a detection ending position of the panel to be detected on the image so as to take a region between the detection starting position and the detection ending position as a region of interest; the mask determining module is used for carrying out image segmentation on the region of interest so as to automatically generate a mask region; and the detection module is used for detecting the panel in the area except the mask area in the image.
According to yet another aspect of the present invention, there is also provided an electronic device comprising a processor and a memory, wherein the memory has stored therein computer program instructions for executing the panel detection method as described above when the computer program instructions are executed by the processor.
According to yet another aspect of the present invention, there is also provided a storage medium having stored thereon program instructions for performing the panel detection method as described above when executed.
According to the above technical solution, the mask region of the panel can be generated automatically, without the user drawing it by hand. The method is fast and simple to operate, mask regions can be generated in batches, the technical demands on the user are reduced, and the user's time and effort are saved; moreover, the generated mask region covers the area intended to be shielded more completely and contains few blind spots, so the accuracy of the panel detection result is significantly improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 shows a schematic flow diagram of a panel detection method according to one embodiment of the invention;
FIG. 2 illustrates a partial schematic view of an image of a panel to be inspected after generating a mask region in accordance with one embodiment of the present invention;
FIG. 3 illustrates a partial schematic view of a user interface according to one embodiment of the invention;
FIG. 4 shows a schematic flow diagram of image segmentation of a region of interest according to one embodiment of the present invention;
FIG. 5 illustrates a partial schematic view of a user interface according to another embodiment of the invention;
fig. 6 shows a schematic flow chart of determining a mask region based on regions meeting a preset condition from different regions obtained by segmentation according to an embodiment of the present invention;
FIG. 7 shows a schematic block diagram of a panel detection apparatus according to one embodiment of the present invention; and
FIG. 8 shows a schematic block diagram of an electronic device according to one embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described herein without inventive step, shall fall within the scope of protection of the invention.
According to one embodiment of the present invention, a panel inspection method is provided. During panel inspection, some areas do not need to be inspected. For example, when detecting bubble regions on an electrode, a small area at the top of the electrode need not be detected (i.e., this area is not contained within the area defined by the electrode boundaries). With the panel detection method provided by the embodiment of the invention, mask regions can be generated automatically to shield the areas of the image that do not need detection.
FIG. 1 shows a schematic flow diagram of a panel inspection method 100 according to one embodiment of the invention. As shown in fig. 1, the method 100 may include the following steps.
Step S110, an image of the panel to be detected is acquired.
The image of the panel to be detected may be the original image captured by an image acquisition device, such as a camera, in the panel detection system, or an image obtained by preprocessing that original image. The preprocessing may include any operation that makes the subsequent panel detection clearer, for example denoising operations such as filtering. The image may contain all or some of the electrodes of the panel to be detected.
Step S120, determining a detection start position and a detection end position of the panel to be detected on the image to take a region between the detection start position and the detection end position as a region of interest.
As will be understood by those skilled in the art, the image of the panel to be detected is collected in order to detect the panel. The region of interest therefore comprises at least part of the area of the image that the user wishes to detect. For example, the detection start position and the detection end position may first be determined on the image, and the region between them then taken as the region of interest. Both positions may be set in response to user operations. For example, the user may select two points on the left side of the image, referred to as left point 1 and left point 2, with an external input device such as a mouse or touch screen; the line through these two points is taken as the detection start position. Similarly, the user selects two points on the right side of the image, referred to as right point 1 and right point 2, and the line through them is taken as the detection end position. The region between the two positions may then be taken as the region of interest: connecting left point 1 with right point 1 and left point 2 with right point 2 yields a closed quadrilateral, and the area inside it is the region of interest. Alternatively, the user may select two points on the left side of the image, from which a rectangular frame with sides parallel to the image borders is formed; the position of this frame is used as the detection start position.
The detection end position can be obtained by the same method. In this embodiment, a region between the detection start position and the detection end position, i.e., a region between the opposing sides of the two rectangular frames, is taken as the region of interest. Fig. 2 shows a partial schematic view of an image of a panel to be detected after generating a mask region according to an embodiment of the invention. Only the detection start position 220 (taking a rectangular frame as an example) is shown in fig. 2, the detection end position is not shown, and the region of interest can be determined from the detection start position 220 and the detection end position.
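For the rectangular-frame variant, the region of interest can be sketched as the band of columns between the two positions (a simplifying assumption; the function and coordinates are illustrative, not from the patent):

```python
import numpy as np

def region_of_interest(image, x_start, x_end):
    """ROI for the rectangular-frame variant: the band of columns
    between the detection start position (x_start) and the
    detection end position (x_end)."""
    return image[:, x_start:x_end]

image = np.arange(6 * 8, dtype=np.uint8).reshape(6, 8)
roi = region_of_interest(image, 2, 6)
print(roi.shape)  # (6, 4)
```

All subsequent segmentation then operates on `roi` rather than the whole image, which is what limits the mask search to the area between the two frames.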
Step S130, performing image segmentation on the region of interest to automatically generate a mask region.
It will be appreciated that, on the image of the panel to be inspected, the area to be masked typically differs in gray value from its surroundings. Taking the detection of bubble regions on the electrode as an example again, the small area at the top of the electrode has a clear gray-level difference from the background around it. Illustratively, the region of interest may be segmented using region-based or edge-based image segmentation to obtain at least one segmented region. Each segmented region is then evaluated, and the mask region 210 shown in fig. 2 (not all instances are labeled) is generated automatically according to the result. The user may set different judgment conditions according to actual requirements to generate different mask regions, which is not limited herein. The mask region is generated by covering a local area of the region of interest with a specific pattern, so that the covered area does not participate in the subsequent panel detection process. This reduces the amount of data processed during detection and prevents the shielded area from distorting the detection result. The specific pattern used for shielding may be referred to as a mask.
In step S140, panel detection is performed in a region other than the masked region in the image.
According to step S130, the mask region is generated automatically, and panel detection is then performed on the regions of the image outside it. Detection items include, but are not limited to, conductive particle detection and foreign object detection.
According to the above technical solution, the mask region of the panel can be generated automatically, without the user drawing it by hand. The method is fast and simple to operate, mask regions can be generated in batches, the technical demands on the user are reduced, and the user's time and effort are saved; moreover, the generated mask region covers the area intended to be shielded more completely and contains few blind spots, so the accuracy of the panel detection result is significantly improved.
Exemplarily, the step S120 of determining the detection start position and the detection end position on the image may include the steps S121 and S122.
Step S121, providing a user interface, where the user interface includes a first operable control.
FIG. 3 illustrates a partial schematic view of a user interface 300 according to one embodiment of the invention. The first actionable control 310 is shown in FIG. 3 as "left area start" and "right area end".
Step S122, in response to the operation of the first operable control by the user, receives information about the detection start position and the detection end position set on the image by the user.
Illustratively, the panel inspection method 100 described above may be used for defect inspection of panels. The user may first click the "set defect detection" control on the user interface 300 with a mouse; in response, a new sub-interface containing the first operable control 310 is generated on the user interface 300. In this sub-interface, the user first sets the detection start position: for example, the user clicks "left area start" and then selects any two points on the left side of the image of the panel to be detected, and the detection start position 220 shown in fig. 2 is generated automatically from those two points. The detection end position can be set in a similar way, which is not repeated here.
According to this technical solution, the detection start and end positions can be set flexibly in response to user operations, meeting the needs of different users while keeping the operation simple.
Fig. 4 shows a schematic flow chart of the step S130 of image segmenting the region of interest for automatically generating the mask region according to one embodiment of the present invention. As shown in fig. 4, step S130 includes the following steps.
Step S131, performing image segmentation on the region of interest by using a grayscale threshold to segment the region of interest into different regions.
It is understood that image segmentation refers to identifying and extracting specific regions of the region of interest according to some rule. In this embodiment, the region of interest may be segmented based on a grayscale threshold. Illustratively, the gray value of each pixel in the region of interest is compared with the threshold, and the region of interest is divided into one or more regions according to the sign of the difference. For example, adjacent pixels whose gray values exceed the threshold are grouped into one region, and adjacent pixels whose gray values are at or below the threshold are grouped into another; the region of interest is thereby divided into different regions. Each region may be labeled separately to record how its pixels compare with the grayscale threshold.
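The threshold-based segmentation of step S131 can be sketched as a 4-connected flood fill (a simplified stand-in, not necessarily the patent's exact algorithm; names and the sample array are illustrative):

```python
import numpy as np
from collections import deque

def segment_by_threshold(gray, threshold):
    """Label 4-connected regions whose pixels all exceed `threshold`.

    Returns a label image (0 = background, i.e. gray <= threshold)
    and the number of regions found.
    """
    h, w = gray.shape
    above = gray > threshold
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for y in range(h):
        for x in range(w):
            if above[y, x] and labels[y, x] == 0:
                count += 1
                labels[y, x] = count
                queue = deque([(y, x)])
                while queue:                      # breadth-first flood fill
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and above[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count

gray = np.array([[200, 200,  10],
                 [ 10,  10,  10],
                 [ 10, 180, 180]], dtype=np.uint8)
labels, count = segment_by_threshold(gray, 128)
print(count)  # 2 bright regions
```

In practice a library routine such as OpenCV's connected-components labeling would replace the hand-written fill; the sketch only shows the grouping rule described above.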
Step S132, determining a mask region based on a region meeting a preset condition in the different regions, where the preset condition includes that the gray level of the pixel is greater than the gray level threshold.
Step S131 above yields a plurality of regions obtained by dividing the region of interest. As described above, these regions fall into two kinds: regions whose pixel gray values are greater than the grayscale threshold, referred to as first regions, and regions whose pixel gray values are less than or equal to the threshold, referred to as second regions. Therefore, according to the preset condition, a first region may be determined directly as a mask region. Alternatively, the first region may be further processed or evaluated, and the mask region determined based on it.
FIG. 5 shows a schematic diagram of a user interface 500 according to another embodiment of the invention. The user can set the grayscale threshold by clicking the arrows next to "mask gray threshold" shown in fig. 5: the up arrow increases the threshold and the down arrow decreases it. The value of "mask gray threshold" shown in fig. 5 is merely exemplary and does not limit the present application.
In this technical solution, using the grayscale threshold as the segmentation criterion yields a more accurate segmentation result, and the decision of whether a region becomes a mask region follows from the same comparison with that threshold. The mask region can therefore be determined in a single computing pass, which greatly reduces the amount of computation and improves the efficiency of the system.
According to an embodiment of the present invention, the image segmentation of the region of interest to automatically generate the mask region in step S130 may include calculating the rectangularity of the segmented different regions. When the mask region is determined based on a region meeting a preset condition in different regions, the preset condition may include that the rectangularity of the region is within a preset rectangularity range.
The above step may be performed with any existing or future algorithm for computing rectangularity, which is not limited in this application.
For example, the region of interest may be segmented into different regions by the gray-threshold method described above, and the regions whose pixel gray values exceed the threshold determined as described earlier. The rectangularity of each of these regions is then calculated. Rectangularity measures how closely the shape of a region resembles a rectangle; it ranges between 0 and 1, and the larger the value, the closer the region's shape is to a rectangle. After the rectangularity of each candidate region is calculated, it is checked against the preset rectangularity range: a region whose rectangularity falls within the range may be determined as a mask region, and one whose rectangularity does not is not. It is also possible to screen the segmented regions by rectangularity first and apply the gray-level condition afterwards; the invention does not limit the order of these steps.
The user may set the preset rectangularity range through the "rectangularity low" and "rectangularity high" controls in the user interface 500 shown in fig. 5. For the specific setting process, refer to the grayscale-threshold setting process, which is not repeated here for brevity. The values of "rectangularity low" and "rectangularity high" shown in fig. 5 are merely exemplary and do not limit the present application.
In many application scenarios, the area to be shielded on the panel to be detected has an approximately regular shape. Taking the detection of bubble regions on the electrode as an example again, as shown in fig. 2, the small area at the top of the electrode is approximately rectangular. Adding a shape condition on top of the gray-level condition therefore further improves the accuracy of the mask-region determination. Moreover, the algorithm is simple and easy to implement.
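The patent does not fix a rectangularity formula; a common choice is sketched here as the ratio of the region's area to the area of its bounding rectangle (axis-aligned for simplicity — a rotated minimum rectangle could equally be used). A perfect rectangle scores 1.0:

```python
def rectangularity(region_pixels):
    """Area of the region divided by the area of its axis-aligned
    bounding box. region_pixels is a list of (y, x) coordinates."""
    ys = [y for y, _ in region_pixels]
    xs = [x for _, x in region_pixels]
    box_area = (max(ys) - min(ys) + 1) * (max(xs) - min(xs) + 1)
    return len(region_pixels) / box_area

square = [(y, x) for y in range(2) for x in range(3)]    # solid 2x3 block
l_shape = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]       # L-shaped region
print(rectangularity(square))   # 1.0
print(rectangularity(l_shape))  # 5/9, well below 1
```

A preset range such as [0.8, 1.0] would then accept the solid block and reject the L-shape, matching the shape condition described above.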
According to another embodiment of the present invention, the image segmentation of the region of interest to automatically generate the mask region in step S130 may include calculating areas of the segmented different regions, respectively. When the mask region is determined based on a region among the regions that meets a preset condition, the preset condition may include that the area of the region is within a preset area range.
For example, the region of interest may be segmented into different regions by the gray-threshold method described above. The area of each region is then calculated and compared with a preset area range: a region whose area falls within the range may be determined as a mask region, and one whose area does not is not. The user may set the preset area range through the "mask minimum area" and "mask maximum area" controls in the user interface 500 shown in fig. 5, which specify the smallest and largest allowed area of a mask region, respectively. For the specific setting process, refer to the grayscale-threshold setting process, which is not repeated here for brevity. The values of "mask minimum area" and "mask maximum area" shown in fig. 5 are merely exemplary and do not limit the present application.
For panel detection, a region that is too small may not need masking at all, while a region that is too large may contain so much blind area that shielding defeats its purpose. Adding an area condition on top of the grayscale threshold therefore further improves the accuracy of the mask-region determination. Moreover, the algorithm is simple and easy to implement.
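The two preset conditions discussed above can be combined in one screening function (the grayscale condition is assumed to have been applied during segmentation; the ranges and names are illustrative, not from the patent):

```python
def passes_preset_conditions(region_pixels,
                             area_range=(20, 5000),
                             rect_range=(0.8, 1.0)):
    """Check the two preset conditions on a candidate region:
    area within `area_range` and rectangularity within `rect_range`.
    region_pixels is a list of (y, x) coordinates."""
    area = len(region_pixels)
    if not (area_range[0] <= area <= area_range[1]):
        return False
    ys = [y for y, _ in region_pixels]
    xs = [x for _, x in region_pixels]
    box_area = (max(ys) - min(ys) + 1) * (max(xs) - min(xs) + 1)
    rect = area / box_area              # rectangularity in [0, 1]
    return rect_range[0] <= rect <= rect_range[1]

block = [(y, x) for y in range(5) for x in range(10)]  # solid 5x10 block
print(passes_preset_conditions(block))       # True: area 50, rect 1.0
print(passes_preset_conditions(block[:10]))  # False: area 10 below range
```

Regions that pass the screen become candidates for the minimum enveloping parallelogram of step S132a below.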
Fig. 6 shows a schematic flowchart of step S132 of determining a mask region based on a region meeting a preset condition in different regions obtained by segmentation according to an embodiment of the present invention. As shown in fig. 6, step S132 may include the following steps.
In step S132a, a minimum enveloping parallelogram is generated for a region that meets a preset condition among the different regions obtained by the segmentation.
After selecting the regions meeting the preset conditions from the different regions obtained by the segmentation in the foregoing, the minimum envelope parallelograms of the regions can be respectively generated. The minimum envelope parallelogram may be generated for the region meeting the preset condition by any existing or future method for generating the minimum envelope parallelogram, which is not limited in this application.
Illustratively, in this step S132a, a minimum envelope rectangle of the region may be generated as needed to reduce the amount of computation.
In step S132b, a mask region is determined based on the minimum envelope parallelogram. After the minimum enveloping parallelogram is generated for the region meeting the preset condition, the region in the parallelogram can be directly determined as the mask region. Alternatively, some processing operations may be performed on the minimum envelope parallelogram to determine the processed minimum envelope parallelogram as the mask region.
If the mask region is obtained purely by image segmentation, both image noise and the accuracy of the segmentation affect it negatively. In practice, most areas that need shielding during panel inspection have a regular shape. Basing the mask region on the minimum enveloping parallelogram guarantees a regular shape, so the mask region fits the area the user wishes to shield more closely. Furthermore, the minimum enveloping parallelogram contains less interference data than a minimum enveloping rectangle drawn by hand as in the prior art, and a mask region with less interference data yields a more accurate panel detection result.
For example, the step S132b of determining the mask region based on the minimum enveloping parallelogram may include:
in step S132b1, the left side and the right side of the minimum enveloping parallelogram are respectively extended by a first predetermined size, and/or the top side and the bottom side of the minimum enveloping parallelogram are respectively extended by a second predetermined size.
It is understood that, on the one hand, the minimum enveloping parallelogram obtained from image segmentation may fail to cover the area to be shielded completely, due to segmentation accuracy or image sharpness; on the other hand, the background area surrounding a mask region usually need not be detected either. The size of the minimum enveloping parallelogram can therefore be adjusted according to the shape of the electrodes in the electrode area. For example, the left and right sides of the parallelogram may each be expanded outward by a first preset size, which may be set to any value in the range of 5-10 pixels; and/or the top and bottom edges may each be expanded outward by a second preset size, which may be set to any value in the range of 50-80 pixels. These adjustments may be performed together or separately in response to user operations. After the adjustment is completed, the parameters involved may be saved for subsequently added mask regions.
In step S132b2, the area inside the expanded parallelogram is determined as the mask area.
The size of the minimum enveloping parallelogram may be adjusted according to step S132b1 above to obtain an expanded parallelogram; the region inside the expanded parallelogram is then determined as the mask region.
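The two sub-steps above can be sketched as follows. This is a minimal numpy illustration (the vertex ordering, function name, and margin values are assumptions, not from the patent): each half-vector of the parallelogram is lengthened by the corresponding predetermined size, which pushes the paired sides outward symmetrically about the center.

```python
import numpy as np

def expand_parallelogram(vertices, lr_margin, tb_margin):
    """Extend the left/right sides of a parallelogram outward by lr_margin
    and the top/bottom sides by tb_margin (step S132b1); the returned
    vertices bound the expanded mask region (step S132b2).

    vertices: 4x2 array ordered top-left, top-right, bottom-right, bottom-left.
    """
    v = np.asarray(vertices, dtype=float)
    center = v.mean(axis=0)
    u = (v[1] - v[0]) / 2.0  # half-vector along the top edge
    w = (v[3] - v[0]) / 2.0  # half-vector along the left edge
    u = u + lr_margin * u / np.linalg.norm(u)  # push left/right sides outward
    w = w + tb_margin * w / np.linalg.norm(w)  # push top/bottom sides outward
    signs = [(-1, -1), (1, -1), (1, 1), (-1, 1)]  # TL, TR, BR, BL
    return np.array([center + su * u + sw * w for su, sw in signs])

# E.g. widen by 5 px left/right and 50 px top/bottom, matching the
# illustrative 5-10 px and 50-80 px ranges given above.
mask_vertices = expand_parallelogram([[0, 0], [100, 0], [110, 40], [10, 40]], 5, 50)
```

Because the expansion works on the parallelogram's own edge directions, it behaves identically for sheared and axis-aligned regions.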
Alternatively, the user may directly observe through the user interface whether the size or position of the generated minimum enveloping parallelogram is appropriate and, if not, manually adjust the generated mask region using an input device such as a mouse. For example, the user may even manually add one or more rectangles through the user interface and determine those rectangles as new mask regions.
In this technical solution, the characteristics of the mask region in the panel image are fully considered, and the minimum enveloping parallelogram is expanded accordingly. This further improves the accuracy of the mask-region determination and avoids introducing excessive interference data into the subsequent panel detection process.
According to another aspect of the invention, a panel detection apparatus is also provided. Fig. 7 shows a schematic block diagram of a panel detection apparatus 700 according to an embodiment of the present invention. As shown in Fig. 7, the apparatus 700 may include an image acquisition module 710, an ROI determination module 720, a mask determination module 730, and a detection module 740.
The image acquisition module 710 is configured to acquire an image of the panel to be detected.
The ROI determining module 720 is configured to determine a detection start position and a detection end position of the panel to be detected on the image, so as to use a region between the detection start position and the detection end position as the region of interest.
The mask determination module 730 is used for image segmentation of the region of interest to automatically generate the mask region.
The detection module 740 is configured to perform panel detection in an area other than the masked area in the image.
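The four modules can be sketched end to end as a minimal pipeline. The class name, the threshold value, and the placeholder defect rule below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

class PanelDetector:
    """Minimal sketch of the four-module pipeline (names are illustrative)."""

    def __init__(self, gray_threshold=128):
        self.gray_threshold = gray_threshold

    def acquire_image(self, source):
        # Image acquisition module: here simply normalizes to a uint8 array.
        return np.asarray(source, dtype=np.uint8)

    def determine_roi(self, image, start_row, end_row):
        # ROI module: the band between the detection start and end positions.
        return image[start_row:end_row, :]

    def determine_mask(self, roi):
        # Mask module: pixels brighter than the gray threshold are masked.
        return roi > self.gray_threshold

    def detect(self, roi, mask):
        # Detection module: inspect only unmasked pixels. Placeholder logic:
        # flag unmasked pixels darker than 10 as candidate defects.
        return (roi < 10) & ~mask
```

Here the ROI is a horizontal band between a detection start row and end row, the mask is a simple gray-threshold segmentation, and "detection" merely flags very dark unmasked pixels; a real implementation would substitute the actual defect criteria.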
According to another aspect of the invention, an electronic device is also provided. Fig. 8 shows a schematic block diagram of an electronic device 800 according to an embodiment of the invention. As shown in Fig. 8, the electronic device 800 may include a processor 810 and a memory 820. The memory 820 stores computer program instructions which, when executed by the processor 810, carry out the panel detection method 100 described above.
According to yet another aspect of the present invention, there is also provided a storage medium on which program instructions are stored, the program instructions being used, when executed, to perform the panel detection method 100 described above. The storage medium may include, for example, a storage component of a tablet computer, a hard disk of a personal computer, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, or any combination of the above. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
Those skilled in the art can understand the specific implementations of the panel detection apparatus, the electronic device, and the storage medium by reading the above description of the panel detection method; for brevity, details are not repeated here.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some of the blocks in a panel detection apparatus according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
The above description is merely of specific embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall be covered by the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A panel detection method, comprising:
acquiring an image of a panel to be detected;
determining a detection start position and a detection end position of the panel to be detected on the image, so as to take a region between the detection start position and the detection end position as a region of interest;
performing image segmentation on the region of interest to automatically generate a mask region;
and performing panel detection in a region other than the mask region in the image.
2. The method of claim 1, wherein the determining a detection start position and a detection end position on the image comprises:
providing a user interface, wherein the user interface includes a first operable control;
receiving, in response to an operation of the first operable control by a user, information on the detection start position and the detection end position set by the user on the image.
3. The method of claim 1 or 2, wherein the image segmenting the region of interest to automatically generate mask regions comprises:
performing image segmentation on the region of interest by using a gray threshold, so as to segment the region of interest into different regions;
determining the mask region based on a region meeting preset conditions among the different regions, wherein the preset conditions comprise that the pixel gray scale of the region is greater than the gray threshold.
4. The method of claim 3, wherein the image segmenting the region of interest to automatically generate mask regions, further comprises:
respectively calculating the rectangularity of the different regions;
the preset conditions further include: the rectangularity of the region is within a preset rectangularity range.
5. The method of claim 3, wherein the image segmenting the region of interest to automatically generate mask regions, further comprises:
respectively calculating the areas of the different regions;
the preset conditions further include: the area of the region is within a preset area range.
6. The method according to claim 3, wherein the determining the mask region based on the region meeting the preset conditions among the different regions comprises:
generating a minimum enveloping parallelogram aiming at areas meeting preset conditions in the different areas;
determining the mask region based on the minimum enveloping parallelogram.
7. The method of claim 6, wherein said determining said mask region based on said minimum envelope parallelogram comprises:
respectively extending the left side edge and the right side edge of the minimum enveloping parallelogram by a first preset size, and/or respectively extending the top edge and the bottom edge of the minimum enveloping parallelogram by a second preset size; and
and determining the area inside the expanded parallelogram as the mask area.
8. A panel detection apparatus, comprising:
the image acquisition module is used for acquiring an image of the panel to be detected;
the ROI determining module is used for determining a detection starting position and a detection ending position of the panel to be detected on the image so as to take a region between the detection starting position and the detection ending position as a region of interest;
the mask determining module is used for carrying out image segmentation on the region of interest so as to automatically generate a mask region;
and the detection module is used for detecting the panel in the area outside the mask area in the image.
9. An electronic device comprising a processor and a memory, wherein the memory has stored therein computer program instructions for execution by the processor to perform the panel detection method of any of claims 1 to 7.
10. A storage medium having stored thereon program instructions for performing, when executed, the panel detection method of any one of claims 1 to 7.
CN202111547492.2A 2021-12-16 2021-12-16 Panel detection method and device, electronic equipment and storage medium Pending CN114359178A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111547492.2A CN114359178A (en) 2021-12-16 2021-12-16 Panel detection method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114359178A 2022-04-15

Family

ID=81100123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111547492.2A Pending CN114359178A (en) 2021-12-16 2021-12-16 Panel detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114359178A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8437556B1 (en) * 2008-02-26 2013-05-07 Hrl Laboratories, Llc Shape-based object detection and localization system
CN108230321A (en) * 2018-01-19 2018-06-29 深圳市亿图视觉自动化技术有限公司 Defect inspection method and device
CN110378218A (en) * 2019-06-13 2019-10-25 大亚湾核电运营管理有限责任公司 A kind of image processing method, device and terminal device
CN113295698A (en) * 2021-04-29 2021-08-24 苏州天准软件有限公司 Defect detection method, storage medium and detection system
CN113610772A (en) * 2021-07-16 2021-11-05 广州大学 Method, system and device for detecting bottle bottom code spraying defects of pop cans and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination