CN112683786A - Object alignment method

Object alignment method

Info

Publication number
CN112683786A
Authority
CN
China
Prior art keywords
image
detection
photosensitive element
light
alignment
Prior art date
Legal status
Granted
Application number
CN201910987140.5A
Other languages
Chinese (zh)
Other versions
CN112683786B (en)
Inventor
蔡昆佑
杨博宇
Current Assignee
Mitac Computer Kunshan Co Ltd
Getac Technology Corp
Original Assignee
Mitac Computer Kunshan Co Ltd
Getac Technology Corp
Priority date
Filing date
Publication date
Application filed by Mitac Computer Kunshan Co Ltd, Getac Technology Corp
Priority to CN201910987140.5A
Publication of CN112683786A
Application granted
Publication of CN112683786B
Legal status: Active

Landscapes

  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

An object alignment method includes: detecting a plurality of first alignment structures of an object while the object rotates, wherein a plurality of second alignment structures of the object sequentially face a photosensitive element during the rotation; and, when the first alignment structures reach a predetermined pattern, stopping the rotation of the object and performing an image capturing procedure on the object. The image capturing procedure includes: capturing a test image of the object, the test image including an image block presenting the second alignment structure currently facing the photosensitive element; detecting the position at which the image block appears in the test image; capturing a detection image of the object when the image block is located in the middle of the test image; and, when the image block is not located in the middle of the test image, displacing the object in a first direction and returning to the step of capturing a test image of the object. In this way, an artificial neural network system can establish a more accurate prediction model from detection images captured at the same position, which further reduces the probability of misjudgment.

Description

Object alignment method
[ technical field ]
The present disclosure relates to an object surface inspection system, and more particularly, to an object alignment method for an object surface inspection system.
[ background of the invention ]
Defect detection is a very important part of the industrial production process: a defective product cannot be sold, and if a defective intermediate product is sold to another manufacturer for further processing, the final product may not work. One conventional defect detection method is to have an inspector observe the product with the naked eye, or touch it with the hands, to determine whether the product has defects such as pits, scratches, color differences, or missing features. However, manual inspection is inefficient and prone to erroneous determinations, so the yield of the product cannot be well controlled.
[ summary of the invention ]
In one embodiment, an object alignment method is adapted to align an object. The object alignment method includes: detecting a plurality of first alignment structures of the object while the object rotates, wherein a plurality of second alignment structures of the object sequentially face a photosensitive element during the rotation of the object; and, when the plurality of first alignment structures reach a predetermined pattern, stopping the rotation of the object and performing an image capturing procedure on the object. The image capturing procedure includes: capturing a test image of the object with the photosensitive element, wherein the test image includes an image block presenting the second alignment structure currently facing the photosensitive element; detecting the position at which the image block appears in the test image; capturing a detection image of the object with the photosensitive element when the image block appears in the middle of the test image; and, when the image block does not appear in the middle of the test image, displacing the object in a first direction and returning to the step of capturing a test image of the object with the photosensitive element.
In one embodiment, an object alignment method is adapted to align an object. The object alignment method includes: sequentially displacing a plurality of surface blocks of the object to a detection position, wherein the object has a plurality of alignment structures; capturing, with a photosensitive element, a detection image of each surface block sequentially located at the detection position, wherein the photosensitive element faces the detection position and the plurality of alignment structures are located within the field of view of the photosensitive element; stitching the plurality of detection images corresponding to the plurality of surface blocks into an object image; comparing the object image with a predetermined pattern; and adjusting the stitching order of the plurality of detection images when the object image does not match the predetermined pattern.
In summary, according to the embodiments of the object alignment method of the present disclosure, whether the object is aligned is determined by analyzing the presented pattern and presented position of specific structures of the object in the test image, so that the detection image of each surface block is captured at the same position on the aligned object. Therefore, an artificial neural network system can establish a more accurate prediction model from the detection images captured at the same position, which further reduces the probability of misjudgment.
[ description of the drawings ]
Fig. 1 is a schematic diagram of an embodiment of an object surface detection system according to the present disclosure.
FIG. 2 is a functional block diagram of an embodiment of the object surface detection system of FIG. 1.
FIG. 3 is a schematic diagram illustrating an embodiment of relative optical positions of an object, a light source module, and a photosensitive element.
FIG. 4 is a schematic diagram illustrating another embodiment of relative optical positions of an object, a light source module, and a photosensitive element.
FIG. 5 is a schematic view of an embodiment of an article.
FIG. 6 is a top view of the article of FIG. 5.
Fig. 7 is a flowchart of an embodiment of an object alignment method according to the present disclosure.
Fig. 8 is a flowchart illustrating an object alignment method according to another embodiment of the disclosure.
FIG. 9 is a diagram illustrating an embodiment of an object image.
FIG. 10 is a diagram illustrating an embodiment of detecting an image.
FIG. 11 is a flowchart of an embodiment of a test procedure.
FIG. 12 is a flow chart of another embodiment of a test procedure.
FIG. 13 is a schematic diagram illustrating another embodiment of the relative optical positions of an object, a light source module, and a photosensitive element.
FIG. 14 is a schematic view of an embodiment of a surface profile.
FIG. 15 is a schematic view of another embodiment of the relative optical positions of an object, a light source module, and a photosensitive element.
Fig. 16 is a schematic diagram of another embodiment of an object surface detection system according to the present disclosure.
Fig. 17 is a schematic diagram of another embodiment of an object surface detection system according to the present disclosure.
FIG. 18 is a schematic view of another embodiment of an object image.
[ detailed description of the embodiments ]
Referring to fig. 1, the object surface detection system is adapted to scan an object 2 to obtain at least one detection image of the object 2. In some embodiments, the surface of the object 2 may have at least one surface feature, and the corresponding detection image presents an image area of that surface feature. Herein, the surface feature is a three-dimensional microstructure whose size ranges from sub-micron to micron (μm) scale; that is, the longest side or the diameter of the three-dimensional microstructure is between sub-micrometers and micrometers. Sub-micron means less than 1 μm, for example 0.1 μm to 1 μm. For example, the three-dimensional microstructure may be a 300 nm to 6 μm structure. In some embodiments, the surface feature may be a surface structure such as a slot, crack, bump, sand hole, air hole, dent, scratch, edge, or texture.
Referring to fig. 1 to 4, the object surface detection system includes a driving assembly 11, a light source assembly 12, a photosensitive element 13 and a processor 15. The processor 15 is coupled to the driving assembly 11, the light source assembly 12 and the photosensitive element 13. The light source assembly 12 and the photosensitive element 13 face a detection position 14 on the driving assembly 11. The driving assembly 11 carries the object 2 to be inspected. The object 2 has a surface 21, and along an extending direction of the surface 21 (hereinafter referred to as a first direction D1), the surface 21 of the object 2 is divided into a plurality of surface blocks. In some embodiments, the surface 21 of the object 2 is divided into nine surface blocks, three of which (21A-21C) are exemplarily shown in the figures. However, the present application is not limited thereto, and the surface 21 of the object 2 can be divided into another number of surface blocks according to actual requirements, such as 3, 5, 11, 15, or 20 blocks, or any other number.
In some embodiments, referring to fig. 5 and 6, the object 2 includes a body 201, a plurality of first alignment structures 202, and a plurality of second alignment structures 203. The first alignment structures 202 are located at one end of the body 201, and the second alignment structures 203 are located at the other end of the body 201. In some embodiments, each first alignment structure 202 can be a post, a bump, a slot, or the like, and each second alignment structure 203 can likewise be a post, a bump, a slot, or the like. In some embodiments, the second alignment structures 203 are spaced along the extending direction of the surface 21 of the body 201 (i.e., the first direction D1), and the spacing between any two adjacent second alignment structures 203 is greater than or equal to the field of view of the photosensitive element 13. In some embodiments, the second alignment structures 203 correspond to the surface blocks 21A-21C of the object 2, respectively, and each second alignment structure 203 is aligned, along the first direction D1, with the middle of the side of its corresponding surface block.
The first alignment structure 202 is a post (hereinafter referred to as an alignment post) and the second alignment structure 203 is a slot (hereinafter referred to as an alignment slot). In some embodiments, the extending direction of each alignment pillar is substantially the same as the extending direction of the body 201, and one end of each alignment pillar is coupled to one end of the body 201. The alignment slot is located at the other end of the body 201, and surrounds the body 201 with the long axis of the body 201 as the rotation axis and is disposed on the surface of the other end of the body 201 at intervals.
In some embodiments, the first alignment structures 202 are spaced apart on the body 201. In the present exemplary embodiment, three first alignment structures 202 are taken as an example, but this number is not a limitation of the present invention. When the body 201 is viewed from the side, the first alignment structures 202 assume different relative positions as the body 201 rotates about its long axis, for example: the first alignment structures 202 are spaced apart and do not overlap (as shown in fig. 6), or two of the first alignment structures 202 overlap while the remaining one does not, and so on.
Referring to fig. 1 to 8, the object surface inspection system can execute an image capturing procedure. In the image capturing process, the object 2 is supported on the driving assembly 11, and one of the surface blocks 21A-21C of the object 2 is substantially located at the detection position 14. In this way, before capturing an image, the object surface inspection system performs a positioning operation (i.e. fine-tuning the position of the object 2) to align the surface area with the viewing angle of the photosensitive element 13.
In the image capturing process, the processor 15 controls the photosensitive element 13 to capture a test image of the object 2 under the illumination of the light source assembly 12 (step S11). Here, the test image includes an image block representing the second alignment structure 203 currently facing the photosensitive element 13.
The processor 15 detects the position of the image block of the test image representing the second alignment structure 203 (step S12) to determine whether the surface block currently located at the detection position 14 is aligned with the viewing angle of the photosensitive element 13.
When the position of the image block is not located in the middle of the test image, the processor 15 controls the driving assembly 11 to fine-tune the position of the object 2 in the first direction D1 (step S13), and returns to perform step S11. Steps S11 to S13 are repeated until the processor 15 detects that the image block appears in the middle of the test image.
When the image block appears in the middle of the test image, the processor 15 drives the photosensitive element 13 to capture an image; at this time, the photosensitive element 13 captures the detection image of the surface block of the object 2 under the illumination of the light source assembly 12 (step S14).
Next, the processor 15 controls the driving assembly 11 to displace the next surface block of the object 2 to the detection position 14 in the first direction D1, so that the next second alignment structure 203 faces the photosensitive element 13 (step S15), and returns to perform step S11. Steps S11 to S15 are repeated until the detection images of all the surface blocks of the object 2 have been captured. In some embodiments, the amount by which the driving assembly 11 fine-tunes the position of the object 2 (step S13) is smaller than the amount by which it displaces the object 2 to bring the next surface block to the detection position 14 (step S15).
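For illustration, the capture loop of steps S11-S15 can be sketched roughly as follows. This is a minimal sketch, not the patent's implementation: capture_test_image(), locate_alignment_block(), fine_tune_position(), capture_detection_image() and move_next_block_to_detection_position() are hypothetical interfaces to the photosensitive element 13 and the driving assembly 11, and the test image is assumed to be a NumPy-style array.

```python
# Hypothetical sketch of the image capturing procedure (steps S11-S15).
def capture_all_surface_blocks(num_blocks, tolerance_px=5):
    detection_images = []
    for i in range(num_blocks):
        # Steps S11-S13: re-capture and fine-tune until the image block of the
        # second alignment structure 203 is centered in the test image.
        while True:
            test_image = capture_test_image()                      # step S11
            block_x = locate_alignment_block(test_image)           # step S12
            if abs(block_x - test_image.shape[1] / 2) <= tolerance_px:
                break
            fine_tune_position(direction="D1")                     # step S13
        detection_images.append(capture_detection_image())         # step S14
        if i < num_blocks - 1:
            move_next_block_to_detection_position(direction="D1")  # step S15
    return detection_images
```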
For example, assume that the object 2 has three surface blocks and that the photosensitive element 13 faces the surface block 21A of the object 2 when the image capturing procedure starts. Under the illumination of the light source assembly 12, the photosensitive element 13 first captures a test image (hereinafter referred to as a first test image) of the object 2. The first test image includes an image block (hereinafter referred to as a first image block) presenting the second alignment structure 203 corresponding to the surface block 21A. The processor 15 then performs image analysis on the first test image to detect the position of the first image block in the first test image. When the position of the first image block is not located in the middle of the first test image, the driving assembly 11 fine-tunes the position of the object 2 in the first direction D1. After the fine adjustment, the photosensitive element 13 captures the first test image again, and the processor 15 determines again whether the first image block is located in the middle of the first test image. Conversely, when the first image block is located in the middle of the first test image, the photosensitive element 13 captures the detection image of the surface block 21A of the object 2 under the illumination of the light source assembly 12.

After the capturing, the driving assembly 11 displaces the next surface block 21B of the object 2 to the detection position 14 in the first direction D1, so that the second alignment structure 203 corresponding to the surface block 21B faces the photosensitive element 13. Then, under the illumination of the light source assembly 12, the photosensitive element 13 captures a test image (hereinafter referred to as a second test image) of the object 2, and the second test image includes an image block (hereinafter referred to as a second image block) presenting the second alignment structure 203 corresponding to the surface block 21B. The processor 15 then performs image analysis on the second test image to detect the position of the second image block in the second test image. When the position of the second image block is not located in the middle of the second test image, the driving assembly 11 fine-tunes the position of the object 2 in the first direction D1. After the fine adjustment, the photosensitive element 13 captures the second test image again, and the processor 15 determines again whether the second image block is located in the middle of the second test image. Conversely, when the second image block is located in the middle of the second test image, the photosensitive element 13 captures the detection image of the surface block 21B of the object 2 under the illumination of the light source assembly 12.

After the capturing, the driving assembly 11 further displaces the next surface block 21C of the object 2 to the detection position 14 in the first direction D1, so that the second alignment structure 203 corresponding to the surface block 21C faces the photosensitive element 13. Then, under the illumination of the light source assembly 12, the photosensitive element 13 captures a test image (hereinafter referred to as a third test image) of the object 2, and the third test image includes an image block (hereinafter referred to as a third image block) presenting the second alignment structure 203 corresponding to the surface block 21C. The processor 15 then performs image analysis on the third test image to detect the position of the third image block in the third test image. When the position of the third image block is not located in the middle of the third test image, the driving assembly 11 fine-tunes the position of the object 2 in the first direction D1. After the fine adjustment, the photosensitive element 13 captures the third test image again, and the processor 15 determines again whether the third image block is located in the middle of the third test image. Conversely, when the third image block is located in the middle of the third test image, the photosensitive element 13 captures the detection image of the surface block 21C of the object 2 under the illumination of the light source assembly 12.
In some embodiments, when the object surface inspection system needs to capture images of an object 2 under two different sets of image capturing parameters, the system performs the image capturing procedure sequentially with each set of parameters. The different image capturing parameters may be, for example, the light source assembly 12 providing light L1 with different brightness, the light source assembly 12 illuminating at different light incident angles, or the light source assembly 12 providing light L1 with different spectra.
In some embodiments, referring to fig. 7, after capturing the detection images of all the surface blocks 21A-21C of the object 2, the processor 15 stitches the detection images corresponding to all the surface blocks 21A-21C of the object 2 into an object image according to the capturing order (step S21), and compares the stitched object image with a predetermined pattern (step S22). When the object image does not match the predetermined pattern, the processor 15 adjusts the stitching order of the detection images (step S23), and after the adjustment compares the object image with the predetermined pattern again (step S22). Conversely, when the object image matches the predetermined pattern, the processor 15 obtains the object image of the object 2.
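A rough sketch of steps S21-S23 follows. The helper functions stitch() and matches_pattern() are assumed, and the patent does not specify how the stitching order is adjusted; trying cyclic shifts of the capture order is used here purely as an example.

```python
# Hypothetical sketch of steps S21-S23: stitch in capture order, compare against
# the predetermined pattern, and adjust the stitching order until they match.
def build_object_image(detection_images, predetermined_pattern):
    n = len(detection_images)
    for shift in range(n):                       # shift 0 = original capture order
        order = [(i + shift) % n for i in range(n)]
        object_image = stitch([detection_images[i] for i in order])   # step S21
        if matches_pattern(object_image, predetermined_pattern):      # step S22
            return object_image
        # step S23: adjust the stitching order and compare again
    raise RuntimeError("no stitching order matches the predetermined pattern")
```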
In some embodiments, the object surface detection system may also perform an alignment procedure. After the object 2 is placed on the driving assembly 11, the object surface detection system performs the alignment procedure to align the object, so as to determine the position at which image capturing of the object 2 starts.
Referring to fig. 8, in the alignment procedure, the driving component 11 continuously rotates the object 2, and the processor 15 detects the first alignment structure 202 of the object 2 through the photosensitive element 13 while the object 2 rotates (step S01) to determine whether the first alignment structure 202 is of a predetermined type. In this way, during the rotation of the object 2, the second alignment structures 203 of the object 2 sequentially face the photosensitive elements 13.
In some embodiments, the predetermined pattern may be a relative position of the first alignment structure 202 and/or a luminance relationship of an image block of the first alignment structure 202.
In an exemplary embodiment, the photosensitive element 13 continuously captures a detection image of the object 2 while the object 2 rotates, and the detection image includes image blocks representing the first alignment structures 202. The processor 15 analyzes each detection image to determine the relative positions of the image blocks of the first alignment structures 202 in the detection image and/or the brightness relationship of the image blocks of the first alignment structures 202 in the detection image. For example, the processor 15 analyzes the detection image and finds that the image blocks of the first alignment structures 202 are spaced from each other and do not overlap, and that the brightness of the middle image block among the image blocks of the first alignment structures 202 is higher than the brightness of the image blocks on the two sides; at this time, the processor 15 determines that the first alignment structures 202 present the predetermined pattern. In other words, the predetermined pattern can be defined by the image characteristics of specific structures of the object 2.
When the first alignment structures 202 reach the predetermined pattern, the processor 15 stops the rotation of the object (step S02) and performs the image capturing procedure on the object; that is, the processor 15 controls the driving assembly 11 to stop rotating the object 2. Otherwise, the detection image continues to be captured, and the positions and/or states of the image blocks of the first alignment structures 202 continue to be analyzed.
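The alignment procedure (steps S01-S02) can be sketched roughly as below, assuming hypothetical interfaces start_rotation(), capture_image(), analyze_first_alignment_blocks(), blocks_overlap() and stop_rotation(); the hard-coded pattern check merely restates the example pattern described above.

```python
# Hypothetical sketch of the alignment procedure (steps S01-S02).
def align_object():
    start_rotation()                          # driving assembly 11 rotates object 2
    while True:
        image = capture_image()               # step S01
        blocks = analyze_first_alignment_blocks(image)   # blocks sorted left to right
        # Example predetermined pattern from the text: three non-overlapping image
        # blocks whose middle block is brighter than the blocks on the two sides.
        if (len(blocks) == 3 and not blocks_overlap(blocks)
                and blocks[1].brightness > blocks[0].brightness
                and blocks[1].brightness > blocks[2].brightness):
            break
    stop_rotation()                           # step S02, then run steps S11-S15
```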
In some embodiments, when the object surface inspection system has an alignment procedure, after acquiring the inspection images of all the surface areas 21A-21C of the object 2, the processor 15 can stitch the acquired inspection images into the object image of the object 2 in the acquisition order (step S31).
For example, taking the spindle shown in fig. 5 and 6 as an example, after the object surface inspection system performs the image capturing process (i.e., repeatedly performs steps S11 to S15), the photosensitive element 13 can capture the inspection image MB of all the surface blocks 21A to 21C. Here, the processor 15 can stitch the detected images MB of all the surface areas 21A to 21C into the object image IM of the object 2 in the capturing order, as shown in fig. 9. In this example, the photosensitive element 13 may be a linear photosensitive element. At this time, the detection image MB captured by the photosensitive element 13 can be spliced by the processor 15 without being cut. In some embodiments, the line type photosensitive element may be implemented by a line (linear) type image sensor. Wherein the line image sensor can have a field of view (FOV) of approximately 0 degree.
In another embodiment, the photosensitive element 13 is a two-dimensional photosensitive element. In this case, when the photosensitive element 13 captures the detection image MB of the surface blocks 21A-21C, the processor 15 extracts a middle area MBc of the detection image MB along the short side of the detection image MB, as shown in fig. 10. The processor 15 then stitches the middle areas MBc corresponding to all the surface blocks 21A-21C into the object image IM. In some embodiments, the middle area MBc may have a width of, for example, one pixel. In some embodiments, the two-dimensional photosensitive element may be implemented by an area image sensor, where the area image sensor has a field of view of about 5 to 30 degrees.
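As a rough illustration of this stitching for a two-dimensional photosensitive element, the sketch below assumes NumPy arrays for the detection images, an illustrative strip width, and an assumed image orientation; none of these names come from the patent.

```python
# Hypothetical sketch: take a narrow middle strip MBc from each detection image
# MB and concatenate the strips into the object image IM.
import numpy as np

def stitch_object_image(detection_images, strip_width=1):
    strips = []
    for mb in detection_images:                          # images in capture order
        mid = mb.shape[1] // 2                           # center along the short side (assumed axis)
        start = mid - strip_width // 2
        strips.append(mb[:, start:start + strip_width])  # middle area MBc
    return np.concatenate(strips, axis=1)                # object image IM
```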
In some embodiments, the object surface inspection system may further perform a testing procedure. In other words, before the alignment procedure and the image capturing procedure are performed, the object surface inspection system may first perform the testing procedure to confirm that the components (such as the driving assembly 11, the light source assembly 12, and the photosensitive element 13) are operating normally.
In the testing procedure, referring to fig. 11, the photosensitive element 13 captures a test image under the illumination of the light source assembly 12 (step S41). The processor 15 receives the test image captured by the photosensitive element 13 and analyzes it (step S42) to determine whether the test image is normal (step S43), and accordingly determines whether the test is passed. If the test image is normal (determination result is "yes"), it means that the photosensitive element 13 will be able to capture normal detection images in the subsequent procedures, and the object surface detection system continues with the alignment procedure (step S01) or the image capturing procedure (step S11).
If the test image is abnormal (determination result is "no"), the object surface inspection system may perform a calibration procedure (step S45).
In some embodiments, referring to fig. 1 and 2, the object surface inspection system may further include a light source adjustment assembly 16, and the light source adjustment assembly 16 is coupled to the light source assembly 12. Herein, the light source adjusting assembly 16 can be used to adjust the position of the light source assembly 12 to change the light incident angle θ.
In an example, referring to fig. 1, 2 and 11, the photosensitive element 13 can capture a surface area currently located at the detection position 14 as a test image (step S41). At this time, the processor 15 analyzes the test image (step S42) to determine whether the average brightness of the test image matches a predetermined brightness to determine whether the test image is normal (step S43). If the average brightness of the test image does not conform to the predetermined brightness (the determination result is "no"), it indicates that the test image is abnormal. For example, when the light incident angle θ of the light source module 12 is not appropriate, the average brightness of the test image will not meet the preset brightness; at this time, the test image may not correctly represent the predetermined surface type of the object 2 to be detected.
In the calibration procedure, the processor 15 controls the light source adjusting assembly 16 to readjust the position of the light source assembly 12 so as to reset the light incident angle θ (step S45). After the light source adjusting assembly 16 readjusts the position of the light source assembly 12 (step S45), the light source assembly 12 emits another test light with a different light incident angle θ. The processor 15 then controls the photosensitive element 13 to capture an image of the surface block currently located at the detection position 14 under this other test light (step S41) to generate another test image, and the processor 15 analyzes the other test image (step S42) to determine whether its average brightness matches the predetermined brightness (step S43). If the average brightness of the other test image does not match the predetermined brightness (determination result is "no"), the processor 15 controls the light source adjusting assembly 16 to readjust the position of the light source assembly 12 and thereby the light incident angle θ (step S45), until the average brightness of the test image captured by the photosensitive element 13 matches the predetermined brightness. When the average brightness of the test image matches the predetermined brightness (determination result is "yes"), the object surface detection system then performs step S01 or S11 to carry out the aforementioned alignment procedure or image capturing procedure.
In another embodiment, referring to fig. 1, fig. 2 and fig. 12, the processor 15 may also determine whether the setting parameters of the photosensitive element 13 are normal according to whether the test image is normal (step S43). If the test image is normal (determination result is "yes"), indicating that the setting parameters of the photosensitive element 13 are normal, the object surface detection system then performs step S01 or S11 to carry out the aforementioned alignment procedure or image capturing procedure. If the test image is abnormal (determination result is "no"), indicating that the setting parameters of the photosensitive element 13 are abnormal, the processor 15 further determines whether the photosensitive element 13 has already performed the calibration operation of the setting parameters (step S44). If the photosensitive element 13 has already performed the calibration operation of the setting parameters (determination result is "yes"), the processor 15 generates a warning signal indicating an abnormality of the photosensitive element 13 (step S46). If the photosensitive element 13 has not yet performed the calibration operation of the setting parameters (determination result is "no"), the object surface detection system proceeds to the calibration procedure (step S45), in which the processor 15 drives the photosensitive element 13 to perform the calibration operation of the setting parameters. After the photosensitive element 13 performs the calibration operation (step S45), the photosensitive element 13 captures another test image (step S41), and the processor 15 then determines whether this other test image, captured after the calibration operation, is normal (step S43). If the processor 15 determines that the other test image is still abnormal (determination result is "no"), the processor 15 determines in step S44 that the photosensitive element 13 has already performed the calibration operation (determination result is "yes"), and the processor 15 generates a warning signal indicating an abnormality of the photosensitive element 13 (step S46).
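As a rough sketch of this flow (FIG. 12), assuming hypothetical helpers capture_test_image(), image_is_normal(), calibrate_sensor() and raise_alarm() that are not named in the patent:

```python
# Hypothetical sketch of the testing/calibration flow: capture a test image,
# check it, calibrate the setting parameters of the photosensitive element 13
# at most once, and raise an alarm if the test image is still abnormal.
def run_test_procedure():
    calibrated = False
    while True:
        test_image = capture_test_image()        # step S41
        if image_is_normal(test_image):          # steps S42-S43
            return True                          # continue with step S01 or S11
        if calibrated:                           # step S44: already calibrated once
            raise_alarm("photosensitive element abnormal")   # step S46
            return False
        calibrate_sensor()                       # step S45: adjust setting parameters
        calibrated = True
```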
In some embodiments, the setting parameters of the photosensitive element 13 include a photosensitivity value, an exposure value, a focal length value, a contrast setting value, or any combination thereof. In some embodiments, the processor 15 may determine whether the average brightness or the contrast of the test image matches the predetermined brightness, so as to determine whether the setting parameters are normal. For example, if the average brightness or contrast of the test image does not match the predetermined brightness, it indicates that at least one of the setting parameters of the photosensitive element 13 is wrong; if the average brightness or contrast of the test image matches the predetermined brightness, it indicates that each setting parameter of the photosensitive element 13 is correct.
In an embodiment, the object surface detection system may further include an audio/video display unit, the warning signal may include an image, a sound, or both, and the audio/video display unit may display the warning signal. Moreover, the object surface detection system may also have a network function, and the processor 15 may send the warning signal to the cloud for storage through the network function, or send the warning signal to other devices through the network function, so that a user at the cloud or other devices may know that the photosensitive element 13 is abnormal, and then perform a debugging operation on the photosensitive element 13.
In one embodiment, in the calibration process (step S45), the photosensitive element 13 automatically adjusts the setting parameters according to a parameter setting file. Herein, the parameter setting file stores setting parameters of the photosensitive element 13. In some embodiments, the inspector updates the parameter setting file through the user interface of the object surface inspection system, so that the photosensitive element 13 automatically adjusts the setting parameters according to the updated parameter setting file in the calibration procedure, so as to correct the wrong setting parameters.
In the above embodiments, when an image (i.e., the test image or the detection image) is captured by the photosensitive element 13, the light source assembly 12 emits a light L1 toward the detection position 14, and the light L1 obliquely or laterally irradiates the surface block currently located at the detection position 14.
Referring to fig. 3 and 4, the incident direction of the light L1 forms an angle (hereinafter referred to as the light incident angle θ) with the normal line 14A of the surface block at the detection position 14. That is, at the light incident end, the angle between the optical axis of the light L1 and the normal line 14A is the light incident angle θ. In some embodiments, the light incident angle θ is greater than 0 degrees and less than or equal to 90 degrees, i.e., the detection light L1 illuminates the detection position 14 at a light incident angle θ greater than 0 degrees and less than or equal to 90 degrees with respect to the normal line 14A, so that the surface block currently located at the detection position 14 is illuminated by the detection light L1 from a lateral or oblique direction.
In some embodiments, as shown in fig. 3 and 4, the photosensitive axis 13A of the photosensitive element 13 is parallel to the normal line 14A; alternatively, as shown in fig. 13, the photosensitive axis 13A of the photosensitive element 13 lies between the normal line 14A and the first direction D1, i.e., the photosensitive axis 13A forms an included angle α with the normal line 14A. The photosensitive element 13 receives the diffused light generated when the surface blocks 21A-21C are illuminated by the light L1, and captures, according to the diffused light, the detection images of the surface blocks 21A-21C sequentially located at the detection position 14 (step S14).
In some embodiments, if the surface 21 of the object 2 includes a groove-like or hole-like surface structure, then with a light incident angle θ greater than 0 degrees and less than or equal to 90 degrees, i.e., with the light L1 incident laterally or obliquely, the light L1 does not reach the bottom of the surface structure, and the surface structure appears shaded in the detection image of the surface blocks 21A-21C, so that a detection image with sharp contrast between the surface 21 and the surface defect is formed. Thus, the object surface inspection system or an inspector can determine whether the surface 21 of the object 2 has defects by checking whether the detection image has shadows.
In some embodiments, surface structures with different depths exhibit different brightness in the detection image under different light incident angles θ. In detail, as shown in fig. 4, when the light incident angle θ is equal to 90 degrees, the incident direction of the light L1 is perpendicular to the depth direction of the surface defect, i.e., the optical axis of the light L1 overlaps the tangent of the surface at the center of the detection position. In this case, regardless of the depth of the surface structure, the surface structure on the surface 21 does not generate reflected light or diffused light because its recess is not irradiated by the light L1, so both deeper and shallower surface structures show a shadow in the detection image, i.e., the detection image has poor contrast, or approaches no contrast. As shown in fig. 3, when the light incident angle θ is less than 90 degrees, the incident direction of the detection light L1 is not perpendicular to the depth direction of the surface structure. In this case, the light L1 irradiates a partial region of the surface structure below the surface 21, and that partial region generates reflected light and diffused light when irradiated, so the photosensitive element 13 receives the reflected light and diffused light from the partial region of the surface structure, and the surface structure presents an image with a brighter boundary (e.g., the boundary of a raised defect) or a darker boundary (e.g., the boundary of a recessed defect) in the detection image, i.e., the detection image has better contrast.
Also, for the same light incident angle θ smaller than 90 degrees, the photosensitive element 13 receives more reflected light and diffused light from a shallower surface structure than from a deeper one. Therefore, a shallower surface structure appears brighter in the detection image than a surface structure with a larger depth-to-width ratio. Further, when the light incident angle θ is smaller than 90 degrees, a smaller light incident angle θ causes more reflected light and diffused light to be generated in the surface structure region, so the surface structure presents a brighter image in the detection image, and the brightness of a shallower surface structure in the detection image is also greater than that of a deeper surface structure. For example, compared with the detection image corresponding to a light incident angle θ of 60 degrees, the surface structure exhibits higher brightness in the detection image corresponding to a light incident angle θ of 30 degrees; and in the detection image corresponding to the light incident angle θ of 30 degrees, a shallower surface structure exhibits higher brightness than a deeper surface structure.
Therefore, the light incident angle θ and the brightness with which a surface structure is presented in the detection image have a negative correlation. The smaller the light incident angle θ, the brighter a shallow surface structure appears in the detection image, i.e., a shallow surface structure is harder for the object surface detection system or the inspector to recognize when the light incident angle θ is small. In other words, the object surface detection system or the inspector can more easily recognize the deeper surface structures from their darker images. On the other hand, if the light incident angle θ is larger, both shallow and deep surface structures appear darker in the detection image, i.e., the object surface detection system or the inspector can identify all of the surface structures when the light incident angle θ is large.
Therefore, the object surface detection system or the inspector can set a corresponding light incident angle θ, using the above-mentioned negative correlation, according to the predetermined hole depth of the predetermined surface structure to be detected. For example, if a deeper predetermined surface defect is to be detected while a shallower predetermined surface structure is not to be detected, the light source adjusting assembly 16 may adjust the position of the light source assembly 12, according to the light incident angle calculated from the above-mentioned negative correlation, to set a smaller light incident angle θ, and then drive the light source assembly 12 to output the detection light L1, so that the shallow predetermined surface defect appears as a brighter image in the detection image and the deeper predetermined surface structure appears as a darker image in the detection image. If the shallow and the deeper predetermined surface defects are to be detected together, the light source adjusting assembly 16 can adjust the position of the light source assembly 12, according to the light incident angle calculated from the above-mentioned negative correlation, to set a larger light incident angle θ (e.g., 90 degrees), and then drive the light source assembly 12 to output the detection light L1, so that both the shallow and the deep predetermined surface structures are shaded in the detection image.
For example, if the object 2 is a spindle of a safety belt assembly applied to an automobile, the surface structure may be a sand hole or an air hole, or a bump mark or a scratch, caused by sand, dust, or air during the manufacture of the object 2, where the depth of a sand hole or an air hole is greater than that of a bump mark or a scratch. If sand holes or air holes of the object 2 are to be detected but bump marks or scratches are not, the light source adjusting assembly 16 can adjust the position of the light source assembly 12, according to the light incident angle calculated from the above-mentioned negative correlation, to set a smaller light incident angle θ, so that sand holes or air holes have lower brightness in the detection image while bump marks or scratches have higher brightness in the detection image, and the object surface detection system or the inspector can quickly identify whether the object 2 has sand holes or air holes. If bump marks, scratches, sand holes and air holes of the object 2 are all to be detected, the light source adjusting assembly 16 can adjust the position of the light source assembly 12, according to the light incident angle calculated from the above-mentioned negative correlation, to set a larger light incident angle θ, so that bump marks, scratches, sand holes and air holes all present shadows in the detection image.
In one embodiment, the light incident angle θ is related to a predetermined depth ratio of the predetermined surface defect to be detected. Referring to fig. 14, taking a predetermined surface defect having a predetermined hole depth d and a predetermined hole radius r as an example, the predetermined hole radius r is the distance between any side surface of the predetermined surface defect and the normal line 14A, the ratio (r/d) between the predetermined hole radius r and the predetermined hole depth d is the depth ratio, and the light incident angle θ is the arctangent of (r/d). Accordingly, the light source adjusting assembly 16 may adjust the position of the light source assembly 12 according to the depth ratio (r/d) of the predetermined surface defect to be detected to set the light incident angle θ in step S03. Here, the light incident angle θ should be greater than or equal to the arctangent of (r/d) and less than or equal to 90 degrees, so as to obtain the best target feature extraction effect at the wavelength to be detected. After adjusting the position of the light source assembly 12, the light source adjusting assembly 16 drives the light source assembly 12 to output the detection light L1. In some embodiments, the predetermined hole radius r may be predetermined according to the size of the surface structure of the object 2 to be detected.
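Written out, the constraint described above is (with r, d, and θ as defined in this paragraph; the numeric example is illustrative only and not taken from the patent):

```latex
% Depth ratio and the admissible range of the light incident angle:
\frac{r}{d} = \text{depth ratio}, \qquad
\arctan\!\left(\frac{r}{d}\right) \;\le\; \theta \;\le\; 90^{\circ}
% Illustrative example: r = 5~\mu\mathrm{m},\ d = 20~\mu\mathrm{m}
% gives \arctan(5/20) \approx 14^{\circ}, so \theta may be set anywhere from
% about 14 degrees up to 90 degrees to keep the hole bottom in shadow.
```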
In one embodiment, the processor 15 can calculate the light incident angle θ according to the above-mentioned negative correlation and the arctangent of (r/d), and the processor 15 then drives the light source adjusting assembly 16 to adjust the position of the light source assembly 12 according to the calculated light incident angle θ.
In some embodiments, the light source module 12 may provide the light ray L1 with a wavelength between 300nm and 3000 nm. For example, the light wavelength value of the light L1 can be in the light band of 300nm-600nm, 600nm-900nm, 900nm-1200nm, 1200nm-1500nm, 1500-1800nm, or 1800nm-2100 nm. In an exemplary embodiment, the light L1 provided by the light source module 12 can be visible light. Here, the light L1 can image surface defects on the order of μm on the surface 21 in the inspection image. In some embodiments, the light L1 may have a wavelength ranging from 380nm to 780 nm. In some embodiments, the light L1 can be any one of visible light such as white light, violet light, blue light, green light, yellow light, orange light, and red light. In one embodiment, the wavelength of the white light may be 380nm to 780nm, the wavelength of the violet light may be 380nm to 450nm, the wavelength of the blue light may be 450nm to 495nm, the wavelength of the green light may be 495nm to 570nm, the wavelength of the yellow light may be 570nm to 590nm, the wavelength of the orange light may be 590nm to 620nm, and the wavelength of the red light may be 620nm to 780 nm.
In some embodiments, the light L1 provided by the light source assembly 12 can be far infrared light (e.g., having a light wavelength in the range of 800 nm to 3000 nm). In this way, the light L1 can image, in the detection image, surface features of the surface of the object 2 at the sub-micron (e.g., 300 nm) level. Herein, when an object 2 having a surface attachment is obliquely irradiated with the far infrared light provided by the light source assembly 12, the far infrared light can penetrate the attachment to reach the surface of the object 2, so that the photosensitive element 13 can capture the image of the surface of the object 2 under the attachment. In other words, the far infrared light can penetrate the surface attachment of the object 2, so that the photosensitive element 13 can acquire an image of the surface 21 of the object 2. In some embodiments, the far infrared light has a light wavelength greater than 2 μm. In some embodiments, the far infrared light has a light wavelength greater than the thickness of the attachment; preferably, the wavelength of the far infrared light is greater than 3.5 μm. In some embodiments, the object 2 is preferably made of metal. In some embodiments, the attachment can be an oil stain, colored paint, or the like. In one example, the wavelength of the far infrared light can be adjusted according to the thickness of the attachment to be penetrated. In addition, the wavelength of the far infrared light can be adjusted according to the surface features of the object 2 to be measured, so as to perform image filtering of micrometer (μm) scale structures. For example, if the sample surface has 1 μm to 3 μm fine traces or sand holes, but such features do not affect product quality and the quality manager is interested in structural defects of 10 μm or more, the wavelength of the far infrared light L1 is chosen at an intermediate wavelength (e.g., 4 μm) to obtain the best image microstructure filtering effect and low-noise image quality without affecting the detection of larger-scale defects.
In some embodiments, the light source module 12 can have a wider light band, and the image scanning system further sets a light splitting element (not shown) that allows a specific light band to pass through the light path to generate the light L1 (or the reflected light of the light L1) with a desired light wavelength value.
In one embodiment, the processor 15 can drive the light source adjusting assembly 16 to adjust the light intensity of the far infrared light L1 emitted by the light source assembly 12 to reduce glare, so as to improve the quality of the detection image captured by the photosensitive element 13 and thereby obtain a low-disturbance penetrating image. For example, the light source adjusting assembly 16 can reduce the light intensity so that the photosensitive element 13 obtains a detection image with less glare.

In another embodiment, surface defects with different depths have different brightness in the detection image under different light incident angles θ, and the intensity of the glare generated by the far infrared light L1 varies accordingly. In other words, the processor 15 can drive the light source adjusting assembly 16 to adjust the light incident angle θ of the far infrared light L1 emitted by the light source assembly 12, so as to effectively reduce glare and further improve the quality of the detection image captured by the photosensitive element 13, thereby obtaining a low-disturbance penetrating image.

In another embodiment, the light source adjusting assembly 16 can determine the polarization direction of the far infrared light L1 emitted by the light source assembly 12, i.e., control the light source assembly 12 to output polarized far infrared detection light L1, so as to effectively reduce glare and further improve the quality of the detection image captured by the photosensitive element 13, thereby obtaining a low-disturbance penetrating image.
In some embodiments, referring to fig. 15, the object surface detection system may further include a polarizer 17. The polarizing plate 17 is located on the optical axis 13A of the photosensitive element 13 and is disposed between the photosensitive element 13 and the detection position 14. Herein, the photosensitive element 13 captures an image of the surface of the object 2 through the polarizer 17, and the polarizer 17 performs polarization filtering to effectively avoid saturation glare caused by strong infrared light to the photosensitive element 13, thereby improving the quality of the detected image captured by the photosensitive element 13 and obtaining a low-disturbance through image.
In one embodiment, as shown in FIG. 1, the object 2 has a cylindrical shape, such as a spindle; that is, the body 201 of the object 2 is cylindrical. Herein, the surface 21 of the object 2 may be the side surface of the body 201 of the object 2, i.e., the surface 21 is a cylindrical surface spanning a radian of 2π. Here, the first direction D1 may be a clockwise or counterclockwise direction about the long axis of the body of the object 2 as the rotation axis. In some embodiments, one end of the object 2 is narrower than the other end. In one example, the supporting element 111 may be two rollers spaced apart by a predetermined distance, and the driving motor 112 is coupled to the rotating shafts of the two rollers. Here, the predetermined distance is smaller than the diameter of the object 2 (the minimum diameter of the body), so the object 2 can be movably disposed between the two rollers. When the driving motor 112 rotates the two rollers, the object 2 is driven by the surface friction between the object 2 and the two rollers and thus rotates along the first direction D1 of the surface 21, so as to align a surface block with the detection position 14. In another example, the supporting element 111 can be a shaft, and the driving motor 112 is coupled to one end of the shaft, while the other end of the shaft has an engaging portion (such as an insertion hole) into which the object 2 can be removably fitted. When the driving motor 112 rotates the shaft, the object 2 is driven by the shaft to rotate along the first direction D1 of the surface 21, so that a surface block is aligned with the detection position 14. In some embodiments, taking the surface 21 divided into nine surface blocks 21A-21C as an example, the driving motor 112 drives the supporting element 111 to rotate 40 degrees at a time, so as to drive the object 2 to rotate 40 degrees along the first direction D1 of the surface 21. In some embodiments, the angle by which the driving motor 112 rotates to fine-tune the position of the object 2 in step S13 is smaller than the angle by which the driving motor 112 rotates to displace the next surface block to the detection position 14 in step S15.
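The rotation amounts involved can be illustrated with a small sketch; the 9-block/40-degree figures restate the example above, while the fine-tuning fraction is an arbitrary illustrative value, not taken from the patent.

```python
# Illustrative arithmetic for a cylindrical object whose surface 21 is divided
# into equal surface blocks: the coarse rotation of step S15 spans one block,
# while the fine-tuning rotation of step S13 is much smaller.
def rotation_steps(num_blocks=9, fine_tune_fraction=0.05):
    coarse_step_deg = 360.0 / num_blocks                   # 9 blocks -> 40 degrees per block
    fine_step_deg = coarse_step_deg * fine_tune_fraction   # assumed small fraction of a block
    return coarse_step_deg, fine_step_deg

print(rotation_steps())  # (40.0, 2.0)
```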
In one embodiment, as shown in FIG. 16, the object 2 is plate-shaped. I.e. the body 201 of the object 2 has a plane. The surface 21 of the object 2 (i.e. the plane of the body 201) may be a non-curved surface having a curvature equal to or approaching zero. Here, the first direction D1 may be an extending direction of any side length (e.g., a long side) of the surface 21 of the object 2. In an exemplary embodiment, the supporting element 111 can be a planar supporting board, and the driving motor 112 is coupled to a side of the planar supporting board. At this time, the article 2 may be removably disposed on the flat carrier plate in the inspection process. The driving motor 112 drives the planar carrying board to move along the first direction D1 of the surface 21 to drive the object 2 to move, so as to align a surface area to the detecting position 14. Here, the driving motor 112 drives the planar-carrying board to move a predetermined distance each time, and drives the planar-carrying board to move repeatedly to sequentially move each of the surface blocks 21A-21C to the detection position 14. Here, the predetermined distance is substantially equal to the width of each surface segment 21A-21C along the first direction D1.
In some embodiments, the drive motor 112 may be a stepper motor.
In one embodiment, as shown in FIGS. 1 and 16, light source module 12 may comprise a light-emitting element. In another embodiment, as shown in fig. 3 and 4, the light source assembly 12 may include two light emitting elements 121 and 122, the two light emitting elements 121 and 122 are symmetrically disposed on two opposite sides of the object 2 with respect to the normal line 14A, the two light emitting elements 121 and 122 respectively illuminate the detection position 14, the surface 21 is illuminated by the symmetrical detection light L1 to generate a symmetrical diffusion light, and the light sensing element 13 sequentially captures the detection images of the surface blocks 21A-21C located on the detection position 14 according to the symmetrical diffusion light, so as to improve the imaging quality of the detection images. In some embodiments, the light emitting elements 121, 122 may be implemented by one or more Light Emitting Diodes (LEDs); in some embodiments, each light emitting device 121, 122 can be implemented by a laser source.
In one embodiment, the object surface inspection system may have a single set of light source modules 12, as shown in fig. 1 and 16.
In another embodiment, referring to FIG. 17, the object surface inspection system may have multiple sets of light source assemblies 12a, 12b, 12c, 12 d. The light source assemblies 12a, 12b, 12c, 12d are respectively located at different orientations of the detecting position 14, i.e. at different orientations of the carrying elements 111 for carrying the object 2. For example, light source assembly 12a may be disposed on the front side of detection location 14 (or carrier element 111), light source assembly 12b may be disposed on the rear side of detection location 14 (or carrier element 111), light source assembly 12c may be disposed on the left side of detection location 14 (or carrier element 111), and light source assembly 12d may be disposed on the right side of detection location 14 (or carrier element 111).
Here, under the illumination of each light source assembly (12a, 12b, 12c, 12d), the object surface inspection system performs the image capturing procedure to obtain the detection images MB of all the surface blocks 21A-21C of the object 2 illuminated from that specific orientation. For example, the object surface inspection system first emits light L1 from the light source assembly 12a; under the light L1 emitted by the light source assembly 12a, the photosensitive elements 13 capture the detection images MB of all the surface blocks 21A-21C of the object 2. The object surface inspection system then switches to emit light L1 from the light source assembly 12b, and under this light the photosensitive elements 13 again capture the detection images MB of all the surface blocks 21A-21C. The same procedure is repeated with the light source assembly 12c and then with the light source assembly 12d, so that the photosensitive elements 13 capture the detection images MB of all the surface blocks 21A-21C under each of the four illumination orientations.
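The per-orientation acquisition described above is essentially a nested loop: for each light source assembly, switch it on, then capture the detection image of every surface block. A minimal sketch follows; set_active_light_source, move_block_to_detection_position, and capture_image are hypothetical hardware callables used only for illustration:

```python
# Sketch of the per-orientation image capturing procedure. All hardware
# calls are hypothetical placeholders.

def capture_all_orientations(light_sources, num_blocks,
                             set_active_light_source,
                             move_block_to_detection_position,
                             capture_image):
    """Return detection images indexed by (light source, surface block)."""
    detection_images = {}
    for source_id in light_sources:            # e.g. ["12a", "12b", "12c", "12d"]
        set_active_light_source(source_id)     # only this assembly emits light L1
        for block_index in range(num_blocks):
            move_block_to_detection_position(block_index)
            detection_images[(source_id, block_index)] = capture_image()
    return detection_images
```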
In one embodiment, referring to FIG. 16, the object surface inspection system may be provided with a single photosensitive element 13, and this photosensitive element 13 captures images of the surface blocks 21A-21C to obtain a plurality of detection images respectively corresponding to the surface blocks 21A-21C. In another embodiment, referring to FIGS. 1 and 17, the object surface inspection system may be provided with a plurality of photosensitive elements 13, which face the detection position 14 and are arranged along the long axis of the object 2. The photosensitive elements 13 respectively capture the detection images of the surface blocks of the object 2 located at the detection position 14.
In one example, assume that the object 2 is cylindrical and the object surface inspection system is provided with a single photosensitive element 13. The photosensitive element 13 can capture images of the plurality of surface blocks 21A-21C of the main body (i.e., the middle section) of the object 2 to obtain a plurality of detection images MB corresponding to the surface blocks 21A-21C, and the processor 15 stitches the detection images MB of the surface blocks 21A-21C into an object image IM, as shown in FIG. 9.
In another example, assume that the object 2 is cylindrical and the object surface inspection system is provided with a plurality of photosensitive elements 131-133, as shown in FIGS. 1 and 17. The photosensitive elements 131-133 respectively capture detection images MB1-MB3 of the surface of the object 2 at different segment positions of the detection position 14, and the processor 15 stitches all the detection images MB1-MB3 into an object image IM, as shown in FIG. 18. For example, assuming the number of photosensitive elements 131-133 is three, the processor 15 stitches the object image IM of the object 2 from the detection images MB1-MB3 captured by the three photosensitive elements 131-133, as shown in FIG. 18. The object image IM includes a sub-object image 22 (the upper segment of the object image IM in FIG. 18) stitched from the detection images MB1 of all the surface blocks 21A-21C captured by the first photosensitive element 131, a sub-object image 23 (the middle segment of the object image IM in FIG. 18) stitched from the detection images MB2 of all the surface blocks 21A-21C captured by the second photosensitive element 132, and a sub-object image 24 (the lower segment of the object image IM in FIG. 18) stitched from the detection images MB3 of all the surface blocks 21A-21C captured by the third photosensitive element 133.
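Conceptually, the stitching amounts to concatenating, for each photosensitive element, its per-block detection images along the scan direction and then stacking the resulting sub-object images. The NumPy sketch below illustrates this under the simplifying assumption that all detection images share the same pixel dimensions; a real system would additionally need overlap handling and registration:

```python
import numpy as np

def stitch_object_image(images_per_sensor):
    """Stitch detection images into one object image.

    images_per_sensor: list (one entry per photosensitive element, e.g. 131-133)
    of lists of 2-D arrays (one per surface block, in capture order).
    Assumes all detection images have identical height and width.
    """
    # Concatenate each sensor's block images side by side -> sub-object image
    sub_object_images = [np.concatenate(blocks, axis=1) for blocks in images_per_sensor]
    # Stack the sub-object images (upper / middle / lower segments) vertically
    return np.concatenate(sub_object_images, axis=0)

# Example with three sensors and nine 100x80 blocks each:
# stitch_object_image([[np.zeros((100, 80))] * 9 for _ in range(3)]).shape
# -> (300, 720)
```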
In some embodiments, the processor 15 may automatically determine, from the obtained object image, whether the surface 21 of the object 2 includes surface defects, whether the surface 21 has different textures, and whether the surface 21 has paint, oil stains, or the like; that is, the processor 15 may automatically determine different surface types of the object 2 according to the object image. In detail, the processor 15 includes an artificial neural network system, and the artificial neural network system has a learning phase and a prediction phase. In the learning phase, the object images input into the artificial neural network system are of known surface types (i.e., each object image is labeled with the target surface types present on it). After the object images of known surface types are input, the artificial neural network system performs deep learning according to the known surface types and their surface type categories (hereinafter referred to as preset surface type categories) to establish a prediction model. The prediction model is composed of a plurality of hidden layers connected in sequence, each hidden layer having one or more neurons, and each neuron performing a judgment item. In other words, in the learning phase, the object images of known surface types are used to generate the judgment item of each neuron and/or to adjust the weights of the connections between neurons, so that the prediction result (i.e., the output preset surface type category) of each object image conforms to the known, labeled surface type used for learning.
For example, the aforementioned surface types may be sand holes or air holes, and bumps or scratches. The image areas representing different surface types may be image areas showing sand holes of different depths, image areas showing bumps or scratches but no sand holes, image areas showing different surface roughness, image areas showing no surface defects, image areas in which the surface blocks 21A-21C are irradiated with detection light L1 of different wavelengths to generate different contrasts representing different depth ratios, or image areas showing attachments of different colors. In the learning phase, the artificial neural network system performs deep learning on object images of these various surface types to establish a prediction model for identifying them. In addition, the artificial neural network system can classify object images of different surface types in advance to generate different preset surface type categories. Then, in the prediction phase, after an obtained object image is input into the artificial neural network system, the artificial neural network system runs the prediction model on the input object image to identify the image areas in the object image that show the surface types of the object 2, and the prediction model classifies these image areas according to the plurality of preset surface type categories. In some embodiments, at its output, the prediction model may perform a percentage prediction on the object image according to the preset surface type categories, i.e., predict the percentage likelihood that the object image falls into each category.
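As a rough sketch of the kind of prediction model described here, the following PyTorch snippet defines a small convolutional classifier whose softmax output can be read as the per-class percentages mentioned above. The layer sizes, six-class output, and 128x128 grayscale input are illustrative assumptions, not taken from the disclosure:

```python
import torch
import torch.nn as nn

# Illustrative only: a small CNN classifier for surface-type categories.
# Architecture, class count, and input size are assumptions.

class SurfaceTypeCNN(nn.Module):
    def __init__(self, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x):                      # x: (batch, 1, 128, 128)
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)              # raw logits, one per category

model = SurfaceTypeCNN()
logits = model(torch.randn(1, 1, 128, 128))
percentages = torch.softmax(logits, dim=1)     # per-class "percentage" prediction
```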
For example, taking the surface blocks 21A-21C as an example, the artificial neural network system runs the above prediction model on the object image stitched from the surface blocks 21A-21C, and from this object image of the object 2 it can recognize that the surface block 21A includes sand holes and impact marks, the surface block 21B has no surface defects, the surface block 21C includes sand holes and paint, and the surface roughness of the surface block 21A is greater than that of the surface block 21C. Then, taking six preset surface type categories (sand hole or air hole, scratch mark or impact mark, high roughness, low roughness, having an attachment, and having no surface defect) as an example, the artificial neural network system can classify the detection image of the surface block 21A into the preset categories of sand hole or air hole and of scratch mark or impact mark, classify the detection image of the surface block 21B into the preset category of having no surface defect, classify the detection image of the surface block 21C into the preset categories of sand hole or air hole and of having an attachment, classify the detection image of the surface block 21A into the preset category of high roughness, and classify the detection images of the surface blocks 21B and 21C into the preset category of low roughness. Identifying different surface types through the artificial neural network system in this way greatly improves detection efficiency and reduces the probability of human misjudgment.
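Since a single surface block can belong to several preset categories at once (e.g., both sand hole or air hole and scratch mark or impact mark), one plausible way to turn per-category predictions into category labels is a simple threshold over per-category scores. The threshold value and this multi-label post-processing are assumptions for illustration only:

```python
# Illustrative post-processing: map per-category scores to preset categories.
# The 0.5 threshold and the category ordering are assumptions.

PRESET_CATEGORIES = [
    "sand hole or air hole", "scratch mark or impact mark",
    "high roughness", "low roughness", "has attachment", "no surface defect",
]

def assign_categories(scores, threshold=0.5):
    """scores: iterable of per-category probabilities in PRESET_CATEGORIES order."""
    return [name for name, score in zip(PRESET_CATEGORIES, scores) if score >= threshold]

# Example: assign_categories([0.9, 0.7, 0.8, 0.1, 0.05, 0.02])
# -> ['sand hole or air hole', 'scratch mark or impact mark', 'high roughness']
```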
In an embodiment, the deep learning performed by the artificial neural network system can be implemented by a Convolutional Neural Network (CNN) algorithm, but the disclosure is not limited thereto.
In summary, according to the embodiments of the object alignment method in the present disclosure, whether the object is aligned is determined by analyzing the presentation type and the presentation position of a specific structure of the object in the test image, so that the detection image of each surface block is captured at the same position on the aligned object. Therefore, the artificial neural network system can establish a more accurate prediction model from detection images taken at the same position, further reducing the probability of misjudgment.
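To make the alignment flow concrete, the sketch below mirrors the test-image loop summarized above: capture a test image, locate the image block of the second alignment structure, and either capture the detection image (if the block is centered) or displace the object along the first direction and retry. The centering tolerance, the fine-tuning step, and all hardware and vision callables are hypothetical assumptions, not specifics of the disclosure:

```python
# Sketch of the alignment and image capturing procedure. capture_test_image,
# locate_alignment_block, rotate_object, and capture_detection_image are
# hypothetical placeholders; the 5% centering tolerance is an assumption.

def capture_aligned_detection_image(capture_test_image, locate_alignment_block,
                                    rotate_object, capture_detection_image,
                                    image_width, fine_step_deg=1.0, tolerance=0.05):
    """Loop until the alignment block is centered, then capture the detection image."""
    while True:
        test_image = capture_test_image()
        block_center_x = locate_alignment_block(test_image)  # x-coordinate of the block
        offset = block_center_x - image_width / 2.0
        if abs(offset) <= tolerance * image_width:
            return capture_detection_image()   # block centered: capture and return
        rotate_object(fine_step_deg)           # displace along the first direction, retry
```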
Although the present disclosure has been described with reference to particular embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure, and the scope of the present disclosure should be limited only by the terms of the appended claims.

Claims (14)

1. An object alignment method for aligning an object, comprising:
detecting a plurality of first alignment structures of the object under the rotation of the object, wherein a plurality of second alignment structures of the object sequentially face a photosensitive element in the rotation process of the object;
when the first alignment structures reach a predetermined configuration, stopping the rotation of the object and performing an image capturing procedure of the object, wherein the image capturing procedure of the object comprises the following steps:
capturing a test image of the object by using the photosensitive element, wherein the test image comprises an image block which presents the second alignment structure facing the photosensitive element;
detecting the presenting position of the image block in the test image;
when the presenting position indicates that the image block is located in the middle of the test image, capturing a detection image of the object by the photosensitive element;
when the presenting position indicates that the image block is not located in the middle of the test image, displacing the object in a first direction, and returning to the step of capturing the test image of the object by the photosensitive element.
2. The method according to claim 1, wherein the second alignment structures are spaced apart from each other along the first direction, and a spacing distance between any two adjacent second alignment structures is greater than or equal to a field of view of the photosensitive element.
3. The method according to claim 1, wherein the object is cylindrical and the first direction is clockwise.
4. The method according to claim 1, wherein the object is cylindrical and the first direction is counterclockwise.
5. The method of claim 1, wherein the object is planar.
6. The method of claim 1, wherein the step of performing the image capturing procedure further comprises:
after the step of capturing the detection image of the object by the photosensitive element, displacing the object so that a next one of the second alignment structures faces the photosensitive element, and returning to the step of capturing the test image of the object by the photosensitive element, until the detection images respectively corresponding to the second alignment structures have been captured.
7. The method of claim 6, wherein the step of performing the image capturing procedure further comprises:
after the corresponding detection images are captured, splicing, by a processor, the detection images into an object image.
8. The method of claim 6, wherein the step of performing the image capturing procedure further comprises:
after each detection image is captured, extracting a middle section area of each detection image based on the short edge of the detection image, and splicing the middle section areas to form the object image.
9. An object alignment method for aligning an object, comprising:
sequentially displacing a plurality of surface blocks of the object to a detection position, wherein the object is provided with a plurality of alignment structures;
capturing a detection image of each surface block sequentially located at the detection position by using a photosensitive element, wherein the photosensitive element faces the detection position, and the plurality of alignment structures are located within a field of view of the photosensitive element;
splicing the detection images corresponding to the surface blocks into an object image;
comparing the object image with a preset pattern;
and when the object image does not conform to the preset pattern, adjusting the splicing sequence of the plurality of detection images.
10. The method as claimed in claim 9, wherein the alignment structures are located at one end of a body of the object.
11. The method according to claim 9, wherein the body is cylindrical, and a surface of the body is divided into the plurality of surface blocks in a clockwise direction.
12. The method of claim 9, wherein the body has a flat surface.
13. The method according to claim 9, wherein the step of sequentially displacing the plurality of surface blocks of the object to the detection position comprises:
carrying the object at the detection position with a carrying element, and rotating the carrying element to drive the object to rotate.
14. The method according to claim 9, wherein the step of sequentially displacing the plurality of surface blocks of the object to the detection position comprises:
carrying the object at the detection position with a carrying element, and horizontally moving the carrying element to drive the object to move.
CN201910987140.5A 2019-10-17 2019-10-17 Object alignment method Active CN112683786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910987140.5A CN112683786B (en) 2019-10-17 2019-10-17 Object alignment method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910987140.5A CN112683786B (en) 2019-10-17 2019-10-17 Object alignment method

Publications (2)

Publication Number Publication Date
CN112683786A true CN112683786A (en) 2021-04-20
CN112683786B CN112683786B (en) 2024-06-14

Family

ID=75444665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910987140.5A Active CN112683786B (en) 2019-10-17 2019-10-17 Object alignment method

Country Status (1)

Country Link
CN (1) CN112683786B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200608013A (en) * 2004-08-16 2006-03-01 Oncoprobe Biotech Inc Automatic detection method for organism disk
CN201057529Y (en) * 2007-05-16 2008-05-07 贝达科技有限公司 Object detecting machine
JP2008241716A (en) * 2008-04-03 2008-10-09 Shibaura Mechatronics Corp Device and method for surface inspection
JP2012235362A (en) * 2011-05-02 2012-11-29 Shanghai Microtek Technology Co Ltd Image scanning device capable of automatic scanning
TWM493674U (en) * 2014-08-19 2015-01-11 Microtek Int Inc Scanning optical detecting device
TWI490463B (en) * 2014-04-11 2015-07-01 Pegatron Corp Detecting method and detecting system for distinguishing the difference of two workpieces
US20150212008A1 (en) * 2012-08-07 2015-07-30 Toray Engineering Co., Ltd. Device for testing application state of fiber reinforced plastic tape
TWM533209U (en) * 2016-08-05 2016-12-01 Min Aik Technology Co Ltd An optical detection system
JP2019002765A (en) * 2017-06-14 2019-01-10 株式会社Screenホールディングス Positioning method, positioning device, and inspection apparatus

Also Published As

Publication number Publication date
CN112683786B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
US11195045B2 (en) Method for regulating position of object
JP4719284B2 (en) Surface inspection device
US8050486B2 (en) System and method for identifying a feature of a workpiece
JP6394514B2 (en) Surface defect detection method, surface defect detection apparatus, and steel material manufacturing method
US8285025B2 (en) Method and apparatus for detecting defects using structured light
US6191850B1 (en) System and method for inspecting an object using structured illumination
TWI428584B (en) Inspection system and method operative to inspect patterned devices having microscopic conductors
TWI617801B (en) Wafer inspection method and wafer inspection device
US20210341353A1 (en) System and method for inspecting optical power and thickness of ophthalmic lenses immersed in a solution
KR20160090359A (en) Surface defect detection method and surface defect detection device
JP2014163694A (en) Defect inspection device, and defect inspection method
US20180232876A1 (en) Contact lens inspection in a plastic shell
JP5837283B2 (en) Tire appearance inspection method and appearance inspection apparatus
JP2000018932A (en) Method and device for inspecting defects of specimen
JP4151306B2 (en) Inspection method of inspection object
CN112683923A (en) Method for screening surface form of object based on artificial neural network
CN112683786B (en) Object alignment method
CN112683924A (en) Method for screening surface form of object based on artificial neural network
JP5868203B2 (en) Inspection device
CN112683921A (en) Image scanning method and image scanning system for metal surface
CN112683790A (en) Image detection scanning method and system for possible defects on surface of object
CN112683788A (en) Image detection scanning method and system for possible defects on surface of object
CN112683925A (en) Image detection scanning method and system for possible defects on surface of object
CN112686831B (en) Method for detecting object surface morphology based on artificial neural network
JP2004198403A (en) Inspection device for painting face

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant