CN107209128A - Element processor and visible detection method - Google Patents
Element processor and visible detection method
- Publication number: CN107209128A
- Application number: CN201680009805.XA
- Authority
- CN
- China
- Prior art keywords
- image
- vision-based detection
- protuberance
- pallet
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/89—Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles
- G01N21/892—Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles characterised by the flaw, defect or object feature examined
Abstract
The present invention relates to an element processor, and more particularly to an element processor and a visible detection method that perform vision-based inspection of elements. Disclosed is a visible detection method that performs vision-based inspection of a plurality of spherical protuberances (1a) formed on the surface of an element, characterised by including: an image acquisition step of obtaining a first image, a second image and a three-dimensional third image, the first image being produced by first incident light entering at a first incidence angle of 0 to 45 degrees to the surface of the element (1), the second image being produced by second incident light entering at a second incidence angle of 45 to 90 degrees to the surface of the element (1), and the third image being a three-dimensional image of the protuberances (1a) formed on the surface of the element (1); a three-dimensional shape characteristic identification step of identifying, on the basis of the first image and the second image, the positions and three-dimensional shape characteristics of the protuberances (1a) and storing them as three-dimensional shape characteristic information; and a contour interpolation step of interpolating, on the basis of the stored three-dimensional shape characteristic information, the three-dimensional contour of the third image obtained by a three-dimensional vision detection unit (720).
Description
Technical field
The present invention relates to an element processor, and more particularly to an element processor and a visible detection method that perform vision-based inspection of elements.
Background technology
Recently, semiconductor elements are loaded onto trays or the like after semiconductor processing, dicing and so on, and are then shipped out.
Here, in order to improve yield and the reliability of shipped products, vision-based inspection and the like may be performed.
Vision-based inspection of a semiconductor element checks, for example, whether a lead or a ball grid is damaged, and whether the appearance and surface state of the element are acceptable, such as whether there are cracks or scratches.
As inspection of the appearance and surface state of semiconductor elements increases in this way, the inspection time and the arrangement of the inspection modules affect the overall process time and the size of the apparatus.
In particular, for a wafer or tray loaded with a plurality of elements, the size of the apparatus varies according to the arrangement of the one or more modules used for vision-based inspection of each element, and according to the structure and arrangement of the unloading module that sorts elements by inspection result after inspection.
Moreover, the size of the apparatus limits the number of element processors that can be installed in an element inspection line, and thus has a considerable influence on the installation cost of element production for a given number of element processors.
Summary of the invention
(Problems to be solved)
An object of the present invention is to provide an element processor and a visible detection method that address the problems described above and improve the reliability of vision-based inspection.
Another object of the present invention is to provide an element processor and a visible detection method in which the modules used for vision-based inspection are arranged efficiently, improving element inspection speed and reducing the size of the apparatus, thereby ultimately reducing element production cost.
A further object of the present invention is to provide an element processor and a visible detection method that improve the reliability of vision-based inspection of protuberances, such as spherical terminals, formed on the surface of an element.
(Means for solving the problems)
The present invention has been devised to achieve the above objects, and discloses an element processor characterised by including: a loading unit 100 that loads a tray 2 carrying a plurality of elements 1 and moves the tray 2 linearly; a first bottom-surface vision detection unit 410 arranged perpendicular to the transfer direction of the tray 2 in the loading unit 100 and installed at the side of the loading unit 100 to perform visual inspection of the elements 1; a first guide rail 680 perpendicular to the moving direction of the tray 2 of the loading unit 100; a first transfer tool 610 coupled to the first guide rail 680 so as to move along the first guide rail 680, which picks up elements from the loading unit 100 and transfers them to the first bottom-surface vision detection unit 410 in order to perform visual inspection; a first top-surface vision detection unit 420 coupled to the first guide rail 680 and moving in linkage with the movement of the first transfer tool 610, which inspects the top surfaces of the elements on the tray 2 of the loading unit 100 while the first transfer tool 610 moves to the first bottom-surface vision detection unit 410; and unloading units 310, 320, 330 that receive from the loading unit 100 the tray 2 carrying the elements 1 for which visual inspection is complete, and sort the elements 1 of the tray 2 according to the visual inspection results.
The present invention may additionally include: a second guide rail 690 arranged parallel to the first guide rail 680; a second bottom-surface vision detection unit 430 installed at the side of the loading unit 100, perpendicular to the transfer direction of the tray 2 in the loading unit 100, to perform vision-based inspection of the elements 1; and a second transfer tool 630 coupled to the second guide rail 690 and moving along the second guide rail 690, which picks up elements from the loading unit 100 and transfers them to the second bottom-surface vision detection unit 430 in order to perform vision-based inspection.
The present invention may additionally include a second top-surface vision detection unit 440 coupled to the second guide rail 690 and moving in linkage with the movement of the second transfer tool 630, the second top-surface vision detection unit 440 inspecting the top surfaces of the elements 1 on the tray 2 of the loading unit 100 while the first transfer tool 610 moves to the first bottom-surface vision detection unit 410.
The present invention may additionally include a third top-surface vision detection unit 450 installed at the side of the loading unit 100, on and perpendicular to the transfer path of the tray 2 in the loading unit 100, to perform vision-based inspection of the elements 1.
The third top-surface vision detection unit 450 is perpendicular to the transfer path of the tray 2 of the loading unit 100 and moves linearly in the horizontal direction.
The first bottom-surface vision detection unit 410 may include: a two-dimensional vision detection unit 710, which includes a first image acquisition unit 712 that acquires images of the bottom surfaces of the elements 1 picked up by the first transfer tool 610 in order to perform 2D vision inspection, and a first light source unit 711 that irradiates light onto the bottom surfaces of the elements 1 picked up by the first transfer tool 610 for the image acquisition of the first image acquisition unit; and a three-dimensional vision detection unit 720, which in turn includes a second image acquisition unit 722 that acquires images of the bottom surfaces of the elements 1 picked up and transferred by the first transfer tool 610 in order to perform 3D vision inspection, and a second light source unit 721 that irradiates light onto the bottom surfaces of the elements 1 picked up and transferred by the first transfer tool 610 for the image acquisition of the second image acquisition unit 722.
The present invention also discloses a visible detection method that performs vision-based inspection of a plurality of spherical protuberances 1a formed on the surface of an element, characterised by including: an image acquisition step of obtaining a first image, a second image and a three-dimensional third image, the first image being produced by first incident light entering at a first incidence angle of 0 to 45 degrees to the surface of the element 1, the second image being produced by second incident light entering at a second incidence angle of 45 to 90 degrees to the surface of the element 1, and the third image being a three-dimensional image of the protuberances 1a formed on the surface of the element 1; a three-dimensional shape characteristic identification step of identifying, on the basis of the first image and the second image, the positions and three-dimensional shape characteristics of the protuberances 1a and storing them as three-dimensional shape characteristic information; and a contour interpolation step of interpolating the three-dimensional contour of the third image on the basis of the three-dimensional shape characteristic information stored in the three-dimensional shape characteristic identification step.
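The division of labour between the two incidence-angle ranges can be illustrated with a small Lambertian shading sketch (a hypothetical model for illustration only; the angles and the shading law are assumptions, not part of the disclosure): low-angle light mainly brightens the sides of a spherical protuberance, producing its ring and shadow regions, while high-angle light brightens the top.

```python
import math

def illumination(normal, elevation_deg):
    """Lambertian intensity on a surface patch lit from azimuth 0
    at the given elevation angle above the element surface."""
    t = math.radians(elevation_deg)
    to_light = (math.cos(t), 0.0, math.sin(t))  # unit vector toward the light
    dot = sum(n * c for n, c in zip(normal, to_light))
    return max(0.0, dot)  # patches facing away from the light stay dark

apex = (0.0, 0.0, 1.0)   # surface normal at the top of a spherical bump
side = (1.0, 0.0, 0.0)   # normal on the side of the bump facing the light

# First image (low angle, e.g. 20 degrees): the side is bright, the top dim.
# Second image (high angle, e.g. 80 degrees): the top is bright.
print(illumination(side, 20.0), illumination(apex, 20.0))
print(illumination(apex, 80.0))
```

Under this model the apex intensity equals the sine of the elevation angle, which is why the top of the ball only lights up in the high-angle second image.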
The three-dimensional shape characteristic identification step may include: a shadow region analysis step of identifying the positions of the protuberances 1a from the plurality of shadow regions formed by the protuberances 1a in the first image, and of determining whether, in the second image, a brighter region exists within each of the corresponding shadow regions formed by the protuberances 1a; and a characteristic storing step of storing, in the three-dimensional shape characteristic information, when a brighter region is found within a corresponding shadow region in the shadow region analysis step, information indicating that the protuberance 1a has, at its upper part, a flat part whose size corresponds to that of the bright region within the shadow region. In the contour interpolation step, the three-dimensional contour of the third image is interpolated on the basis of the flat-part information on the bright region within the shadow region corresponding to the upper part of the protuberance (1a), stored in the three-dimensional shape characteristic information.
The three-dimensional shape characteristic identification step may include a characteristic storing step of storing, in the three-dimensional shape characteristic information, centre position information of the protuberances 1a on the surface of the element 1 determined on the basis of the size of the ring shapes in the first image. In the contour interpolation step, the three-dimensional contour of the third image may be interpolated on the basis of the centre position information of the protuberances 1a on the surface of the element 1 stored in the three-dimensional shape characteristic information.
The present invention also discloses a visible detection method that performs vision-based inspection of a plurality of spherical protuberances 1a formed on the surface of an element 1, characterised by including: an image acquisition step of moving slit light relatively with respect to the surface of the element 1, irradiating the slit light onto the surface of the element 1 at a first incidence angle of 0 to 90 degrees to the surface of the element, measuring the height of the surface of the element 1 by light triangulation, and at the same time obtaining a first image of the surface of the element 1 onto which the slit light is irradiated; and a slit light analysis step of designating, within the region of the first image obtained in the image acquisition step whose pixel values exceed a preset pixel value, the position of the maximum height measured in the image acquisition step as the vertex position of the protuberance 1a.
The protuberances 1a may be spherical terminals.
The slit light may be monochromatic light.
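The light-section (light triangulation) principle in the steps above can be sketched as follows; the geometry (overhead camera, slit line displaced laterally by a raised surface) and all numbers are illustrative assumptions, not values from the disclosure:

```python
import math

def height_from_shift(shift, incidence_deg):
    """With slit light at angle theta to the surface and an overhead
    camera, a surface raised by h displaces the slit line laterally by
    shift = h / tan(theta), so h = shift * tan(theta)."""
    return shift * math.tan(math.radians(incidence_deg))

def vertex_position(pixel_values, shifts, incidence_deg, min_value=128):
    """Among pixels brighter than a preset value, return the index and
    height of the maximum measured height (the protuberance vertex)."""
    best = None
    for i, (v, s) in enumerate(zip(pixel_values, shifts)):
        if v < min_value:
            continue  # dim pixels give unreliable height readings
        h = height_from_shift(s, incidence_deg)
        if best is None or h > best[1]:
            best = (i, h)
    return best

# Synthetic scan across one spherical terminal with 45-degree slit light.
values = [40, 200, 230, 250, 230, 200, 40]         # brightness profile
shifts = [0.0, 0.10, 0.18, 0.22, 0.18, 0.10, 0.0]  # line shift in mm
print(vertex_position(values, shifts, 45.0))
```

Restricting the search to pixels above the preset brightness, as in the slit light analysis step, prevents noise outside the illuminated line from being mistaken for the vertex.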
(Effects of the invention)
The element processor and visible detection method of the present invention perform vision-based inspection of the plurality of protuberances formed on an element and, on the basis of two-dimensional images of the protuberances, can flexibly interpolate the shapes of the protuberances. This has the advantage of improving the reliability of 3D vision inspection while significantly increasing inspection speed.
In particular, when a bright region forms at the central part of a protuberance in the second image, i.e. the image produced by the second incident light at the high elevation angle, the top of the protuberance has a flat part. Flexibly using this fact in the image analysis of 3D vision inspection improves the reliability of 3D vision inspection and significantly increases inspection speed.
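As a rough illustration of how such a flat part could be quantified, the sketch below measures the width of the bright spot inside the dark ball area of a synthetic high-angle image patch (the patch values and thresholds are invented for illustration and are not from the disclosure):

```python
# Synthetic 8x8 grayscale patch of one protuberance under high-angle
# light: mid-grey background (120), dark shadowed ball area (30),
# and a bright spot (220) at the centre indicating a flat top.
patch = [[120] * 8 for _ in range(8)]
for r in range(2, 6):
    for c in range(2, 6):
        patch[r][c] = 30
for r in (3, 4):
    for c in (3, 4):
        patch[r][c] = 220

def flat_top_width(patch, bright_thresh=200):
    """Width (in pixels) of the bright region, taken as the size of
    the flat part at the top of the protuberance; 0 means no flat part."""
    cols = [c for row in patch for c, v in enumerate(row) if v >= bright_thresh]
    return max(cols) - min(cols) + 1 if cols else 0

print(flat_top_width(patch))  # prints 2
```

The measured width is the kind of flat-part size information that the characteristic storing step would record for use in contour interpolation.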
Likewise, on the basis of the ring shape of the first image, i.e. the image produced by the first incident light at the low elevation angle, the centre position of a spherical protuberance relative to the surface of the element can be estimated, which improves the reliability of 3D vision inspection and significantly increases inspection speed.
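A minimal sketch of centre estimation from the ring (synthetic pixel coordinates; the centroid rule is an assumed, simple estimator, not the patented procedure):

```python
import math

# Synthetic coordinates of the bright ring pixels seen in the first
# (low-angle) image: 16 points on a circle of radius 3 around (5, 6).
ring = [(5 + round(3 * math.cos(a * math.pi / 8)),
         6 + round(3 * math.sin(a * math.pi / 8))) for a in range(16)]

def ring_center(pixels):
    """Estimate the protuberance centre as the centroid of ring pixels."""
    n = len(pixels)
    return (sum(r for r, _ in pixels) / n, sum(c for _, c in pixels) / n)

print(ring_center(ring))  # prints (5.0, 6.0)
```

Because the ring is symmetric about the ball centre, the centroid of its pixels recovers the centre even though no pixel lies at the centre itself.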
Furthermore, in the element processor of the present invention, the vision detection module for vision-based inspection is arranged at the side of the loading unit on which the tray carrying a plurality of elements is loaded, and while the transfer tool that picks up elements from the tray moves to the vision detection module for inspection, the top-surface detection module, moving in linkage with the transfer tool, inspects the top surfaces of the elements loaded on the tray. This efficient arrangement of the modules has the advantage of reducing the size of the element processor.
Furthermore, because the element processor of the present invention is arranged efficiently within the same footprint, a certain amount of spare room is secured, in which a vision detection module for additional vision inspection can be installed, or a structure performing vision inspection of different resolution or inspection content can be added, with the advantage of extending the functions of the element processor.
Furthermore, since bottom-surface vision inspection and top-surface inspection are performed in sequence, the inspection efficiency for the elements can be improved and the inspection speed of the element processor increased, ultimately improving the performance of the element processor.
Furthermore, the element processor of the present invention can reduce the size of the apparatus and, with the improvement in performance, can ultimately reduce element manufacturing cost significantly.
Furthermore, in the element processor and visible detection method of the present invention, when detecting the vertex position of a protuberance on the element surface, in particular a spherical terminal, slit light is irradiated onto the element surface and, within the region of the irradiated image whose pixel values exceed a preset value, the maximum of the heights measured by the slit light irradiation is designated as the vertex position of the protuberance. The repeatability of vision inspection and the speed of vision inspection can therefore be significantly increased.
Brief description of the drawings
Fig. 1 is a plan view illustrating an element processor according to a first embodiment of the present invention.
Fig. 2a is a conceptual diagram illustrating a vision detection module according to the first embodiment of the present invention.
Fig. 2b is a plan view illustrating the arrangement of the vision detection module of Fig. 2a.
Fig. 3a is a conceptual diagram illustrating a variation of the vision detection module of Fig. 2a.
Fig. 3b is a plan view illustrating the arrangement of the vision detection module of Fig. 3a.
Fig. 3c is a conceptual diagram illustrating a variation of the vision detection module of Fig. 3a.
Fig. 4 is a conceptual diagram illustrating types of three-dimensional shape characteristic information obtained by the visible detection method of the present invention.
Fig. 5 is a plan view illustrating an element processor according to a second embodiment of the present invention.
Figs. 6a to 8b are conceptual diagrams illustrating the change of the slit light according to the position of the protuberance during the visible detection method of the present invention: Figs. 6a and 6b show the slit light irradiation pattern before the protuberance vertex, Figs. 7a and 7b at the protuberance vertex, and Figs. 8a and 8b after the protuberance vertex.
Fig. 9 is a graph illustrating the relation between the measured height of a protuberance and the actual height of the protuberance during execution of the visible detection method of the present invention.
Embodiment
Hereinafter, an element processor and a visible detection method of the present invention will be described with reference to the accompanying drawings.
As shown in Fig. 1, the element processor of the present invention includes: a loading unit 100 that loads a tray 2 carrying a plurality of elements 1 and moves the tray 2 linearly; a first bottom-surface vision detection unit 410 perpendicular to the transfer direction of the tray 2 in the loading unit 100 and installed at the side of the loading unit 100 to perform visual inspection of the elements 1; a first guide rail 680 perpendicular to the moving direction of the tray 2 of the loading unit 100; a first transfer tool 610 coupled to the first guide rail 680 so as to move along the first guide rail 680, which picks up elements from the loading unit 100 and transfers them to the first bottom-surface vision detection unit 410 in order to perform visual inspection; a first top-surface vision detection unit 420 coupled to the first guide rail 680 and moving in linkage with the movement of the first transfer tool 610, which inspects the top surfaces of the elements 1 on the tray 2 of the loading unit 100 while the first transfer tool 610 moves to the first bottom-surface vision detection unit 410; and unloading units 310, 320, 330 that receive from the loading unit 100 the tray 2 carrying the elements 1 for which visual inspection is complete, and sort the elements 1 of the tray 2 according to the visual inspection results.
Here the element 1 is an element for which semiconductor processing is complete, including a wafer-level chip-scale package WL-CSP (wafer level chip scale package), SD memory, flash memory, CPU and the like; any element on whose surface protuberances 1a such as a ball grid are formed can be such an object.
The tray 2 is a structure on which a plurality of elements 1 are loaded and transferred in rows and columns, for example 8x10, and is generally standardized for memory elements and the like.
The loading unit 100, as a structure that is loaded with the elements 1 to be inspected so that vision-based inspection can be carried out, can have various structures.
As an example, the loading unit 100 transfers the tray 2 with a plurality of elements 1 seated in the receiving grooves formed in the tray 2.
The loading unit 100 can have various structures and, as shown in Fig. 1 and Korean Laid-open Patent Publication No. 10-2008-0092671, may include a guide part (not shown) that guides the movement of the tray 2 loaded with a plurality of elements 1, and a drive part (not shown) for moving the tray 2 along the guide part.
The first bottom-surface vision detection unit 410, as a structure that is perpendicular to the transfer direction of the tray 2 in the loading unit 100, is installed at the side of the loading unit 100 and performs at least one of 2D vision inspection and 3D vision inspection of the elements 1, can have various structures.
In particular, the first bottom-surface vision detection unit 410, as a structure that acquires images of the appearance of the bottom surfaces of the elements 1 using equipment such as cameras and scanners, can have various structures.
Here, the images acquired by the first bottom-surface vision detection unit 410 are flexibly used, by software or the like, for vision-based inspection such as judging from image analysis whether an element is defective.
In addition, the first bottom-surface vision detection unit 410 can have various structures, and depending on the kind of vision inspection its structure preferably performs both 2D vision inspection and 3D vision inspection.
As an example, the first bottom-surface vision detection unit 410 may include: a two-dimensional vision detection unit 710, including a first image acquisition unit 712 that acquires images of the bottom surfaces of the elements 1 picked up by the first transfer tool 610 for 2D vision inspection, and a first light source unit 711 that irradiates light onto the bottom surfaces of the elements 1 picked up by the first transfer tool 610 for the image acquisition of the first image acquisition unit 712; and a three-dimensional vision detection unit 720, including a second image acquisition unit 722 that acquires images of the bottom surfaces of the elements 1 picked up and transferred by the first transfer tool 610 for 3D vision inspection, and a second light source unit 721 that irradiates light onto the bottom surfaces of the elements 1 picked up and transferred by the first transfer tool 610 for the image acquisition of the second image acquisition unit 722.
In particular, the first bottom-surface vision detection unit 410 can have various structures according to the structure and arrangement of the two-dimensional vision detection unit 710 and the three-dimensional vision detection unit 720.
First, the first bottom-surface vision detection unit 410 can have the structure shown in the embodiment of Korean Laid-open Patent Publication No. 10-2010-0122140 and in Figs. 2a and 2b.
The second light source unit 721 of the three-dimensional vision detection unit 720 described here can have various structures, and monochromatic light such as laser light, white light or the like can be used.
In particular, when the three-dimensional shape to be measured is very small, so that irregular reflection of laser light is severe and measurement is difficult, it is preferable to use monochromatic light or white light.
Also, the structure of the second light source unit 721 of the three-dimensional vision detection unit 720 may include: an optical fibre that transmits light from a light source, preferably so that the slit light is irradiated onto the surface of the element 1 in slit form; and a slit part connected to the optical fibre, which irradiates the light in slit form onto the surface of the element 1.
In addition, when the element 1 to be measured is excessively large, there is a certain difficulty in three-dimensionally measuring, with a single camera (scanner), the heights of protruding parts such as spherical terminals and bulges on the surface of the element 1.
In this regard, the three-dimensional vision detection unit 720 may include two or more second image acquisition units 722.
In this case, the three-dimensional vision detection unit 720 may include light source units 721 respectively corresponding to the second image acquisition units 722, or, as shown in Figs. 3a and 3b, one light source unit 721 and a pair of second image acquisition units 722 arranged point-symmetrically about the centre of the light source unit.
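For such a point-symmetric pair, each camera is occluded on the opposite flank of a tall protuberance; one simple way to combine their measurements (a hypothetical merge rule for illustration, not one specified in the disclosure) is to take whichever camera sees a point and average where both do:

```python
def merge_height_maps(map_a, map_b):
    """Combine two height profiles (None = occluded for that camera):
    use whichever camera sees the point, average where both do."""
    merged = []
    for a, b in zip(map_a, map_b):
        if a is None and b is None:
            merged.append(None)       # invisible to both cameras
        elif a is None:
            merged.append(b)
        elif b is None:
            merged.append(a)
        else:
            merged.append((a + b) / 2)
    return merged

# Camera A misses the far flank of a ball, camera B the near flank.
cam_a = [0.0, 0.10, 0.22, None, None]
cam_b = [None, None, 0.22, 0.12, 0.0]
print(merge_height_maps(cam_a, cam_b))  # prints [0.0, 0.1, 0.22, 0.12, 0.0]
```

Opposing viewpoints thus cover each other's occlusions, which is the motivation for measuring a large element with more than one second image acquisition unit 722.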
Also, when arranging the three-dimensional vision detection unit 720 and the two-dimensional vision detection unit 710, the first bottom-surface vision detection unit 410 can be constituted, as shown in Figs. 2a and 2b, so that they overlap with respect to the moving direction of the element 1, or, as shown in Figs. 3a to 3c, the two-dimensional vision detection unit 710 and the three-dimensional vision detection unit 720 can be arranged in sequence.
In particular, as shown in Fig. 3b, when the two-dimensional vision detection unit 710 and the three-dimensional vision detection unit 720 are arranged in sequence, the first bottom-surface vision detection unit 410 can arrange a pair of second image acquisition units 722 of the three-dimensional vision detection unit 720 along the moving direction of the element 1, with the light source unit 721 arranged between the pair of second image acquisition units 722.
And as shown in Fig. 3c, when the two-dimensional vision detection unit 710 and the three-dimensional vision detection unit 720 are arranged in sequence, the first bottom-surface vision detection unit 410 can arrange the second image acquisition unit 722, the light source unit 721 and so on of the three-dimensional vision detection unit 720 in sequence along the moving direction of the element 1.
The first guide rail 680, as a structure that is arranged perpendicular to the moving direction of the tray 2 in the loading unit 100 and guides the movement of the first transfer tool 610 and the first top-surface vision detection unit 420, described later, while supporting them, can have various structures.
In particular, the first guide rail 680 is provided with a linear drive module for driving the linear movement of the first transfer tool 610 and the first top-surface vision detection unit 420, and is coupled with a movable support member 681 to which the first transfer tool 610 and the first top-surface vision detection unit 420 are coupled and by which they are supported, so that the first transfer tool 610 and the first top-surface vision detection unit 420 move in linkage with each other.
The support member 681, as a structure that is moved linearly along the first guide rail 680 by a conventional drive module, can have a structure with a rotary motor, belt and pulley, a rotary screw-jack structure, or the like.
The support member 681, as a structure to which the first transfer tool 610 and the first top-surface vision detection unit 420 are coupled so that they move in linkage with each other, can be any moving structure that enables the two devices to move linearly along the first guide rail 680.
The first transfer tool 610, as a structure that is coupled to the first guide rail 680, moves along the first guide rail 680, and picks up elements from the loading unit 100 and transfers them to the first bottom-surface vision detection unit 410 in order to perform vision-based inspection, can have various structures.
As an example, the first transfer tool 610 preferably includes one or more pick tools (not shown) for picking up the elements 1, and in order to improve inspection speed and so on, a plurality of pick tools can be provided in one or more rows.
The pick tools, as structures that pick up the elements 1 by vacuum pressure, can have various other structures.
The first top-surface vision detection unit 420, as a structure that is coupled to the first guide rail 680, moves in linkage with the first transfer tool 610, and inspects the top surfaces of the elements 1 on the tray 2 loaded in the loading unit 100 while the first transfer tool 610 moves to the first bottom-surface vision detection unit 410, can have various other structures with such a function.
The first top-surface vision detection unit 420 acquires images of the top surfaces of the elements loaded on the tray 2 and analyses the acquired top-surface images of the elements 1, in particular in correspondence with the bottom-surface images acquired by the first bottom-surface vision detection unit 410, to detect their state.
In particular, the first top-surface vision detection unit 420 can flexibly be used to inspect markings, such as letters and marks, printed on the surfaces of the elements 1.
In addition, it is most efficient for the first top-surface vision detection unit 420 to perform vision-based inspection of the elements 1 loaded on the tray 2 after inspection by pickup with the first transfer tool 610 and by the first bottom-surface vision detection unit 410 is complete.
And the first top-surface vision detection unit 420 can acquire images of one element 1 or of two or more elements 1 according to the inspection conditions.
The unloading units 310, 320, 330, as structures that receive from the loading unit 100 the tray 2 carrying the elements 1 for which vision-based inspection is complete and sort the elements 1 on the tray 2 according to the vision inspection results, can have various other structures.
The unloading units 310, 320, 330 preferably have a structure similar to that of the loading unit 100, and assign sorting grades such as good (G), defect 1 or abnormal 1 (R1), defect 2 or abnormal 2 (R2), and so on, according to the inspection results of the elements 1.
Also, the unloading units 310, 320, 330 can be provided with a guide part (not shown) installed in parallel at the side of the loading unit 100, and an unloading tray part including a drive part (not shown) for moving the tray 2 along the guide part.
In addition, an empty-pallet unit 200 may be provided, which can be transferred by a pallet transfer device (not shown) between the loading unit 100 and the unloading units 310, 320, 330, and which supplies empty pallets 2 to the unloading units 310, 320, 330 where the components 1 are unloaded.
The empty-pallet unit 200 may likewise include a guide portion (not shown) arranged parallel to one side of the loading unit 100, and a drive portion (not shown) for moving the pallet 2 along the guide portion.
A sorting tool 620 may additionally be provided for the unloading units 310, 320, 330, to transfer components 1 between the unloading pallet portions according to the sorting grade assigned to each unloading pallet portion.
The sorting tool 620 may be identical to the first transfer tool 610 described above, or may have a similar configuration, for example a multi-array structure or a single-array structure.
Although the embodiment described above has been explained for the case in which the components are unloaded in the state of being reloaded onto pallets 2 like those loaded at the loading unit 100, the unloading units 310, 320, 330 may have various other structures as long as they load and unload components, for example a structure that loads the components 1 into the pockets formed in a carrier tape and unloads them, that is, a structure that packages the components 1 by winding the tape onto a reel.
In the component handler having the structure described above, the vision inspection module for vision inspection (the first bottom-surface vision inspection module 410) is arranged at one side of the loading unit 100 on which the pallets 2 carrying a plurality of components 1 are loaded. For the vision inspection, while the first transfer tool 610 that picks components 1 up from the pallet 2 moves to the vision inspection module, the upper-surface inspection module (the first upper-surface vision inspection unit 420) that inspects the upper surfaces of the components 1 loaded on the pallet 2 moves in linkage with the movement of the first transfer tool 610, and is thereby positioned to inspect the upper surfaces of the components 1. Thanks to this efficient use of the modules, the component handler has the advantage that its size can be reduced.
In addition, owing to the arrangement of the first bottom-surface vision inspection unit 410 and the first upper-surface vision inspection unit 420 described above, the component handler of the present invention has spare space, so that modules providing further functions can be added to it, including vision inspection modules of types other than the first bottom-surface vision inspection unit 410 and the first upper-surface vision inspection unit 420.
As one example, as shown in Fig. 1, the component handler of the present invention may further include: a second guide rail 690 arranged behind the first guide rail 680 and parallel to it, with respect to the transfer direction of the pallet 2 in the loading unit 100; a second bottom-surface vision inspection unit 430 arranged at one side of the loading unit 100, perpendicular to the transfer direction of the pallet 2 in the loading unit 100, to perform vision inspection of the components 1; and a second transfer tool 630 coupled to the second guide rail 690 and movable along it, which picks components up from the loading unit 100 and transfers them to the second bottom-surface vision inspection unit 430 for vision inspection.
The second guide rail 690, as a structure arranged behind the first guide rail 680 and parallel to it with respect to the transfer direction of the pallet 2 of the loading unit 100, may have a structure similar to that of the first guide rail 680.
The second bottom-surface vision inspection unit 430 is arranged at one side of the loading unit 100, perpendicular to the transfer direction of the pallet in the loading unit 100, and performs either an additional 2D vision inspection or a 3D vision inspection on the components 1. As such a structure, it may be similar to the first bottom-surface vision inspection unit 410, or it may have various other structures depending on the type and manner of the vision inspection.
As one example, the second bottom-surface vision inspection unit 430 performs at least one of a 2D vision inspection and a 3D vision inspection, as distinct from the inspection of fine cracks, fine scratches, and the like.
The second transfer tool 630 is coupled to the second guide rail 690 and moves along it; as a structure that picks components up from the loading unit 100 and transfers them to the second bottom-surface vision inspection unit 430 for vision inspection, it may have a structure identical or similar to that of the first transfer tool 610 described above.
In addition, as shown in Fig. 5, a further upper-surface vision inspection unit (not shown), namely a second upper-surface vision inspection unit 440, may additionally be provided on the second guide rail 690; it operates in linkage with the first bottom-surface vision inspection unit 410 described above and performs movements and inspections similar to those of the first upper-surface vision inspection unit 420.
That is, the second bottom-surface vision inspection unit 430 and the second upper-surface vision inspection unit 440 may be arranged in the same or a similar manner as the combined and linked movement of the first bottom-surface vision inspection unit 410 and the first upper-surface vision inspection unit 420.
In other words, the combined structure of the first bottom-surface vision inspection unit 410 and the first upper-surface vision inspection unit 420 may, as shown in Fig. 5, be arranged in multiple rows (two rows in the case of Fig. 5) along the transfer direction of the pallet 2 inside the loading unit 100.
In this case, the first bottom-surface vision inspection unit 410 and the first upper-surface vision inspection unit 420 may be arranged along a direction perpendicular to the transfer direction of the pallet 2 in the loading unit 100.
Also, one or more guide rails 680, 690 may be provided so that the first upper-surface vision inspection unit 420 can move linearly along a direction perpendicular to the moving direction of the pallet 2 in the loading unit 100.
In addition, the second guide rail 690, as a structure that guides the linear movement of the second transfer tool 630 and the second upper-surface vision inspection unit 440, may have a structure similar to that of the first guide rail 680.
Further, the second guide rail 690 may be formed integrally with the first guide rail 680; in that case, the first bottom-surface vision inspection unit 410 and the first upper-surface vision inspection unit 420 may be arranged on the front side of the first guide rail 680, and the second bottom-surface vision inspection unit 430 and the second upper-surface vision inspection unit 440 on the rear side of the first guide rail 680.
The second bottom-surface vision inspection unit 430, as a structure that performs an inspection similar to that of the first upper-surface vision inspection unit 420, may have a structure similar to that of the first upper-surface inspection unit 420; also, as a vision inspection structure performing at least one of a 2D vision inspection and a 3D vision inspection, it may have various other structures, and it moves and performs its inspection in linkage with the first upper-surface vision inspection unit 410 described above.
Also, the component handler may additionally include a third upper-surface vision inspection unit 450 which, in addition to the combined structure of the first bottom-surface vision inspection unit 410 and the first upper-surface vision inspection unit 420, is arranged, as shown in Fig. 5, on the transfer path of the pallet 2 in the loading unit 100 and performs vision inspection of the components 1.
In order not to interfere with the pallet 2 when it is transferred by the pallet transfer device (not shown) between the loading unit 100 and the unloading units 310, 320, 330, the third upper-surface vision inspection unit 450 can move linearly in a direction perpendicular to the transfer path of the pallet 2 in the loading unit 100.
That is, the third upper-surface vision inspection unit 450 is arranged at the end section of the loading unit 100 and, to avoid interfering with the pallet 2 while it is transferred by the pallet transfer device (not shown), may be arranged so as to be movable toward one side of the loading unit 100, i.e. toward the right in the drawing.
The third upper-surface vision inspection unit 450, as a structure similar to the first upper-surface vision inspection unit 420 or the second upper-surface vision inspection unit 440 described above, may have various other structures as long as it can perform at least one of a 2D vision inspection and a 3D vision inspection.
Next, the vision inspection performed by the component handler having the structure described above is explained with reference to the accompanying drawings; the vision inspection method described below, however, is not limited to the structure of the component handler of the embodiments of the present invention.
The vision inspection method of the present invention, as shown in Fig. 4, performs vision inspection of a plurality of spherical protrusions 1a on a component 1 on whose surface the plurality of protrusions 1a are formed.
The vision inspection method of the present invention includes: an image acquisition step of acquiring a first image, a second image, and a three-dimensional third image, the first image being produced on the surface by first incident light incident at a first incidence angle of 0 to 45 degrees to the surface of the component 1, the second image being produced on the surface by second incident light incident at a second incidence angle of 45 to 90 degrees to the surface of the component 1, and the third image being a three-dimensional image of the protrusions 1a formed on the surface of the component 1; a three-dimensional shape characteristic determination step of determining, on the basis of the first image and the second image, the positions and three-dimensional shape characteristics of the protrusions 1a and storing them as three-dimensional shape characteristic information; and a contour interpolation step of interpolating the three-dimensional contour of the third image acquired by the 3D vision inspection unit 720, on the basis of the three-dimensional shape characteristic information stored in the three-dimensional shape characteristic determination step.
The image acquisition step includes acquiring the first image, the second image, and the three-dimensional third image: the first image is produced on the surface by first incident light incident at a first incidence angle (low angle) of 0 to 45 degrees to the surface of the component 1; the second image is produced on the surface by second incident light incident at a second incidence angle (high angle) of 45 to 90 degrees to the surface of the component 1; and the third image is a three-dimensional image of the protrusions 1a formed on the surface of the component 1. As the step of acquiring these images, the image acquisition step can be performed by a vision inspection module identical to the first bottom-surface vision inspection unit 410 of the component handler described above.
To restate, the image acquisition step is the step of acquiring, for the three-dimensional vision inspection of the protrusions 1a formed on the surface of the component 1, the two-dimensional first image under the low-angle illumination and the two-dimensional second image under the high-angle illumination, and of acquiring the three-dimensional image of the protrusions 1a as the third image.
The three-dimensional shape characteristic determination step, as the step of determining the positions and three-dimensional shape characteristics of the protrusions 1a on the basis of the first image and the second image and storing them as three-dimensional shape characteristic information, may be performed by various methods.
The three-dimensional shape characteristic information stored in the three-dimensional shape characteristic determination step is information, obtained from the two-dimensional images, about three-dimensional shape characteristics of types classified in advance.
As one example, if a flat part is present on top of a protrusion 1a formed on the surface of the component 1, then in the second image formed by the second incident light at the high angle, a bright region is formed at the center of the shadow region.
Accordingly, if a bright region is formed at the center of a shadow region in the second image formed by the second incident light, a flat part exists on top of that protrusion 1a, and this can be reflected in the three-dimensional shape analysis when the contour is interpolated.
To this end, the three-dimensional shape characteristic determination step includes: a shadow region analysis step of determining the positions of the protrusions 1a from the plurality of shadow regions formed in the first image in correspondence with the protrusions 1a, and of determining whether a region brighter than the shadow region exists inside the plurality of shadow regions formed in the second image in correspondence with the protrusions 1a; and a characteristic storage step of storing in the three-dimensional shape characteristic information, when a region brighter than the corresponding shadow region is found in the shadow region analysis step, the information that the protrusion 1a has, in the shadow region corresponding to its upper part, a flat part of the size of the bright region. The contour interpolation step then interpolates the three-dimensional contour of the third image acquired by the 3D vision inspection unit 720, taking as a reference the information on the flat part, of the size of the bright region in the shadow region corresponding to the upper part of the protrusion 1a, stored for the corresponding protrusion 1a in the three-dimensional shape characteristic information.
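The shadow-region analysis above can be sketched with a minimal NumPy example: a flat-topped ball leaves a dark shadow field in the high-angle (second) image with a bright field at its center, so counting bright pixels inside the shadow gives the flat-part size. The patch layout, circle geometry, and both thresholds are assumptions for illustration, not details from the patent.

```python
import numpy as np

def flat_top_area(patch: np.ndarray, shadow_thr: float = 60.0,
                  bright_thr: float = 120.0) -> int:
    """patch: grayscale window of the second (high-angle) image centred
    on one protrusion 1a. Returns the area (pixel count) of any region
    inside the protrusion's shadow field that is brighter than the
    shadow, i.e. the flat-part size to store; 0 means no flat top."""
    h, w = patch.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    # Central portion of the shadow field (illustrative radius).
    inner = (yy - cy) ** 2 + (xx - cx) ** 2 < (min(h, w) // 4) ** 2
    bright_inside = inner & (patch > bright_thr)
    # Only meaningful if the surrounding field really is shadow.
    if patch[~inner].mean() < shadow_thr:
        return int(bright_inside.sum())
    return 0
```

For a dark 21x21 patch with a 3x3 bright spot at its center, the function reports a flat part of 9 pixels.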
As another example, depending on how far it protrudes from the surface of the component 1, the center of a protrusion 1a may lie lower than the surface of the component 1 or at a higher position; its center may thus be formed in various ways.
However, depending on the position of the center of the protrusion 1a relative to the surface of the component 1, the size and thickness of the ring shape in the first image formed by the first incident light at the low angle vary continuously.
Accordingly, from the size and thickness of the ring shape in the first image formed by the first incident light, the position of the center of the protrusion 1a relative to the surface of the component 1 can be estimated, and this can be reflected when the three-dimensional shape analysis interpolates the contour.
Therefore, the three-dimensional shape characteristic determination step includes a characteristic storage step of storing, on the basis of the size of the ring portion of the first image, the center position information of the protrusion 1a relative to the surface of the component 1 in the three-dimensional shape characteristic information; and the contour interpolation step interpolates the three-dimensional contour of the third image acquired by the 3D vision inspection unit 720, on the basis of the center position of the protrusion 1a relative to the surface of the component 1 stored in the three-dimensional shape characteristic information.
The interpolation of the three-dimensional contour is performed as follows: based on the third image, the radius about the center of the protrusion 1a resolved from the third image is processed with a least-mean-square algorithm (LMS, least mean square) to interpolate the contour relative to the surface of the component 1.
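The least-mean-square radius fit mentioned above can be illustrated with a standard linear least-squares sphere fit: given 3D surface samples of one ball from the third image, solve for the center and radius, and use the fitted sphere to fill in missing contour points. This is a generic stand-in under the assumption that the protrusion is a spherical cap; the patent does not fix the exact formulation.

```python
import numpy as np

def fit_sphere_lms(pts: np.ndarray):
    """Least-squares sphere fit (illustrative stand-in for the LMS
    contour interpolation). pts: (N, 3) surface samples of one
    protrusion 1a taken from the third image.

    Uses the linearisation |p|^2 = 2 c.p + (r^2 - |c|^2), which turns
    the sphere equation into a linear system in (c, r^2 - |c|^2)."""
    A = np.c_[2 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

Points missing from the measured contour (for example past the apex, where the slit light bends) can then be replaced by points on the fitted sphere.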
In addition, the first bottom-surface vision inspection unit 410 and the like can, as vision inspection modules, perform 3D vision inspection of the shape of the bottom surface of the component 1 and of the positions of protrusions such as the spherical terminals formed on it; in this case, it is preferable that the 3D vision inspection of the shape and position of protrusions such as spherical terminals be as accurate as possible.
To this end, the vision inspection method of the present invention, as shown in Figs. 6a to 8b, performs vision inspection of a plurality of spherical protrusions 1a on a component 1 on whose surface the protrusions 1a are formed.
The vision inspection method of the present invention includes: an image acquisition step in which slit light can be moved relative to the surface of the component 1 and is irradiated onto the surface of the component 1 at a first incidence angle of 0 to 90 degrees to the surface of the component 1, and while the height of the surface of the component 1 is measured by optical triangulation, a first image formed on the surface of the component 1 irradiated by the slit light is acquired; and a slit light analysis step in which, in the first image acquired in the image acquisition step, within the regions above a preset pixel value, the position of the maximum height measured in the image acquisition step is designated as the position of the apex of the protrusion 1a.
The image acquisition step, as the step of acquiring the first image, can be performed by various methods. In this step, the slit light can be moved relative to the surface of the component 1, and first slit light forming a first incidence angle of 0 to 90 degrees with the surface of the component 1 is irradiated onto the surface of the component 1; the height of the surface of the component 1 is measured by optical triangulation, and the first image formed on the component 1 irradiated by the slit light is acquired.
Here, since the slit light is distinguished by its brightness value, it is preferably irradiated as monochromatic light, for example white light.
The height of the surface of the component 1, that is, the height of the protrusions 1a such as the spherical terminals and bumps formed on the surface of the component 1, is measured by optical triangulation using the irradiated slit light.
However, as shown in Fig. 9, even though the apex of the protrusion 1a is its highest point, the measured height can also reach a maximum at positions past the apex of the protrusion 1a because of the bending of the slit light.
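The optical triangulation used here converts the lateral displacement of the slit line in the camera image into a height. A common textbook relation, stated here as a generic sketch and not as the patent's exact optical geometry, is h = d / tan(theta), where d is the lateral shift of the line and theta is the incidence angle of the light measured from the vertical.

```python
import math

def height_from_shift(pixel_shift: float, mm_per_pixel: float,
                      incidence_deg: float) -> float:
    """Generic laser-triangulation relation (not the patent's exact
    optics): a surface raised by h shifts the slit line sideways by
    d = h * tan(theta) when the light arrives at `incidence_deg` from
    the vertical, so h = d / tan(theta)."""
    d = pixel_shift * mm_per_pixel          # lateral shift in mm
    return d / math.tan(math.radians(incidence_deg))

# A 12-pixel shift at 0.01 mm/px and 45 degrees gives a 0.12 mm height.
print(round(height_from_shift(12, 0.01, 45.0), 4))   # → 0.12
```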
For reference, Figs. 6a and 6b illustrate the irradiation pattern of the slit light before the apex of the protrusion, Figs. 7a and 7b illustrate the irradiation pattern of the slit light at the apex of the protrusion, and Figs. 8a and 8b illustrate the irradiation pattern of the slit light after the apex of the protrusion.
When the slit light is irradiated onto the protrusion 1a, the light is bent; because of this bending, an error arises when determining the apex position of the protrusion 1a, which causes the problem that the reliability decreases when the vision inspection is repeated.
In particular, the ideal shape of a protrusion 1a, like that of a spherical terminal, is a portion of a sphere; when a part of its surface is damaged, the bending of the light is maximized, greatly increasing the error in the apex position of the protrusion 1a during vision inspection and greatly reducing the reliability of repeated inspections.
Therefore, the slit light irradiation on which the present invention relies measures the height of the surface of the component 1 by optical triangulation and uses the image of the component 1 irradiated by the slit light, so that the measurement error of the vision inspection is minimized and the reliability of the inspection results is improved even when the vision inspection is repeated.
To this end, in the image acquisition step, the slit light can be moved relative to the surface of the component 1 and is irradiated onto the surface of the component 1 at a first incidence angle of 0 to 90 degrees; while the height of the surface of the component 1 is measured by optical triangulation, the first image of the surface of the component 1 irradiated by the slit light is acquired.
Here, the height of the surface of the component 1 is imaged and measured as one or more pixel positions corresponding to a first height of the surface of the component 1.
The slit light analysis step, as the step of designating, within the regions of the first image acquired in the image acquisition step whose pixel value is above the preset value, the position of maximum measured height as the position of the apex of the protrusion 1a, can be performed by various methods.
Specifically, the regions of the first image acquired in the image acquisition step that are above the preset pixel value are first set as valid regions.
Then, within the valid regions, the position where the height measured in the image acquisition step is greatest is designated as the position of the apex of the protrusion 1a.
Here, when the slit light passes beyond the apex of the protrusion 1a, the height (H) measured by optical triangulation may still increase, but the pixel value (illuminance) of the slit light irradiated onto the surface of the component 1 becomes relatively small.
Taking this into account, the slit light analysis step computes the pixel values above a preset first threshold in the first image acquired in the image acquisition step, computes from them the amplitude of the slit light irradiated onto the surface of the component 1, and designates the position of the maximum computed slit light amplitude as the position of the apex of the protrusion 1a.
In addition, the slit light analysis step maps the first image acquired in the image acquisition unit to the size of the component 1 and the pixel size of the first image.
Once the actual positions on the component 1 correspond to the pixel positions of the first image, the actual position on the component 1 can then be computed from the pixel position of the maximum computed slit light amplitude.
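The pixel-to-component mapping described above reduces to a scale transform when the image spans the component exactly; real systems would additionally calibrate offsets and lens distortion, so the following is a simplified sketch.

```python
def pixel_to_component(px, py, component_w_mm, component_h_mm,
                       image_w_px, image_h_px):
    """Map a pixel position of the first image onto a physical position
    on the component 1, assuming the image spans the component exactly
    (a simplification; offsets and distortion are ignored)."""
    return (px * component_w_mm / image_w_px,
            py * component_h_mm / image_h_px)

# Pixel (320, 240) in a 640x480 image of a 10 mm x 8 mm component:
print(pixel_to_component(320, 240, 10.0, 8.0, 640, 480))  # → (5.0, 4.0)
```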
The vision inspection method of the present invention can be performed by the 3D vision inspection unit 720 described above, but is not limited to Figs. 1 to 3c; it may also be performed by the vision inspection module shown in Fig. 5, and any module may be used as long as it is a vision inspection module that performs three-dimensional vision inspection using slit light.
The foregoing is an exemplary description of preferred embodiments of the present invention; the scope of the present invention is not limited to the specific embodiments described above, and appropriate changes can be made within the scope of protection of the patent claims.
Claims (12)
1. A component handler, characterized by including:
a loading unit (100) on which pallets (2) each carrying a plurality of components (1) are loaded and which moves the pallets (2) linearly;
a first bottom-surface vision inspection unit (410) arranged at one side of the loading unit (100), perpendicular to the transfer direction of the pallet (2) in the loading unit (100), to perform vision inspection of the components (1);
a first guide rail (680) arranged perpendicular to the moving direction of the pallet (2) of the loading unit (100);
a first transfer tool (610) coupled to the first guide rail (680) so as to move along the first guide rail (680), which picks components (1) up from the loading unit (100) and transfers them to the first bottom-surface vision inspection unit (410) for vision inspection;
a first upper-surface vision inspection unit (420) coupled to the first guide rail (680) and moving in linkage with the movement of the first transfer tool (610), which inspects the upper surfaces of the components (1) on the pallet (2) loaded in the loading unit (100) when the first transfer tool (610) has moved to the first bottom-surface vision inspection unit (410); and
unloading units (310, 320, 330) which receive from the loading unit (100) the pallets (2) carrying the components (1) after the vision inspection, and sort the components (1) within the pallets (2) according to the vision inspection results.
2. The component handler according to claim 1, characterized by further including:
a second guide rail (690) arranged parallel to the first guide rail (680);
a second bottom-surface vision inspection unit (430) arranged at one side of the loading unit (100), perpendicular to the transfer direction of the pallet (2) in the loading unit (100), to perform vision inspection of the components (1); and
a second transfer tool (630) coupled to the second guide rail (690) and moving along the second guide rail (690), which picks components (1) up from the loading unit (100) and transfers them to the second bottom-surface vision inspection unit (430) for vision inspection.
3. The component handler according to claim 2, characterized by further including:
a second upper-surface vision inspection unit (440) coupled to the second guide rail (690) and moving in linkage with the movement of the second transfer tool (630), which inspects the upper surfaces of the components (1) on the pallet (2) loaded in the loading unit (100) when the first transfer tool (610) has moved to the first bottom-surface vision inspection unit (410).
4. The component handler according to any one of claims 1 to 3, characterized by further including:
a third upper-surface vision inspection unit (450) arranged on the transfer path of the pallet (2) in the loading unit (100), to perform vision inspection of the components (1).
5. The component handler according to any one of claims 1 to 3, characterized in that
the third upper-surface vision inspection unit (450) is arranged perpendicular to the transfer path of the pallet (2) of the loading unit (100), and moves linearly along the horizontal direction.
6. The component handler according to any one of claims 1 to 3, characterized in that
the first bottom-surface vision inspection unit (410) includes:
a 2D vision inspection unit (710) including a first image acquisition unit (712) which, for 2D vision inspection, acquires images of the bottom surfaces of the components (1) picked up by the first transfer tool (610), and a first light source unit (711) which irradiates light onto the bottom surfaces of the components (1) picked up by the first transfer tool (610) for the image acquisition by the first image acquisition unit (712); and
a 3D vision inspection unit (720) including a second image acquisition unit (722) which, for 3D vision inspection, acquires images of the bottom surfaces of the components (1) picked up and transferred by the first transfer tool (610), and a second light source unit (721) which irradiates light onto the bottom surfaces of the components (1) picked up and transferred by the first transfer tool (610) for the image acquisition by the second image acquisition unit (722).
7. A vision inspection method for performing vision inspection of a plurality of spherical protrusions (1a) on a component on whose surface the protrusions (1a) are formed, characterized by including:
an image acquisition step of acquiring a first image, a second image, and a three-dimensional third image, the first image being produced on the surface by first incident light incident at a first incidence angle of 0 to 45 degrees to the surface of the component (1), the second image being produced on the surface by second incident light incident at a second incidence angle of 45 to 90 degrees to the surface of the component (1), and the third image being a three-dimensional image of the protrusions (1a) formed on the surface of the component (1);
a three-dimensional shape characteristic determination step of determining, on the basis of the first image and the second image, the positions and three-dimensional shape characteristics of the protrusions (1a) and storing them as three-dimensional shape characteristic information; and
a contour interpolation step of interpolating the three-dimensional contour of the third image on the basis of the three-dimensional shape characteristic information stored in the three-dimensional shape characteristic determination step.
8. The vision inspection method according to claim 7, characterized in that
the three-dimensional shape characteristic determination step includes:
a shadow region analysis step of determining the positions of the protrusions (1a) from the plurality of shadow regions formed in the first image in correspondence with the protrusions (1a), and of determining whether a brighter region exists inside the plurality of shadow regions formed in the second image in correspondence with the protrusions (1a); and
a characteristic storage step of storing in the three-dimensional shape characteristic information, when a region brighter than the shadow region is found in the shadow region analysis step, the information that the protrusion (1a) has a flat part, of the size of the bright region, in the shadow region corresponding to the upper part of the protrusion (1a),
and the contour interpolation step interpolates the three-dimensional contour of the third image taking as a reference the information on the flat part, of the size of the bright region in the shadow region corresponding to the upper part of the protrusion (1a), stored for the corresponding protrusion (1a) in the three-dimensional shape characteristic information.
9. The vision inspection method according to claim 7, characterized in that
the three-dimensional shape characteristic determination step includes a characteristic storage step of storing, on the basis of the size of the ring portion of the first image, the center position information of the protrusion (1a) relative to the surface of the component (1) in the three-dimensional shape characteristic information,
and the contour interpolation step interpolates the three-dimensional contour of the third image on the basis of the center position information of the protrusion (1a) relative to the surface of the component (1) stored in the three-dimensional shape characteristic information.
10. A vision inspection method for performing vision inspection on a plurality of spherical protuberances (1a) formed on the surface of an element (1), characterized by comprising:
an image acquisition step in which slit light is moved relative to the surface of the element (1) while illuminating that surface at a first incidence angle of 0 to 90 degrees to it, the height of the surface of the element (1) is measured by optical triangulation, and a first image of the surface of the element (1) illuminated by the slit light is acquired; and
a slit light analysis step in which, within the region of the first image acquired in the image acquisition step whose pixel values exceed a preset value, the position of maximum height measured in the image acquisition step is designated as the apex position of the protuberance (1a).
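The slit-light analysis of claim 10 can be sketched as follows, assuming the image acquisition step yields an intensity image and a triangulated height map of equal shape (NumPy arrays here; the threshold parameter name is illustrative):

```python
import numpy as np

def find_apex(intensity, height, pixel_threshold):
    """Among pixels brighter than the preset value, designate the position
    of maximum triangulated height as the bump apex."""
    masked = np.where(intensity > pixel_threshold, height, -np.inf)
    idx = np.unravel_index(np.argmax(masked), height.shape)
    return idx, height[idx]

# Synthetic scene: one hemispherical bump of radius 10 centered at (32, 32)
yy, xx = np.mgrid[0:64, 0:64]
r2 = (yy - 32) ** 2 + (xx - 32) ** 2
height = np.sqrt(np.clip(10.0 ** 2 - r2, 0.0, None))
intensity = np.where(r2 <= 10 ** 2, 200, 50)  # the bump reflects more light

apex, h = find_apex(intensity, height, pixel_threshold=100)
# apex == (32, 32), h == 10.0
```

Restricting the search to bright pixels keeps spurious height readings from the dark background out of the apex designation, which is the substance of the claimed analysis step.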
11. The vision inspection method according to claim 10, characterized in that
the protuberance (1a) is a ball-shaped terminal.
12. The vision inspection method according to claim 10, characterized in that
the slit light is monochromatic light.
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2015-0020102 | 2015-02-10 | ||
KR20150020102 | 2015-02-10 | ||
KR1020150191013A KR102059140B1 (en) | 2015-02-10 | 2015-12-31 | Device handler, and vision inspection method |
KR10-2015-0191013 | 2015-12-31 | ||
KR10-2015-0191078 | 2015-12-31 | ||
KR1020150191078A KR102059139B1 (en) | 2015-12-31 | 2015-12-31 | Vision inspection method |
PCT/KR2016/001243 WO2016129870A1 (en) | 2015-02-10 | 2016-02-04 | Component handler and vision inspection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107209128A true CN107209128A (en) | 2017-09-26 |
CN107209128B CN107209128B (en) | 2021-03-09 |
Family
ID=57850633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201680009805.XA Active CN107209128B (en) | 2015-02-10 | 2016-02-04 | Component handler and visual inspection method |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN107209128B (en) |
SG (1) | SG11201706456WA (en) |
TW (1) | TWI624660B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000074845A (en) * | 1998-08-27 | 2000-03-14 | Fujitsu Ltd | Bump inspection method and bump inspection device |
KR20010108632A (en) * | 2000-05-30 | 2001-12-08 | 한종훈 | Apparatus and Method for Inspecting Solder Ball of Semiconductor Device |
CN101484917A (en) * | 2006-06-30 | 2009-07-15 | Pnn医疗公司 | Method of identification of an element in two or more images |
CN101652848A * | 2007-04-13 | 2010-02-17 | 宰体有限公司 | Apparatus for inspecting semiconductor devices |
CN101769874A (en) * | 2008-12-31 | 2010-07-07 | 宰体有限公司 | Vision inspection apparatus |
CN101826476A (en) * | 2009-03-05 | 2010-09-08 | 宰体有限公司 | Vision inspection apparatus |
CN101847572A (en) * | 2009-03-27 | 2010-09-29 | 宰体有限公司 | Semiconductor part sorting device and method for sorting thereof |
CN101887025A (en) * | 2009-05-12 | 2010-11-17 | 宰体有限公司 | Vision inspection apparatus and visible detection method thereof |
KR20120087680A (en) * | 2011-01-28 | 2012-08-07 | 한국과학기술원 | The measurement method of PCB bump height by using three dimensional shape detector using optical triangulation method |
KR20140022988A (en) * | 2012-08-14 | 2014-02-26 | (주)제이티 | Device handler and reel fixing device |
KR101454319B1 (en) * | 2010-05-10 | 2014-10-28 | 한미반도체 주식회사 | Singulation Apparatus for Manufacturing Semiconductor Packages |
WO2016129870A1 (en) * | 2015-02-10 | 2016-08-18 | (주)제이티 | Component handler and vision inspection method |
2016
- 2016-02-02 TW TW105103261A patent/TWI624660B/en active
- 2016-02-04 SG SG11201706456WA patent/SG11201706456WA/en unknown
- 2016-02-04 CN CN201680009805.XA patent/CN107209128B/en active Active
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113160112A (en) * | 2020-12-30 | 2021-07-23 | 苏州特文思达科技有限公司 | Intelligent steel grating product identification and detection method based on machine learning |
CN113160112B (en) * | 2020-12-30 | 2024-02-02 | 苏州特文思达科技有限公司 | Intelligent recognition and detection method for steel grating products based on machine learning |
Also Published As
Publication number | Publication date |
---|---|
CN107209128B (en) | 2021-03-09 |
TWI624660B (en) | 2018-05-21 |
SG11201706456WA (en) | 2017-09-28 |
TW201640097A (en) | 2016-11-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103630549B | System and method for detecting wafers | |
KR101108672B1 (en) | Vision inspection apparatus and vision inspection method therefor | |
CN101853797B | System and method for detecting wafers | |
CN104819829B | Fully automatic LED digital tube photoelectric detection system and method | |
US9704232B2 (en) | Stereo vision measurement system and method | |
US5812268A (en) | Grid array inspection system and method | |
US20090123060A1 (en) | inspection system | |
KR101083346B1 (en) | Inspection device of led chip | |
US20140113526A1 (en) | Wafer process control | |
KR102189285B1 (en) | Method of obtaining location information of dies | |
CN104515477A (en) | Three-dimensional measurement device, three-dimensional measurement method, and manufacturing method of substrate | |
US20190089301A1 (en) | System and method for solar cell defect detection | |
JP2016085045A (en) | Bump inspection device | |
CN107209128A | Component handler and vision inspection method | |
KR101380653B1 (en) | Vision inspection method for Vision inspection equipment | |
CN109219730B (en) | System and method for pin angle inspection using multi-view stereo vision | |
KR102059139B1 (en) | Vision inspection method | |
KR102046081B1 (en) | Vision Inspection Module and device inspection apparatus | |
CN108449975A | Vision inspection module and element inspection system including the same | |
KR20160098640A (en) | Device handler, and vision inspection method | |
KR102059140B1 (en) | Device handler, and vision inspection method | |
US20040086198A1 (en) | System and method for bump height measurement | |
KR101030445B1 (en) | Apparatus and method for inspecting die and wire bonding of led chip | |
WO2016129870A1 (en) | Component handler and vision inspection method | |
KR20170027747A (en) | bonding wire chip inspection apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||