WO2010001973A1 - Image inspection processing device, image inspection processing method, program, and recording medium - Google Patents

Image inspection processing device, image inspection processing method, program, and recording medium

Info

Publication number
WO2010001973A1
WO2010001973A1 (application PCT/JP2009/062148)
Authority
WO
WIPO (PCT)
Prior art keywords
luminance
halation
region
pixels
image
Prior art date
Application number
PCT/JP2009/062148
Other languages
French (fr)
Japanese (ja)
Inventor
Shota Ueki (章太 植木)
Original Assignee
Sharp Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Kabushiki Kaisha
Publication of WO2010001973A1 publication Critical patent/WO2010001973A1/en

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00: Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/006: Electronic inspection or testing of displays and display drivers, e.g. of LED or LCD displays
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2330/00: Aspects of power supply; Aspects of display protection and defect management
    • G09G2330/10: Dealing with defective pixels

Definitions

  • the present invention relates to an image inspection processing apparatus, an image inspection processing method, a program, and a recording medium for inspecting a display surface of an inspection object capable of gradation display based on a captured image of the display surface.
  • for image inspection of a flat panel display, an area sensor having an image sensor such as a CCD is considered effective. This is because, in contrast to visual judgment, which suffers from individual differences, an inspection using an image captured by an image sensor can judge defects against a fixed standard.
  • in addition, thanks to higher-density image sensors, pixel-shifting methods, super-resolution techniques, and the like, images captured of the flat panel displays under inspection have become high in resolution, so the resolving power needed for defect detection can be secured.
  • the defects encountered in flat panel display inspection include low-luminance defects and high-luminance defects whose luminance is far higher, so the range of light intensity (dynamic range) of the object is very wide.
  • an image sensor such as a CCD, whose dynamic range is narrow compared with visual inspection, cannot determine the defect rank accurately, because low-luminance defects go undetected and high-luminance defects saturate (halation).
  • Patent Documents 1 to 3 have the following problems.
  • the present invention has been made in view of the above problems, and its object is to provide an image inspection processing apparatus that corrects the luminance of the halation region so that the feature amount used for judgment can be calculated close to its original value, enabling accurate determination of the defect rank.
  • an image inspection processing apparatus according to the present invention inspects an inspection object capable of gradation display, based on a captured image of the display surface of the inspection object, and includes: calculation means for calculating the area of the halation region in the captured image, the luminance of the pixels surrounding the halation region, the area of the region including all pixels inward of those peripheral pixels, and the estimated peak position of the luminance within the halation region; and estimation means for estimating the substantial luminance value of each pixel in the halation region by applying to the halation region a predetermined luminance distribution model based on the calculated values.
  • the image inspection processing apparatus of the present invention inspects an inspection object capable of gradation display based on a captured image of a display surface of the inspection object.
  • the inspection object here is a flat panel display such as a liquid crystal display panel.
  • the object to be inspected has a function of displaying characters and images in gradation on its display surface, and the image inspection processing apparatus of the present invention inspects in particular whether there are pixel defects during gradation display.
  • the image inspection processing device calculates various values shown below from the captured image on the display surface. That is, the area of the halation region in the captured image, the luminance of the peripheral pixels of the halation region, the area of the region including all pixels inside the peripheral pixels, and the estimated peak position of the luminance in the halation region are calculated.
  • the halation region is a region where the luminance values of the pixels constituting the captured image have reached the maximum value (for example, 255 in the case of 8-bit gradation).
  • the image inspection processing apparatus estimates the actual luminance value of the pixels constituting this halation area. Specifically, first, a predetermined luminance distribution model (for example, a quadratic curve model) based on each value calculated as described above is applied to the halation region. Thereby, the substantial luminance value of each pixel in the halation area is estimated. The actual luminance value thus obtained is close to the original luminance value in the halation area. Therefore, it is possible to calculate a feature amount based on the actual luminance value and determine an accurate defect rank based on the feature amount.
  • An image inspection processing method according to the present invention is executed by an image inspection processing apparatus that inspects an inspection object capable of gradation display, based on a captured image of the display surface of the inspection object, and includes: a calculation step of calculating the area of the halation region in the captured image, the luminance of the pixels surrounding the halation region, the area of the region including all pixels inward of those peripheral pixels, and the estimated peak position of the luminance within the halation region; and an estimation step of estimating the substantial luminance value of each pixel in the halation region by applying to the halation region a predetermined luminance distribution model based on the calculated values.
  • it is preferable that the calculation means takes a region in which pixels having the maximum luminance value in the captured image are contiguously gathered as the halation region, and calculates the area of the halation region from the number of pixels constituting it.
  • it is preferable that the calculation means calculates an average luminance value of the peripheral pixels, and that the estimation means applies to the halation region a luminance distribution model based on the average luminance value in addition to the above values.
  • it is preferable that the calculation means calculates the sum of the number of peripheral pixels and the number of pixels in the halation region as the area of the region including all pixels inward of the peripheral pixels.
  • with this configuration, the area of the region including all pixels inward of the peripheral pixels is calculated from the number of peripheral pixels and the number of pixels in the halation region, so that area can be calculated accurately.
  • the calculation means calculates a center of gravity of the halation region from the shape of the halation region, and sets the center of gravity as the estimated peak position.
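The calculation means described above can be illustrated with a minimal Python/NumPy sketch. It assumes an 8-bit grayscale captured image, treats the saturated (value 255) pixels as a single halation region, and approximates the peripheral pixels as the pixels lying within n pixels outside that region; the helper names and the dilation-based neighbourhood are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def binary_dilate(mask, iterations=1):
    """4-neighbourhood binary dilation (simple helper, no external dependencies)."""
    out = mask.copy()
    for _ in range(iterations):
        padded = np.pad(out, 1)
        out = (padded[1:-1, 1:-1] | padded[:-2, 1:-1] | padded[2:, 1:-1]
               | padded[1:-1, :-2] | padded[1:-1, 2:])
    return out

def halation_measurements(image, n=1, saturation=255):
    """Return Sh, In, Sn, and the estimated luminance peak position (the centroid)."""
    halation = image >= saturation                 # halation region: saturated pixels
    Sh = int(halation.sum())                       # area = number of saturated pixels
    ring = binary_dilate(halation, n) & ~halation  # pixels up to n pixels outside the region
    In = float(image[ring].mean()) if ring.any() else 0.0  # average peripheral luminance
    Sn = Sh + int(ring.sum())                      # peripheral pixels plus all inner pixels
    ys, xs = np.nonzero(halation)
    centroid = (ys.mean(), xs.mean())              # centre of gravity = estimated peak position
    return Sh, In, Sn, centroid
```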
  • it is preferable that the luminance distribution model is a curved graph and that the estimation means estimates the luminance values of the halation region by curve interpolation.
  • because the model is a curve rather than a straight line, the estimated luminance values can be brought closer to the actual luminance distribution.
  • the image inspection processing apparatus may be realized by a computer.
  • a program for realizing the image inspection processing apparatus in the computer by operating the computer as each of the above-described means and a computer-readable recording medium on which the program is recorded also fall within the scope of the present invention.
  • the image inspection processing apparatus can estimate the substantial luminance value of each pixel in the halation area in the captured image, and thus has an effect of accurately determining the rank of the defect.
  • FIG. 1 is an overall configuration diagram of an image inspection processing apparatus, showing an embodiment of the present invention.
  • FIG. 2 is a block diagram of the image inspection processing apparatus.
  • FIG. 3 is a diagram illustrating a difference image generation method.
  • FIG. 4 is a flowchart of a luminance correction method for pixels in a halation region.
  • FIG. 5 is a schematic diagram illustrating a pixel having a defect in a halation region.
  • FIG. 6 is a graph illustrating a method of correcting luminance in a halation region.
  • FIG. 7 is a diagram explaining point spread by the PSF.
  • FIG. 8 is a diagram illustrating the effect of luminance correction in a halation region.
  • FIG. 1 is a schematic diagram illustrating a configuration of an image inspection processing apparatus 1 according to the present embodiment.
  • the image inspection processing device 1 includes an imaging device 2, a control device 4, and a display device 6.
  • the imaging device 2 includes a lens 8 (optical imaging means) and an imaging element (solid-state imaging element) 10 inside the housing.
  • the image sensor 10 is composed of a plurality of pixels.
  • the control device 4 includes an image processing device 12 and an image memory 14.
  • the display device 6 includes a defect information display unit 32.
  • FIG. 2 is a block diagram of the image inspection processing apparatus 1 for carrying out the present invention.
  • the image processing device 12 included in the control device 4 includes an image data storage unit 18, a defect search unit 20, a halation defect image storage unit 22, a normal defect information storage unit 30, a halation region luminance correction processing unit 24, a feature amount calculation unit 26, and a defect evaluation unit 28. Details of each unit are described later.
  • the image inspection processing apparatus 1 is an apparatus that inspects the display surface of the panel to be inspected 16 (flat panel display, inspection object). In particular, it detects defects in the pixels constituting the display surface of the panel to be inspected 16.
  • during the inspection, the imaging element 10 included in the imaging device 2 converts an image of the panel to be inspected 16 into an image signal.
  • the control apparatus 4 detects a defect in the image data of the panel to be inspected 16 acquired by the image pickup apparatus 2.
  • a detected defect may exhibit so-called halation.
  • the image inspection processing apparatus 1 has a function of correcting the luminance value of the area where the halation has occurred.
  • the image inspection processing device 1 corrects the generated halation well and estimates the substantial luminance value of the defective pixel.
  • (Defect search method) Next, the defect search method that precedes the luminance correction in the halation region is described with reference to FIGS. 2 and 3.
  • the image inspection processing apparatus 1 corrects the luminance when the defect is halated. On the other hand, if the defect is not halated, brightness correction is not performed.
  • the lens 8 provided in the imaging device 2 forms an image of the panel to be inspected (flat panel display) 16 on the imaging surface of the imaging element 10.
  • the image sensor 10 spatially discretizes and samples the image of the panel 16 to be inspected optically formed by the lens 8 and converts the image into an image signal.
  • the control device 4 controls the imaging device 2, stores the captured image data of the panel 16 to be inspected acquired by the imaging device 2 in the image memory 14, and searches for defects by the image processing device 12.
  • the display device 6 displays a captured image acquired by the imaging device 2, a defect search result for the captured image, and the like.
  • Image data captured by the imaging device 2 is stored in the image data storage unit 18 in the image processing device 12.
  • the defect search unit 20 searches for defects in the stored image data.
  • the defect search by the defect search unit 20 is performed in the following steps (a) to (g): calculation of difference values (a); generation of a difference image (b); calculation of normalized contrast (c); binarization of the normalized contrast (d); calculation of the area and the normalized contrast of each binarized region (e); summation of the normalized contrast over the binarized region (f); and comparison of the normalized contrast volume with the non-defective-part normalized contrast volume (g).
  • the processing of (a) to (d) is performed on all the pixels, and only the pixels exceeding the binarization threshold are extracted and the processing of (e) to (g) is performed.
  • to detect a defect, the difference between a pixel without a defect (the background pixel 40) and a pixel with a defect (the defective pixel 38) must be discriminated; for this purpose, the image inspection processing apparatus 1 uses a so-called difference image. The method of generating the difference image is described with reference to FIG. 3.
  • the defect search unit 20 generates a difference image (process b) by taking the difference (process a) between the defective pixel 38 containing the defect 36 and background pixels 40 that lie around the defective pixel 38 and to which CCD pixels are assigned in the same relative positional relationship with respect to one picture element as the defective pixel 38.
  • for the background pixel 40 compared with the defective pixel 38, the average of several pixels near the defective pixel 38 is used whenever possible. This makes the difference between the defective pixel 38 and the background pixel 40 clear.
  • because luminance is not uniform across the flat panel display (shading), the defect search unit 20 normalizes the difference value by the luminance of the background pixel 40 (process c). This normalized difference value is called the normalized contrast and is used as one index representing the features of a defect.
  • the defect search unit 20 sets a threshold value for the normalized contrast and performs binarization processing on all the pixels (processing d). A group in which pixels exceeding the binarization threshold are gathered adjacent to each other in the binarization process is defined as a binarization region.
  • the defect search unit 20 calculates the area of the binarized region and the normalized contrast of each pixel constituting it (process e).
  • the defect search unit 20 calculates a normalized contrast volume by adding all the normalized contrasts of the pixels in the group of binarized regions (processing f). This value is a value including the size and brightness of the defect.
  • the maximum normalized contrast volume in an area where no defect exists is defined as a non-defective part normalized contrast volume.
  • the non-defective-part normalized contrast volume is theoretically 0, but in practice it takes a finite value because of noise and the like; it therefore represents the normalized contrast volume when no defect is present.
  • when the normalized contrast volume of a binarized region exceeds the non-defective-part normalized contrast volume, the defect search unit 20 detects that binarized region as a defect (process g).
  • the determination index in defect detection is not limited to the normalized contrast volume, and brightness or a difference value may be used.
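As a concrete illustration of steps (a) to (g), the following is a rough sketch under simplifying assumptions: the background is taken as the mean of a small local window around each pixel, the normalized contrast is taken as the difference divided by the background luminance, all binarized pixels are treated as one region (connected-component labeling is omitted), and the threshold values are arbitrary illustrative numbers rather than values from the description.

```python
import numpy as np

def defect_search(image, bin_threshold=0.05, good_part_volume=0.1, win=5):
    """Sketch of steps (a)-(g): difference, normalized contrast, binarization,
    normalized contrast volume, and comparison with the non-defective-part value."""
    img = image.astype(float)
    h, w = img.shape
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    # (a)/(b) difference image against a local background (mean of surrounding pixels)
    background = np.zeros_like(img)
    for dy in range(win):
        for dx in range(win):
            background += padded[dy:dy + h, dx:dx + w]
    background = (background - img) / (win * win - 1)
    diff = img - background
    # (c) normalized contrast: the difference normalized by the background luminance
    contrast = diff / np.maximum(background, 1.0)
    # (d) binarization of the normalized contrast
    binary = contrast > bin_threshold
    # (e)/(f) area and summed contrast (normalized contrast volume) of the binarized pixels
    area = int(binary.sum())
    volume = float(contrast[binary].sum())
    # (g) comparison with the non-defective-part normalized contrast volume
    return {"area": area, "contrast_volume": volume, "is_defect": volume > good_part_volume}
```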
  • Information on defects detected by these processes is stored in the halation defect image storage unit 22 or the normal defect information storage unit 30.
  • the defective pixel 38 is examined, and if the luminance reaches 255, it is regarded as halation, and the halation defect image storage unit 22 stores the image. On the other hand, the defective pixel 38 having a luminance of less than 255 is considered not to be halated. Since non-halation pixels are not corrected, the normal defect information storage unit 30 stores only defect information. This is the pre-stage for correcting the luminance of the halation area.
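A minimal sketch of this halation/non-halation routing is shown below; the two lists are stand-ins for the halation defect image storage unit 22 and the normal defect information storage unit 30, and the function name is an illustrative assumption.

```python
SATURATION = 255

halation_defect_images = []   # stand-in for the halation defect image storage unit 22
normal_defect_info = []       # stand-in for the normal defect information storage unit 30

def route_defect(defect_patch, defect_info):
    """Store the image patch if the defect is halated; otherwise store only the defect info."""
    if defect_patch.max() >= SATURATION:   # luminance reached 255, so the defect is halated
        halation_defect_images.append(defect_patch)
    else:                                  # not halated, so no luminance correction is needed
        normal_defect_info.append(defect_info)
```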
  • the halation area luminance correction processing unit 24 in FIG. 2 corrects the luminance of the defective pixel 38 to a substantial luminance.
  • the feature amount calculation unit 26 calculates a feature amount according to the actual luminance.
  • the defect evaluation unit 28 determines an accurate defect rank. The processes in the halation area luminance correction processing unit 24, the feature amount calculation unit 26, and the defect evaluation unit 28 will be described below.
  • the image processing apparatus 12 determines the original reasonable defect rank by correcting the luminance of the pixel in the halation region where the defect exists to obtain the actual luminance.
  • a method for correcting the luminance of the pixel in the halation area will be described with reference to FIGS.
  • FIG. 4 is a flowchart of a pixel luminance correction method in the halation region.
  • the image processing apparatus 12 obtains the area (Sh) of the halation region in the captured image, the average luminance (In) of the pixels surrounding the halation region, the area (Sn) of the region including the peripheral pixels and all pixels inward of them, and the estimated peak position of the luminance in the halation region (the center of gravity of the halation region), and creates a luminance distribution model based on the obtained values.
  • the image processing apparatus 12 creates a luminance distribution model using three pixels: a halation boundary pixel, a peripheral pixel that is one pixel outside the halation boundary pixel, and a peripheral pixel that is two pixels outside the halation boundary pixel.
  • the peripheral pixels are pixels outside the halation area by n pixels.
  • FIG. 5 is a schematic view showing a pixel having a defect in the halation region.
  • the halation defect image storage unit 22 stores an image including a pixel having a defect determined to be halated and a predetermined number of pixels in the vicinity thereof.
  • the hatched portion near the center shown in FIG. 5A is a pixel 42 having a defect in the halation region, and the gray portion is a pixel 44 having a defect in the non-halation region.
  • FIG. 5B is a diagram in which the shape of the defect in FIG. 5A is simplified as a symmetric circular model.
  • the halation area luminance correction processing unit 24 corrects the pixels 42 and 42 ′ having defects in the halation area to calculate the actual luminance.
  • the halation region luminance correction processing unit 24 calculates the area Sh of the defective pixels 42′ in the halation region in FIG. 5B, the average luminance In of the peripheral pixels lying n pixels outside the halation region, and the area Sn of the region including all pixels inward of those peripheral pixels.
  • the area Sh of the defective pixel 42 ′ in the halation region is the area of the hatched portion in FIG. 5A and corresponds to the number of pixels in the halation region.
  • FIG. 6 is a graph showing the luminance correction method in the halation region. It shows the luminance against the defect coordinate position when the defect, assumed to be circular, is viewed one-dimensionally along its diameter.
  • FIG. 6A is a graph showing the luminance distribution of the halation model.
  • FIG. 6B is an explanatory diagram of luminance quadratic curve interpolation.
  • since the area of the halation region is Sh and the area of the region including all pixels inward of the peripheral pixels lying n pixels outside the halation region is Sn, the respective radii can be expressed as √Sh and √Sn.
  • the center of this circle is regarded as the center of gravity of the halation region.
  • the luminance of the pixels 46 in the halation region is the saturated value of 255, which differs from the actual luminance indicated by the hatched portion in the figure. It therefore cannot be used for correcting the substantial luminance.
  • the halation area luminance correction processing unit 24 calculates the luminance of the halation area boundary pixel 52 shown in FIG. 6B as a first stage of correction.
  • the substantial luminance of the halation region boundary pixel 52 is calculated from the average luminance of the pixels one pixel outside the halation region boundary pixel 52 and the offset amount I0.
  • (Offset amount calculation method) Next, a method for calculating the offset amount I0 is described with reference to FIG. 7.
  • FIG. 7 is an explanatory diagram of PSF (Point Spread Function).
  • the image processing device 12 calculates the offset amount I0.
  • this offset amount I0 is applied as the difference in luminance between the halation boundary pixel 52 shown in FIG. 6B and the pixel 54 one pixel outside it, whose luminance is I1.
  • the estimated actual luminance of the halation boundary pixel 52 is I1 + I0.
  • the PSF differs depending on the optical system (mainly lens performance), so the value I1 + I0 used here does not cover all cases.
  • however, by measuring the PSF of each optical system used and obtaining the corresponding offset amount, the substantial luminance can be calculated for each optical system.
  • the halation area luminance correction processing unit 24 estimates the luminance distribution of the halation area.
  • the luminance peak position of the halation region is taken to be the center of the circle, and the luminance is assumed to be symmetric with respect to the peak position.
  • a model with the peak position as the origin in the X-axis direction can be considered.
  • when the horizontal axis represents position and the vertical axis represents luminance as shown in FIG. 6B, the coordinates of the halation boundary pixel 52 are (√Sh, I1+I0) and those of the peripheral pixels are (√Sn, In).
  • the image processing device 12 performs interpolation with a quadratic curve.
  • other curves such as a quadratic or higher N-order curve and a Gaussian curve may be used.
  • the luminance distribution is modeled by the quadratic curve of formula (1): I = a·x² + b·x + c, where x represents a pixel position, I represents the luminance at position x, a, b, and c are the coefficients of the quadratic curve, and c is the intercept on the Y axis, that is, the luminance at the peak position.
  • the coefficients in formula (1) are given by the following formulas (2), (3), and (4), using the halation boundary pixel (√Sh, I1+I0), the peripheral pixel one pixel outside it (√S1, I1), and the peripheral pixel two pixels outside it (√S2, I2).
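Formulas (2) to (4) are not reproduced in this text, but the same coefficients can be obtained by solving the quadratic of formula (1) through the three points listed above. The sketch below does this numerically; the x coordinates follow the description (square roots of the respective areas), while the function name and the use of a generic linear solver are illustrative assumptions.

```python
import numpy as np

def fit_quadratic_model(sh, s1, s2, i1, i2, i0):
    """Fit I = a*x**2 + b*x + c through the halation boundary pixel and the
    peripheral pixels one and two pixels outside it.

    (sqrt(sh), i1 + i0): halation boundary pixel, corrected by the PSF offset i0
    (sqrt(s1), i1)     : peripheral pixel one pixel outside the boundary
    (sqrt(s2), i2)     : peripheral pixel two pixels outside the boundary
    """
    xs = np.sqrt([sh, s1, s2])
    ys = np.array([i1 + i0, i1, i2])
    A = np.column_stack([xs ** 2, xs, np.ones(3)])  # rows of [x**2, x, 1]
    a, b, c = np.linalg.solve(A, ys)                # solve the 3x3 system for a, b, c
    return a, b, c                                  # c is the luminance at the peak (x = 0)
```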
  • the halation area luminance correction processing unit 24 applies the obtained luminance distribution model of the quadratic curve to the pixel 42 having a defect in the actual halation area.
  • the center of gravity of the halation region is determined as the luminance peak position of the model.
  • the distance from each pixel to the peak position is the distance from the center of gravity to each pixel.
  • the halation area luminance correction processing unit 24 calculates the corrected luminance of each pixel 42 having a defect in the halation area, and stores the luminance information as a matrix.
  • the feature amount calculation unit 26 calculates the normalized contrast volume for the pixel 42 having a defect in the halation region using the luminance information.
  • the defect evaluation unit 28 determines the defect rank based on the calculated normalized contrast volume.
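Putting the correction and evaluation steps together, a minimal sketch might look as follows. It reuses the centroid and quadratic coefficients from the sketches above, takes the normalized contrast as (luminance - background) / background, and uses purely illustrative rank thresholds, since the actual rank boundaries are not given in this text.

```python
import numpy as np

def correct_and_rank(image, halation_mask, centroid, coeffs, background,
                     rank_thresholds=(5.0, 20.0)):
    """Apply the quadratic luminance model to the halation pixels, then compute the
    normalized contrast volume and a defect rank (thresholds are illustrative only)."""
    a, b, c = coeffs
    cy, cx = centroid
    corrected = image.astype(float).copy()
    ys, xs = np.nonzero(halation_mask)
    r = np.hypot(ys - cy, xs - cx)              # distance from each pixel to the peak position
    corrected[ys, xs] = a * r ** 2 + b * r + c  # substantial luminance of each halation pixel
    contrast = (corrected - background) / np.maximum(background, 1.0)
    volume = float(contrast[halation_mask].sum())   # normalized contrast volume after correction
    rank = "C" if volume < rank_thresholds[0] else "B" if volume < rank_thresholds[1] else "A"
    return corrected, volume, rank
```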
  • FIG. 8 is a diagram showing the normalized contrast volume with respect to the defect rank.
  • a circle 60 in FIG. 8 indicates a halation region.
  • the normalized contrast volume before luminance correction in the halation region deviates from that under the non-halation condition, that is, from the actual feature amount; in other words, the differences between defect ranks become small.
  • after luminance correction, the normalized contrast volume almost coincides with that under the non-halation condition, so the substantial feature amount is recovered.
  • the normalized contrast volume is used so that the luminance, normalized contrast, and area of the defect are all reflected in the result.
  • the evaluation index is not limited to this; any of the luminance, the normalized contrast, and the area may also be used.
  • each block included in the image processing apparatus 12 may be configured by hardware logic. Alternatively, it may be realized by software using a CPU (Central Processing Unit) as follows.
  • the image processing apparatus 12 includes a CPU that executes the instructions of a program realizing each function, a ROM (Read Only Memory) that stores the program, a RAM (Random Access Memory) into which the program is expanded in executable form, and a storage device (recording medium) such as a memory that stores the program and various data.
  • a predetermined recording medium such as a hard disk drive, a solid-state drive, or a flash memory.
  • this recording medium only needs to record, in a computer-readable manner, the program code (executable program, intermediate code program, or source program) of the control program of the image processing apparatus 12, which is software for realizing the functions described above.
  • This recording medium is supplied to the image processing apparatus 12.
  • the image processing apparatus 12 or CPU or MPU as a computer may read and execute the program code recorded on the supplied recording medium.
  • the recording medium that supplies the program code to the image processing apparatus 12 is not limited to a specific structure or type. For example, it may be a tape system such as a magnetic tape or cassette tape; a magnetic disk such as a floppy (registered trademark) disk or hard disk; an optical disc such as a CD-ROM, MO, MD, DVD, or CD-R; a card system such as an IC card (including a memory card) or optical card; or a semiconductor memory system such as a mask ROM, EPROM, EEPROM, or flash ROM.
  • the object of the present invention can be achieved even if the image processing device 12 is configured to be connectable to a communication network.
  • the program code is supplied to the image processing apparatus 12 via a communication network.
  • the communication network is not limited to a specific type or form as long as it can supply program codes to the image processing apparatus 12.
  • the Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV communication network, a virtual private network, a telephone line network, a mobile communication network, a satellite communication network, or the like may be used.
  • the transmission medium constituting the communication network may be any medium that can transmit the program code, and is not limited to a specific configuration or type.
  • for example, the transmission medium may be wired, such as IEEE 1394, USB (Universal Serial Bus), power-line carrier, cable TV line, telephone line, or ADSL (Asymmetric Digital Subscriber Line) line, or wireless, such as infrared (IrDA or remote control), Bluetooth (registered trademark), IEEE 802.11 radio, HDR, a mobile phone network, a satellite line, or a terrestrial digital network.
  • the present invention can also be realized in the form of a computer data signal embedded in a carrier wave in which the program code is embodied by electronic transmission.
  • the present invention can also be expressed as follows, for example.
  • An image inspection processing apparatus comprising: means for extracting the luminance values and areas of the halation region and its surrounding pixels in the captured image; means for calculating the peak position of the substantial luminance from the shape of the halation region; and means for estimating the substantial luminance value of each pixel in the halation region from the positional relationship between the peripheral pixels and the substantial luminance peak position and from the luminance values of the peripheral pixels, whereby the shape of the luminance distribution in the halation region is corrected to the substantial luminance distribution.
  • the image inspection processing apparatus of the present invention can be widely applied to general image inspection processing apparatuses that perform defect inspection of a panel to be inspected by imaging the panel to be inspected by an imaging apparatus.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Testing Of Optical Devices Or Fibers (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)
  • Image Analysis (AREA)

Abstract

An image processing device (12) for inspecting an object to be inspected which is capable of gradation display according to a captured image of the display surface of the object to be inspected is provided with a defect search section (20) for calculating the area of a halation region in the captured image, the luminance of the peripheral pixels of the halation region, the area of a region including all pixels inside the peripheral pixels, and the estimated peak position of the luminance in the halation region, and a halation region luminance correction processing section (24) for estimating the real luminance value of each of the pixels in the halation region by applying a predetermined luminance distribution model based on each of the calculated values to the halation region.  This enables the accurate determination of a defect rank for a defect with high luminance in the halation region in an image inspection while ensuring the sensitivity for detecting a defect with low luminance.

Description

Image inspection processing apparatus, image inspection processing method, program, and recording medium
The present invention relates to an image inspection processing apparatus, an image inspection processing method, a program, and a recording medium for inspecting the display surface of an inspection object capable of gradation display, based on a captured image of that display surface.

In the inspection of flat panel displays, the defects to be detected vary in type, size, luminance, and so on. Visual judgment of such defects therefore differs from inspector to inspector, and it is difficult to judge them against a fixed standard.

For image inspection of flat panel displays, an area sensor having an image sensor such as a CCD is considered effective. This is because, in contrast to visual judgment, which suffers from individual differences, an inspection using an image captured by an image sensor can judge defects against a fixed standard. In addition, thanks to higher-density image sensors, pixel-shifting methods, super-resolution techniques, and the like, images captured of the flat panel displays under inspection have become high in resolution, so the resolving power needed for defect detection can be secured.

However, the defects encountered in flat panel display inspection include low-luminance defects and high-luminance defects whose luminance is far higher, so the range of light intensity (dynamic range) of the object is very wide. Consequently, an image sensor such as a CCD, whose dynamic range is narrow compared with visual inspection, cannot determine the defect rank accurately, because low-luminance defects go undetected and high-luminance defects saturate (halation).

Such a problem can arise in any image inspection processing apparatus in which high-luminance saturation (halation) may occur. To solve it, the methods of Patent Documents 1 to 3 described below have been disclosed.
In the method described in Patent Document 1, on the premise that non-detection is the situation to be avoided above all in inspection, conditions (such as exposure time) under which high-luminance defects halate are tolerated in order to avoid missing low-luminance defects.

In the method described in Patent Document 2, the amount of charge that could have accumulated during the shutter time is estimated from the limit charge amount of the image sensor, the time at which the charge amount reaches a threshold charge amount, and the shutter time, making it possible to restore luminance information lost through halation.

In the method described in Patent Document 3, as in Patent Document 1, conditions (such as exposure time) under which high-luminance defects halate are tolerated in order to avoid missing low-luminance defects. Specifically, noise contained in the image data is removed, convolution processing is applied to the noise-removed image data to emphasize point-defect portions, and point defects are detected on the basis of the image data in which the point-defect portions have been emphasized.
Japanese Unexamined Patent Application Publication No. 2001-228049 (published August 24, 2001); Japanese Unexamined Patent Application Publication No. 2008-17176 (published January 24, 2008); Japanese Unexamined Patent Application Publication No. 2001-228050 (published August 24, 2001)
In defect inspection of flat panel displays, it is important not only to detect defects but also to rank them by feature amounts such as luminance, size, difference from other picture elements, and ratio (contrast). This is because not every detected defect causes the inspection to fail: a defect passes if its size and luminance are small, and the repair process differs depending on the type of defect.

However, the methods described in Patent Documents 1 to 3 have the following problems.

First, the method of Patent Document 1 can avoid missing low-luminance defects, but because the original luminance information is lost through halation, it cannot judge high-luminance defects accurately.

In the luminance correction method for the halation region described in Patent Document 2, the charge amount and the shutter time must be measured for every image sensor element. Special hardware is therefore required, and implementing the method as an apparatus is expected to be difficult.

In the method of Patent Document 3, as with the method of Patent Document 1, missing low-luminance defects can be avoided, but because the original luminance information is lost through halation, high-luminance defects cannot be judged accurately.
The present invention has been made in view of the above problems, and its object is to provide an image inspection processing apparatus that corrects the luminance of the halation region so that the feature amount used for judgment can be calculated close to its original value, enabling accurate determination of the defect rank.
To solve the above problems, an image inspection processing apparatus according to the present invention inspects an inspection object capable of gradation display, based on a captured image of the display surface of the inspection object, and includes: calculation means for calculating the area of the halation region in the captured image, the luminance of the pixels surrounding the halation region, the area of the region including all pixels inward of those peripheral pixels, and the estimated peak position of the luminance within the halation region; and estimation means for estimating the substantial luminance value of each pixel in the halation region by applying to the halation region a predetermined luminance distribution model based on the calculated values.

According to the above configuration, the image inspection processing apparatus of the present invention inspects an inspection object capable of gradation display, based on a captured image of its display surface. The inspection object here is, for example, a flat panel display such as a liquid crystal display panel. The inspection object has the function of displaying characters and images in gradation on its display surface, and the image inspection processing apparatus of the present invention inspects in particular whether there are pixel defects during gradation display.

The image inspection processing apparatus calculates the following values from the captured image of the display surface: the area of the halation region in the captured image, the luminance of the pixels surrounding the halation region, the area of the region including all pixels inward of those peripheral pixels, and the estimated peak position of the luminance within the halation region. The halation region is a region in which the luminance values of the pixels constituting the captured image have reached the maximum value (for example, 255 in the case of 8-bit gradation).

The image inspection processing apparatus estimates the substantial luminance values of the pixels constituting this halation region. Specifically, a predetermined luminance distribution model (for example, a quadratic curve model) based on the values calculated as described above is first applied to the halation region, and the substantial luminance value of each pixel in the halation region is thereby estimated. The substantial luminance values obtained in this way are close to the original luminance values in the halation region. Consequently, a feature amount can be calculated from these substantial luminance values and an accurate defect rank can be determined from that feature amount.

As described above, the image inspection processing apparatus makes it possible to determine the defect rank accurately and consistently with conventional visual inspection.
An image inspection processing method according to the present invention is executed by an image inspection processing apparatus that inspects an inspection object capable of gradation display, based on a captured image of the display surface of the inspection object, and includes: a calculation step of calculating the area of the halation region in the captured image, the luminance of the pixels surrounding the halation region, the area of the region including all pixels inward of those peripheral pixels, and the estimated peak position of the luminance within the halation region; and an estimation step of estimating the substantial luminance value of each pixel in the halation region by applying to the halation region a predetermined luminance distribution model based on the calculated values.

This configuration provides the same operation and effects as the image inspection processing apparatus according to the present invention.
In the image inspection processing apparatus according to the present invention, it is preferable that the calculation means takes a region in which pixels having the maximum luminance value in the captured image are contiguously gathered as the halation region, and calculates the area of the halation region from the number of pixels constituting it.

According to this configuration, the area of the halation region is calculated from the number of pixels constituting it, so the area of the halation region can be calculated accurately.

In the image inspection processing apparatus according to the present invention, it is preferable that the calculation means calculates an average luminance value of the peripheral pixels, and that the estimation means applies to the halation region a luminance distribution model based on the average luminance value in addition to the above values.

According to this configuration, the luminance distribution model is calculated on the basis of the average luminance value of the peripheral pixels, so the substantial luminance values in the halation region can be calculated more accurately.

In the image inspection processing apparatus according to the present invention, it is preferable that the calculation means calculates the sum of the number of peripheral pixels and the number of pixels in the halation region as the area of the region including all pixels inward of the peripheral pixels.

According to this configuration, the area of the region including all pixels inward of the peripheral pixels is calculated from the number of peripheral pixels and the number of pixels in the halation region, so that area can be calculated accurately.

In the image inspection processing apparatus according to the present invention, it is preferable that the calculation means calculates the center of gravity of the halation region from its shape and takes the center of gravity as the estimated peak position.

According to this configuration, the center of gravity of the halation region is calculated from its shape and used as the estimated peak position, so the estimated peak position can be calculated more accurately.

In the image inspection processing apparatus according to the present invention, it is preferable that the luminance distribution model is a curved graph and that the estimation means estimates the luminance values of the halation region by curve interpolation.

According to this configuration, because the luminance distribution model is a curve, the estimated luminance values can be brought closer to the actual luminance distribution than with a straight-line model.

Note that the image inspection processing apparatus may be realized by a computer. In that case, a program that realizes the image inspection processing apparatus on a computer by operating the computer as each of the above means, and a computer-readable recording medium on which the program is recorded, also fall within the scope of the present invention.

As described above, the image inspection processing apparatus according to the present invention can estimate the substantial luminance value of each pixel in the halation region of the captured image, and therefore has the effect of determining the defect rank accurately.
FIG. 1 is an overall configuration diagram of an image inspection processing apparatus, showing an embodiment of the present invention. FIG. 2 is a block diagram of the image inspection processing apparatus. FIG. 3 is a diagram illustrating a difference image generation method. FIG. 4 is a flowchart of a luminance correction method for pixels in a halation region. FIG. 5 is a schematic diagram illustrating a pixel having a defect in a halation region. FIG. 6 is a graph illustrating a method of correcting luminance in a halation region. FIG. 7 is a diagram explaining point spread by the PSF. FIG. 8 is a diagram illustrating the effect of luminance correction in a halation region.
An embodiment of the present invention is described below with reference to the drawings. Note that the present invention is not limited to the image inspection processing apparatus of this embodiment and can be applied to any image inspection processing apparatus.
(Configuration of the image inspection processing apparatus)

FIG. 1 is a schematic diagram illustrating the configuration of an image inspection processing apparatus 1 according to the present embodiment. As shown in FIG. 1, the image inspection processing apparatus 1 includes an imaging device 2, a control device 4, and a display device 6.

The imaging device 2 includes a lens 8 (optical imaging means) and an imaging element (solid-state imaging element) 10 inside its housing. The imaging element 10 is composed of a plurality of pixels.

The control device 4 includes an image processing device 12 and an image memory 14.

The display device 6 includes a defect information display unit 32.

FIG. 2 is a block diagram of the image inspection processing apparatus 1 embodying the present invention. As shown in FIG. 2, the image processing device 12 included in the control device 4 includes an image data storage unit 18, a defect search unit 20, a halation defect image storage unit 22, a normal defect information storage unit 30, a halation region luminance correction processing unit 24, a feature amount calculation unit 26, and a defect evaluation unit 28. Details of each unit will be described later.
In general, a flat panel display 16 contains pixels that are noticeably brighter or darker than adjacent pixels, that is, defects.

The image inspection processing apparatus 1 is an apparatus that inspects the display surface of a panel to be inspected 16 (flat panel display, inspection object); in particular, it detects defects in the pixels constituting the display surface of the panel to be inspected 16. During the inspection, the imaging element 10 of the imaging device 2 converts an image of the panel to be inspected 16 into an image signal, and the control device 4 detects defects in the image data of the panel to be inspected 16 acquired by the imaging device 2. A detected defect may exhibit so-called halation, and the image inspection processing apparatus 1 has a function of correcting the luminance values of the region where halation has occurred.
(Principle of halation)

First, the factors that cause halation in a captured image are described below. Consider the case where an image is acquired with an imaging element 10 that represents 256 gradations. The imaging element 10 converts light into charge with photodiodes and stores the converted charge. As long as the charge amount has not reached its limit, the imaging element 10 expresses gradations from 0 to 255 according to the accumulated charge amount. When the charge amount is saturated, however, the gradation is expressed as 255 no matter how much more light strikes the imaging element 10 within the exposure time. This state is halation, and it destroys the substantial luminance.

When the substantial luminance is lost, the following problem arises. Because all the luminances in the halation region containing the defect are expressed as 255, the substantial feature amount calculated from the luminance becomes smaller than it would be without halation. As with conventional visual inspection, it then becomes difficult to determine the defect rank accurately, so the pass/fail judgment of defect detection is not performed stably, which may lead to more missed and over-detected defects.

For these reasons, halation is a state that should be avoided as much as possible. However, halation of high-luminance defects cannot be avoided, because a certain sensitivity is needed to detect low-luminance defects, which requires measures such as setting a long exposure time. As described in detail later, the image inspection processing apparatus 1 corrects the halation that does occur and estimates the substantial luminance values of the defective pixels.
(Defect search method)

Next, the defect search method that precedes the luminance correction in the halation region is described with reference to FIGS. 2 and 3. The image inspection processing apparatus 1 of the present invention corrects the luminance when a defect is halated; if the defect is not halated, no luminance correction is performed.
First, the outline of the defect search method is described with reference to FIG. 2. The lens 8 of the imaging device 2 forms an image of the panel to be inspected (flat panel display) 16 on the imaging surface of the imaging element 10. The imaging element 10 spatially discretizes and samples the image of the panel to be inspected 16 optically formed by the lens 8 and converts it into an image signal. In the present embodiment, for example, a CCD (Charge Coupled Device) image sensor or a CMOS (Complementary Metal-Oxide-Semiconductor) image sensor can be used as the imaging element 10.
The control device 4 controls the imaging device 2, stores the captured image data of the inspected panel 16 acquired by the imaging device 2 in the image memory 14, and has the image processing device 12 search for defects.
The display device 6 displays the captured image acquired by the imaging device 2, the defect search results for that image, and so on.
The image data captured by the imaging device 2 is stored by the image data storage unit 18 in the image processing device 12. The defect search unit 20 searches the stored image data for defects. The defect search by the defect search unit 20 consists of the following steps (a) to (g): (a) calculation of difference values, (b) generation of a difference image, (c) calculation of the normalized contrast, (d) binarization of the normalized contrast, (e) calculation of the area of each binarized region and the normalized contrast of its pixels, (f) summation of the normalized contrasts within a binarized region, and (g) comparison of the resulting normalized contrast volume with the non-defective-part normalized contrast volume.
In the defect search, steps (a) to (d) are applied to all pixels; only the pixels exceeding the binarization threshold are then extracted and subjected to steps (e) to (g).
To detect a defect, it is necessary to distinguish pixels without defects (hereinafter, background pixels 40) from pixels with defects (hereinafter, defective pixels 38). For this purpose, the image inspection processing apparatus 1 uses a so-called difference image. The method of generating the difference image is described with reference to FIG. 3.
As shown in FIG. 3, the defect search unit 20 takes the difference (process a) between a defective pixel 38 containing a defect 36 and background pixels 40 that lie around the defective pixel 38 and to which CCD pixels are assigned at the same relative position within a picture element as the defective pixel 38, thereby generating a difference image (process b). The value compared against the defective pixel 38 is, where possible, the average of several background pixels 40 in the vicinity of the defective pixel 38. This makes the difference between the defective pixel 38 and the background pixels 40 apparent.
In addition, because a flat panel display 16 generally has a luminance distribution (shading), the luminance value differs at each position on the panel. To account for this, the defect search unit 20 normalizes each difference value by the luminance of the background pixels 40 (process c). This normalized difference value is called the normalized contrast and is used as one index representing the features of a defect. The defect search unit 20 sets a threshold for the normalized contrast and binarizes all pixels (process d). A group of mutually adjacent pixels that exceed the binarization threshold is defined as a binarized region. The defect search unit 20 calculates the area of each binarized region and the normalized contrast of each pixel constituting that region (process e). It then sums the normalized contrasts of all pixels in the region to obtain the normalized contrast volume (process f). This value encapsulates both the size and the luminance of the defect.
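The following sketch illustrates one way steps (a) to (f) could be arranged in code. It is a hedged illustration only: the function names, the use of a median filter as the local background estimate, and the threshold value are assumptions, not the procedure as actually implemented by the defect search unit 20.

```python
import numpy as np
from scipy import ndimage

def normalized_contrast_volumes(image, background_window=15, threshold=0.02):
    img = image.astype(np.float64)
    # (a)/(b) difference image: subtract an estimate of the defect-free background
    # (a median filter stands in for "the average of several nearby background pixels").
    background = ndimage.median_filter(img, size=background_window)
    diff = img - background
    # (c) normalize the difference by the background luminance (shading compensation).
    contrast = diff / np.maximum(background, 1.0)
    # (d) binarize with a threshold on the normalized contrast.
    binary = contrast > threshold
    # Connected groups of above-threshold pixels form the binarized regions.
    labels, n_regions = ndimage.label(binary)
    results = []
    for k in range(1, n_regions + 1):
        mask = labels == k
        results.append({
            "area": int(mask.sum()),                         # (e) region area
            "contrast_volume": float(contrast[mask].sum()),  # (f) normalized contrast volume
        })
    return results

# (g) a region whose contrast volume exceeds the non-defective-part value
# would then be reported as a defect.
```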
On the other hand, the maximum normalized contrast volume found in a region containing no defect is defined as the non-defective-part normalized contrast volume. Theoretically this value is 0, but in practice a non-zero value results from noise and similar influences. This value is therefore used as the normalized contrast volume for the defect-free case.
In the present embodiment, when the normalized contrast volume of a binarized region exceeds the non-defective-part normalized contrast volume, the defect search unit 20 detects that region as a defect (process g). The criterion for defect detection is not limited to the normalized contrast volume; luminance or difference values may also be used. Information on the defects detected by these processes (coordinates, luminance, normalized contrast, normalized contrast volume, and so on) is stored in the halation defect image storage unit 22 or the normal defect information storage unit 30.
As described above, accurate defect rank determination requires correcting the luminance in a halation region to the substantial luminance. Each defective pixel 38 is therefore examined: a pixel whose luminance has reached 255 is regarded as halated, and the halation defect image storage unit 22 stores its image. A defective pixel 38 whose luminance is below 255 is regarded as not halated; since non-halated pixels are not corrected, the normal defect information storage unit 30 stores only their defect information. This completes the stage preceding the luminance correction of the halation region.
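As a small illustrative sketch (function and variable names are assumptions), the halation test described in this paragraph amounts to splitting a detected defect's pixels by the saturation value:

```python
import numpy as np

def split_halation(image, defect_mask, saturation=255):
    """Separate a detected defect into halated pixels (gradation == 255)
    and non-halated pixels; only the former are passed to the correction."""
    defect_mask = np.asarray(defect_mask, dtype=bool)
    halated = defect_mask & (image >= saturation)
    not_halated = defect_mask & (image < saturation)
    return halated, not_halated
```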
Next, for the defective pixels 38 judged to be halated, the halation region luminance correction processing unit 24 in FIG. 2 corrects their luminance to the substantial luminance. The feature amount calculation unit 26 then calculates a feature amount based on the substantial luminance. Finally, the defect evaluation unit 28 determines the correct defect rank. The processing performed by the halation region luminance correction processing unit 24, the feature amount calculation unit 26, and the defect evaluation unit 28 is described below.
(Outline of the luminance correction method)
 The image processing apparatus 12 according to the present invention determines the proper defect rank by correcting the luminance of the pixels in a halation region containing a defect and thereby obtaining the substantial luminance. The method of correcting pixel luminance in a halation region is described below with reference to FIGS. 4 to 7.
First, an outline of the correction method is given. FIG. 4 is a flowchart of the method for correcting pixel luminance in a halation region. As shown in FIG. 4, the image processing apparatus 12 obtains the area (Sh) of the halation region in the captured image, the average luminance (In) of the peripheral pixels of the halation region, the area (Sn) of the region containing all pixels inward of (and including) those peripheral pixels, and the estimated peak position of the luminance within the halation region (the centroid of the halation region), and creates a luminance distribution model based on these values. Applying this luminance distribution model to the defective pixels in the halation region yields their substantial luminance. In the present embodiment, the image processing apparatus 12 creates the luminance distribution model from three pixels: the halation boundary pixel, the peripheral pixel one pixel outside the boundary, and the peripheral pixel two pixels outside the boundary. Here, a peripheral pixel is a pixel n pixels outside the halation region.
FIG. 5 schematically shows defective pixels in a halation region. First, as shown in FIG. 5(a), the halation defect image storage unit 22 stores an image containing the pixels of a defect judged to be halated together with a predetermined number of surrounding pixels. In FIG. 5(a), the hatched portion near the center represents the defective pixels 42 in the halation region, and the gray portion represents the defective pixels 44 in the non-halated region. FIG. 5(b) is a simplified representation of the defect shape in FIG. 5(a) as a symmetric circular model. The luminance of the defective pixels 42 and 42' in the halation region shown in FIG. 5 is all expressed as 255; that is, the substantial luminance is lost. In this case, the halation region luminance correction processing unit 24 corrects the defective pixels 42 and 42' in the halation region and calculates their substantial luminance.
(Specific luminance correction method)
 Next, the luminance correction method is described in detail. The halation region luminance correction processing unit 24 calculates the area Sh of the defective pixels 42' in the halation region of FIG. 5(b), the average luminance In of the peripheral pixels n pixels outside the halation region, and the area Sn of the region containing all pixels inward of those peripheral pixels. The area Sh of the defective pixels 42' in the halation region corresponds to the hatched area in FIG. 5(a), that is, to the number of pixels in the halation region.
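A hedged sketch of how Sh, In, Sn, and the centroid could be obtained from a halation mask is given below. Interpreting the peripheral pixels "n pixels outside the halation region" as the ring produced by n binary dilations is an assumption of this example.

```python
import numpy as np
from scipy import ndimage

def halation_statistics(image, halation_mask, n=1):
    halation_mask = np.asarray(halation_mask, dtype=bool)
    Sh = int(halation_mask.sum())                      # area of the halation region
    grown = ndimage.binary_dilation(halation_mask, iterations=n)
    peripheral = grown & ~halation_mask                # ring of peripheral pixels
    In = float(image[peripheral].mean())               # average peripheral luminance
    Sn = int(grown.sum())                              # halation pixels + peripheral pixels
    centroid = ndimage.center_of_mass(halation_mask)   # (row, col), assumed luminance peak
    return Sh, In, Sn, centroid
```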
FIG. 6 is a graph illustrating the luminance correction method in the halation region. It plots the luminance against the defect coordinate position when the defect, assumed circular, is viewed one-dimensionally along its diameter. FIG. 6(a) shows the luminance distribution of the halation model, and FIG. 6(b) illustrates the quadratic-curve interpolation of luminance.
When the defect shape is treated as a symmetric circular model as shown in FIG. 5(b), a left-right symmetric luminance distribution whose peak lies at the center of the circular model, as shown in FIG. 6(a), can be assumed.
Here, since the area of the halation region is Sh and the area of the region containing all pixels inward of the peripheral pixels n pixels outside the halation region is Sn, the respective radii can be expressed as √Sh and √Sn. The center of this circle is taken as the centroid of the halation region.
Next, the pixels used for the luminance correction in the halation region are described.
As shown in FIG. 6(a), the luminance of the pixels 46 in the halation region is the saturated value of 255, which differs from the substantial luminance indicated by the hatched portion in FIG. 6(a); these pixels therefore cannot be used to recover the substantial luminance. Further, the luminance of the peripheral pixels 48a to 48c differs less and less from the background pixels 40 the farther they lie outside the halation boundary, that is, from 48a toward 48c, so pixels too far outside cannot be used for the correction either. Accordingly, as the first step of the correction, the halation region luminance correction processing unit 24 calculates the luminance of the halation region boundary pixel 52 shown in FIG. 6(b). In the present embodiment, the substantial luminance of the halation region boundary pixel 52 is calculated from the average luminance In of the pixels one pixel outside the boundary pixel 52 and an offset amount I0.
(Offset amount calculation method)
 Next, the method of calculating the offset amount I0 is described with reference to FIG. 7.
FIG. 7 illustrates the PSF (Point Spread Function). When a pattern brighter than the background pixels 40 is imaged, the captured image does not show a perfectly sharp edge as indicated by line 56 in FIG. 7; it generally exhibits spreading, as indicated by curve 58. This is because light passing through the lens undergoes point spreading, which must always be taken into account when a lens optical system is used. This spreading depends on the PSF, the so-called point spread function, and if the PSF is known, the way the light spreads can be predicted. Therefore, by measuring the PSF of the optical system of the apparatus in advance and using it to estimate the spreading of the pattern, the gradient of the light spread near the saturation luminance of 255 can be calculated.
From this gradient, the image processing apparatus 12 calculates the offset amount I0. This offset amount I0 is applied as the difference between the luminance of the halation boundary pixel 52 shown in FIG. 6(b) and the luminance I1 of the pixel 54 one pixel outside it. The estimated substantial luminance of the halation boundary pixel 52 is therefore I1 + I0.
Note that, because the PSF varies with the optical system (mainly the lens performance), the value I1 + I0 used here does not cover every case. However, by measuring the PSF of each optical system that is used and determining the corresponding offset amount, the substantial luminance can be calculated for each optical system.
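One possible way to derive the offset amount I0 from a measured edge profile of the optical system is sketched below. Using the mean one-pixel luminance drop just below saturation as I0 is an assumption of this sketch; the description above only states that the gradient near 255 is obtained from the measured PSF.

```python
import numpy as np

def offset_from_measured_profile(edge_profile, saturation=255, near_fraction=0.9):
    """edge_profile: 1-D luminance profile across a bright edge, measured with
    the same optics, i.e. the pattern spread predicted by the PSF."""
    profile = np.asarray(edge_profile, dtype=np.float64)
    drops = np.abs(np.diff(profile))                 # one-pixel luminance changes
    near_saturation = profile[:-1] > near_fraction * saturation
    if near_saturation.any():
        return float(drops[near_saturation].mean())  # typical drop just below 255
    return float(drops.max())

# The substantial luminance of the halation boundary pixel is then I1 + I0,
# with I1 the luminance of the pixel one pixel outside the boundary.
```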
(Luminance distribution model estimation method)
 Using the above information, the halation region luminance correction processing unit 24 estimates the luminance distribution of the halation region. In the present embodiment, a circular halation region as shown in FIG. 5 is assumed, so the luminance peak of the halation region lies at the center of the circle and the luminance is symmetric about the peak position. The model can therefore be considered with the peak position as the origin of the X axis, as in FIG. 6(b). Taking the horizontal axis as position and the vertical axis as luminance as in FIG. 6(b), the coordinates of the halation boundary pixel 52 are (√Sh, I1 + I0), and the coordinates of the peripheral pixel n pixels outside are (√Sn, In).
Now consider the actual luminance distribution of the pixels. The luminance distribution of non-halated pixels is considered to be close in shape to a peaked triangular cone. This is because, in flat panel display inspection, sufficient resolution often cannot be secured for reasons of cost and takt time, and the bright spot is small relative to the resolution, so its true shape is not reproduced. The interpolation must therefore use a method that is close to the actual shape; since linear interpolation may disagree with the true shape, curve interpolation is considered appropriate. In the present embodiment, the image processing apparatus 12 performs the interpolation with a quadratic curve. The curve is not limited to a quadratic; other curves, such as an N-th-order curve of degree two or higher or a Gaussian curve, may also be used. When a quadratic curve is used, it is expressed by the following equation (1).
    I = ax² + bx + c    ... (1)
In equation (1), x denotes the pixel position, I denotes the luminance at position x, and a, b, and c are the coefficients of the quadratic curve; c is the intercept on the Y axis, that is, the luminance at the peak position. The coefficients of equation (1) can be expressed by the following equations (2), (3), and (4) using the halation boundary pixel (√Sh, I1 + I0), the peripheral pixel one pixel outside (√S1, I1), and the peripheral pixel two pixels outside (√S2, I2).
    [Equations (2) to (4): closed-form expressions for the coefficients a, b, and c determined from the three points (√Sh, I1 + I0), (√S1, I1), and (√S2, I2); the original equation images are not reproduced here.]
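Because the published forms of equations (2) to (4) are not reproduced above, the coefficients can equivalently be obtained by solving the quadratic through the three named points, as in the hedged sketch below (function and variable names are illustrative).

```python
import numpy as np

def fit_quadratic(Sh, S1, S2, I0, I1, I2):
    """Fit I = a*x**2 + b*x + c through the halation boundary pixel (sqrt(Sh), I1 + I0),
    the pixel one pixel outside (sqrt(S1), I1), and two pixels outside (sqrt(S2), I2)."""
    xs = np.sqrt(np.array([Sh, S1, S2], dtype=np.float64))
    ys = np.array([I1 + I0, I1, I2], dtype=np.float64)
    A = np.column_stack([xs**2, xs, np.ones(3)])
    a, b, c = np.linalg.solve(A, ys)   # c is the estimated luminance at the peak position
    return a, b, c
```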
(Luminance correction method using the luminance distribution model)
 Next, the halation region luminance correction processing unit 24 applies the obtained quadratic luminance distribution model to the actual defective pixels 42 in the halation region. Because a reference position is needed, the centroid of the halation region is set as the luminance peak position of the model, and the distance from each pixel to the peak position is taken as the distance from the centroid to that pixel. The halation region luminance correction processing unit 24 thereby calculates the corrected luminance of each defective pixel 42 in the halation region and stores this luminance information as a matrix. The feature amount calculation unit 26 then uses this luminance information to calculate the normalized contrast volume of the defective pixels 42 in the halation region. Finally, the defect evaluation unit 28 determines the defect rank based on the calculated normalized contrast volume.
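The application of the fitted model to the halated pixels can be sketched as follows; taking the distance from the centroid as the model coordinate follows the description above, while the coordinate conventions and names are assumptions of this example.

```python
import numpy as np

def correct_halation(image, halation_mask, centroid, a, b, c):
    """Replace each halated pixel with the luminance predicted by the quadratic
    model at its distance from the centroid (the assumed peak position)."""
    corrected = image.astype(np.float64).copy()
    rows, cols = np.nonzero(halation_mask)
    cy, cx = centroid
    r = np.hypot(rows - cy, cols - cx)            # distance to the luminance peak
    corrected[rows, cols] = a * r**2 + b * r + c  # estimated substantial luminance
    return corrected  # used afterwards to recompute the normalized contrast volume
```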
(Effect of the luminance correction)
 FIG. 8 shows the normalized contrast volume plotted against the defect rank. The circles 60 in FIG. 8 indicate halation regions. As shown in FIG. 8, the normalized contrast volume of a halation region before luminance correction deviates from the non-halated condition, that is, from the substantial feature amount; in other words, the differences between defect ranks are small. After luminance correction, however, the normalized contrast volume almost matches the non-halated condition, showing that the correction recovers the substantial feature amount.
In the present embodiment, the normalized contrast volume is used so that the luminance, normalized contrast, and area of the defect are all reflected in the result; however, the evaluation index is not limited to this, and any one of the luminance, normalized contrast, and area may be used instead.
(Program and recording medium)
 Finally, each block included in the image processing apparatus 12 may be implemented by hardware logic, or may be implemented in software using a CPU (Central Processing Unit) as follows.
That is, the image processing apparatus 12 includes a CPU that executes the instructions of a program realizing each function, a ROM (Read Only Memory) that stores the program, a RAM (Random Access Memory) into which the program is expanded in executable form, and a storage device (recording medium) such as a memory that stores the program and various data. With this configuration, the object of the present invention can also be achieved by a predetermined recording medium.
This recording medium need only record, in a computer-readable manner, the program code (an executable program, an intermediate code program, or a source program) of the image processing apparatus 12, which is the software realizing the functions described above. The recording medium is supplied to the image processing apparatus 12, and the image processing apparatus 12 (or its CPU or MPU), acting as a computer, reads out and executes the program code recorded on the supplied recording medium.
The recording medium that supplies the program code to the image processing apparatus 12 is not limited to any particular structure or type. For example, it may be a tape medium such as a magnetic tape or a cassette tape; a disk medium including magnetic disks such as floppy (registered trademark) disks and hard disks and optical discs such as CD-ROM, MO, MD, DVD, and CD-R; a card medium such as an IC card (including a memory card) or an optical card; or a semiconductor memory medium such as a mask ROM, EPROM, EEPROM, or flash ROM.
The object of the present invention can also be achieved by configuring the image processing apparatus 12 to be connectable to a communication network; in this case the program code is supplied to the image processing apparatus 12 via the communication network. The communication network may be of any type or form as long as it can supply the program code to the image processing apparatus 12; for example, the Internet, an intranet, an extranet, a LAN, ISDN, a VAN, a CATV communication network, a virtual private network, a telephone network, a mobile communication network, or a satellite communication network may be used.
The transmission medium constituting the communication network may likewise be any medium capable of transmitting the program code and is not limited to a particular configuration or type. It may be wired, such as IEEE 1394, USB (Universal Serial Bus), power-line carrier, a cable TV line, a telephone line, or an ADSL (Asymmetric Digital Subscriber Line) line, or wireless, such as infrared (IrDA or a remote controller), Bluetooth (registered trademark), 802.11 wireless, HDR, a mobile telephone network, a satellite link, or a terrestrial digital network. The present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
[Additional Notes]
 The present invention is not limited to the embodiments described above, and various modifications are possible within the scope of the claims. That is, embodiments obtained by combining technical means appropriately modified within the scope of the claims are also included in the technical scope of the present invention.
The present invention can also be expressed, for example, as follows.
1. An image inspection processing apparatus comprising: means for extracting the luminance values and areas of a halation region and its peripheral pixels in a captured image; means for calculating the peak position of the substantial luminance from the shape of the halation region; and means for estimating the substantial luminance values of the pixels in the halation region from the positional relationship between the peripheral pixels and the substantial luminance peak position and from the peripheral pixel luminance values, wherein the shape of the luminance distribution in the halation region is corrected to the substantial luminance distribution.
2. The image inspection processing apparatus according to 1, comprising: means for determining whether the pixels constituting the captured image are in a saturated state (halation) and calculating the area of the halation region from the number of connected saturated pixels; means for calculating the individual luminance values of the peripheral pixels n pixels outside the halation region and their average luminance value; and means for calculating the area up to the peripheral pixels from the sum of the number of peripheral pixels and the number of pixels in the halation region.
3. The image inspection processing apparatus according to 1, wherein the centroid of the halation region is calculated from its shape, and this centroid is taken as the peak position at which the corrected peak luminance value of the region is obtained.
4. The image inspection processing apparatus according to 1, wherein the substantial luminance values lost due to halation are estimated by curve interpolation using the average luminance value and the area of the peripheral pixels described in 2 and the area of the halation region.
The image inspection processing apparatus of the present invention is widely applicable to image inspection processing apparatuses in general that perform defect inspection of a panel to be inspected by imaging the panel with an imaging apparatus.
Description of Reference Numerals
 1   Image inspection processing apparatus
 2   Imaging device
 4   Control device
 6   Display device
 8   Lens
 10  Image sensor
 12  Image processing device
 14  Image memory
 16  Panel to be inspected
 18  Image data storage unit
 20  Defect search unit (calculating means)
 22  Halation defect image storage unit
 24  Halation region luminance correction processing unit (estimating means)
 26  Feature amount calculation unit
 28  Defect evaluation unit
 30  Normal defect information storage unit
 32  Defect information display unit
 36  Defect
 38  Defective pixel
 40  Background pixel
 42  Defective pixel in the halation region
 44  Defective pixel in the non-halated region
 52  Halation boundary pixel

Claims (9)

  1.  An image inspection processing apparatus that inspects an inspection object capable of gradation display based on a captured image of a display surface of the inspection object, comprising:
     calculating means for calculating the area of a halation region in the captured image, the luminance of peripheral pixels of the halation region, the area of a region including all pixels inward of the peripheral pixels, and an estimated peak position of the luminance within the halation region; and
     estimating means for estimating a substantial luminance value of each pixel in the halation region by applying a predetermined luminance distribution model based on the calculated values to the halation region.
  2.  The image inspection processing apparatus according to claim 1, wherein the calculating means takes, as the halation region, a region in which pixels having the maximum luminance value are contiguously gathered in the captured image, and calculates the area of the halation region from the number of pixels constituting the halation region.
  3.  The image inspection processing apparatus according to claim 1 or 2, wherein the calculating means calculates an average luminance value of the peripheral pixels, and the estimating means applies to the halation region the luminance distribution model based on the average luminance value in addition to the calculated values.
  4.  The image inspection processing apparatus according to any one of claims 1 to 3, wherein the calculating means calculates, as the area of the region including all pixels inward of the peripheral pixels, the sum of the number of peripheral pixels and the number of pixels in the halation region.
  5.  The image inspection processing apparatus according to any one of claims 1 to 4, wherein the calculating means calculates the centroid of the halation region from the shape of the halation region and takes the centroid as the estimated peak position.
  6.  The image inspection processing apparatus according to any one of claims 1 to 5, wherein the luminance distribution model is a curve graph, and the estimating means estimates the luminance values of the halation region by curve interpolation.
  7.  An image inspection processing method executed by an image inspection processing apparatus that inspects an inspection object capable of gradation display based on a captured image of a display surface of the inspection object, the method comprising:
     a calculating step of calculating the area of a halation region in the captured image, the luminance of peripheral pixels of the halation region, the area of a region including all pixels inward of the peripheral pixels, and an estimated peak position of the luminance within the halation region; and
     an estimating step of estimating a substantial luminance value of each pixel in the halation region by applying a predetermined luminance distribution model based on the calculated values to the halation region.
  8.  A program for operating the image inspection processing apparatus according to any one of claims 1 to 6, the program causing a computer to function as each of the above means.
  9.  A computer-readable recording medium on which the program according to claim 8 is recorded.
PCT/JP2009/062148 2008-07-02 2009-07-02 Image inspection processing device, image inspection processing method, program, and recording medium WO2010001973A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2008-173958 2008-07-02
JP2008173958A JP4416825B2 (en) 2008-07-02 2008-07-02 Image inspection processing apparatus, image inspection processing method, program, and recording medium

Publications (1)

Publication Number Publication Date
WO2010001973A1 true WO2010001973A1 (en) 2010-01-07

Family

ID=41466063

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2009/062148 WO2010001973A1 (en) 2008-07-02 2009-07-02 Image inspection processing device, image inspection processing method, program, and recording medium

Country Status (2)

Country Link
JP (1) JP4416825B2 (en)
WO (1) WO2010001973A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014091243A1 (en) * 2012-12-14 2014-06-19 Micromass Uk Limited Correction of time of flight ms adc data on push by push basis
CN110024020A (en) * 2016-11-23 2019-07-16 三星电子株式会社 Display device, calibrating installation and its calibration method
CN111951322A (en) * 2020-07-16 2020-11-17 昆山丘钛光电科技有限公司 Camera module quality detection method and device and computer storage medium
CN112562507A (en) * 2020-12-03 2021-03-26 Tcl华星光电技术有限公司 Display panel and detection method thereof

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI626620B (en) * 2016-12-20 2018-06-11 廣東歐珀移動通訊有限公司 Image processing method and device, electronic device, and computer readable storage medium
JP7173763B2 (en) * 2018-06-20 2022-11-16 株式会社日本マイクロニクス Image generation device and image generation method
CN116453438B (en) * 2023-06-19 2023-08-18 深圳市瑞桔电子有限公司 Display screen parameter detection method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003098111A (en) * 2000-09-21 2003-04-03 Hitachi Ltd Method for inspecting defect and apparatus therefor
JP2007024737A (en) * 2005-07-20 2007-02-01 Hitachi High-Technologies Corp Semiconductor defect inspection device and method thereof

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003098111A (en) * 2000-09-21 2003-04-03 Hitachi Ltd Method for inspecting defect and apparatus therefor
JP2007024737A (en) * 2005-07-20 2007-02-01 Hitachi High-Technologies Corp Semiconductor defect inspection device and method thereof

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014091243A1 (en) * 2012-12-14 2014-06-19 Micromass Uk Limited Correction of time of flight ms adc data on push by push basis
US9383338B2 (en) 2012-12-14 2016-07-05 Micromass Uk Limited Correction of time of flight MS ADC data on push by push basis
CN110024020A (en) * 2016-11-23 2019-07-16 三星电子株式会社 Display device, calibrating installation and its calibration method
CN111951322A (en) * 2020-07-16 2020-11-17 昆山丘钛光电科技有限公司 Camera module quality detection method and device and computer storage medium
CN112562507A (en) * 2020-12-03 2021-03-26 Tcl华星光电技术有限公司 Display panel and detection method thereof
CN112562507B (en) * 2020-12-03 2022-07-12 Tcl华星光电技术有限公司 Display panel and detection method thereof

Also Published As

Publication number Publication date
JP2010014503A (en) 2010-01-21
JP4416825B2 (en) 2010-02-17

Similar Documents

Publication Publication Date Title
WO2010001973A1 (en) Image inspection processing device, image inspection processing method, program, and recording medium
JP5510928B2 (en) Method, system, and program for generating a luminescent image of an integrated circuit
US20060067569A1 (en) Image inspection device, image inspection method, and image inspection program
JP4466015B2 (en) Image processing apparatus and image processing program
KR20090101356A (en) Defect detecting device, and defect detecting method
JP2010066102A (en) Evaluation device, calibration method, calibration program and recording medium
JP5242248B2 (en) Defect detection apparatus, defect detection method, defect detection program, and recording medium
KR100837459B1 (en) Image pickup method of display panel and image pickup apparatus of display panel
JP4610656B2 (en) Inspection device, inspection method, program, and recording medium
JPWO2014013792A1 (en) Noise evaluation method, image processing apparatus, imaging apparatus, and program
JPWO2006073155A1 (en) Apparatus for pattern defect inspection, method thereof, and computer-readable recording medium recording the program thereof
JP4438363B2 (en) Image processing apparatus and image processing program
CN113012121B (en) Method and device for processing bare chip scanning result, electronic equipment and storage medium
JP2011029710A (en) Image processor, image processing program, and imaging apparatus
JPWO2018198916A1 (en) Image processing apparatus, image processing method, and storage medium
JP4466016B2 (en) Image processing apparatus and image processing program
JP2021148720A (en) Inspection system, learning device, learning program, learning method, inspection device, inspection program, and inspection method
JP2004134861A (en) Resolution evaluation method, resolution evaluation program, and optical apparatus
CN111415611A (en) Brightness compensation method, brightness compensation device and display device
JP2004354064A (en) Inspection method and apparatus of defect in magnetic head by optical system measurement image
KR101076478B1 (en) Method of detecting the picture defect of flat display panel using stretching technique and recording medium
WO2021192627A1 (en) Inspection system, learning device, learning program, learning method, inspection device, inspection program, and inspection method
KR20120138376A (en) Automatic high-speed ball detection using a multi-exposure image
JP4121628B2 (en) Screen inspection method and apparatus
TWI592544B (en) Pavement marking degradation judgment method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09773551

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09773551

Country of ref document: EP

Kind code of ref document: A1