CN117495722B - Image processing method for nanoimprint lithography - Google Patents

Image processing method for nanoimprint lithography

Info

Publication number
CN117495722B
CN117495722B (application CN202311785282.6A)
Authority
CN
China
Prior art keywords
pixel
pixel point
image
fuzzy
processed
Prior art date
Legal status
Active
Application number
CN202311785282.6A
Other languages
Chinese (zh)
Other versions
CN117495722A (en)
Inventor
冀然
房臣
Current Assignee
Germanlitho Co ltd
Original Assignee
Germanlitho Co ltd
Priority date
Filing date
Publication date
Application filed by Germanlitho Co ltd
Priority to CN202311785282.6A
Publication of CN117495722A
Application granted
Publication of CN117495722B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G: PHYSICS
    • G03: PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03F: PHOTOMECHANICAL PRODUCTION OF TEXTURED OR PATTERNED SURFACES, e.g. FOR PRINTING, FOR PROCESSING OF SEMICONDUCTOR DEVICES; MATERIALS THEREFOR; ORIGINALS THEREFOR; APPARATUS SPECIALLY ADAPTED THEREFOR
    • G03F 7/00: Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
    • G03F 7/0002: Lithographic processes using patterning methods other than those involving the exposure to radiation, e.g. by stamping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method for nanoimprint lithography, belonging to the technical field of image processing, which comprises the following steps: S1, acquiring a working image of the substrate to be etched during the imprint lithography process, preprocessing the working image, and generating an image to be processed of the substrate; S2, determining the blurred pixel set of the image to be processed; S3, performing brightness processing on the blurred pixel set to complete the image processing. By screening the pixels of the working image of the substrate, the invention identifies the pixels with abnormal pixel values, i.e. the blurred pixel set, and adjusts the brightness of the elements of that set. This improves the sharpness of the image, allows a user observing the substrate image to spot anomalies in time, and thereby improves the nanoimprint lithography process.

Description

Image processing method for nanoimprint lithography
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image processing method for nanoimprint lithography.
Background
Imprint lithography is an important process in microelectronic fabrication and is widely used in integrated circuit manufacturing, optical device fabrication, and related fields. As a novel pattern transfer technique, nanoimprinting can produce patterns with smaller linewidths than traditional photolithography, has an extremely low manufacturing cost compared with high-precision electron-beam exposure, and is suitable for large-scale industrial production. With the development of nanoimprint technology, the production quality of nanoimprinting needs to be controlled and the imprint lithography substrate needs to be monitored. This monitoring is based on acquired substrate images, so the quality of the substrate image must be improved.
Disclosure of Invention
In order to solve the above problems, the invention provides an image processing method for nanoimprint lithography.
The technical scheme of the invention is as follows: an image processing method for nanoimprint lithography comprises the following steps:
S1, acquiring a working image of the substrate to be etched during the imprint lithography process, preprocessing the working image, and generating an image to be processed of the substrate;
S2, determining the blurred pixel set of the image to be processed;
S3, performing brightness processing on the blurred pixel set to complete the image processing.
Further, in S1, the specific method for preprocessing the working image of the substrate to be etched is: sequentially denoising and cropping the working image.
Further, S2 comprises the following sub-steps:
S21, obtaining the pixel value of every pixel in the image to be processed;
S22, calculating the pixel gradient change row value of each row of the image to be processed from the pixel values, and generating the pixel gradient change row sequence;
S23, clustering the pixel gradient change row sequence to obtain the contour (silhouette) coefficient of each class;
S24, taking the pixel with the maximum pixel value in the row with the maximum pixel gradient change row value as the pixel of interest;
S25, calculating the fuzzy similarity between each other pixel in the image to be processed and the pixel of interest, using the contour coefficient of each class;
S26, taking all pixels whose fuzzy similarity is smaller than the fuzzy similarity threshold as the blurred pixel set.
The beneficial effects of the above further scheme are: in the invention, a logarithmic operation is applied to the pixel values of adjacent pixels in each row of the image to be processed, and the pixel gradient change row value of each row is determined by combining this with parameters such as the difference between the last and the first pixel value of the row, yielding a pixel gradient change row sequence with one value per row. Because the image to be processed contains many rows, a clustering algorithm (for example, K-means clustering) is applied to this sequence; this reduces the complexity of the data set and facilitates the fuzzy similarity calculation with respect to the pixel of interest, since pixels whose similarity to the pixel of interest differs greatly may exhibit brightness blur. The fuzzy similarity threshold is generally 0.5 and can be set manually according to the actual situation.
Further, in S22, the pixel gradient change row value H_i of the i-th row of the image to be processed is calculated by a formula that appears only as an image in the original publication and is not reproduced here. In it, G_(i,j) denotes the pixel value of the pixel in row i and column j of the image to be processed, I denotes the number of rows of pixels in the image to be processed, J denotes the number of columns of pixels in the image to be processed, G_(i,j+1) denotes the pixel value of the pixel in row i and column j+1, G_(i,J) denotes the pixel value of the pixel in row i and column J (the last column), G_(i,1) denotes the pixel value of the pixel in row i and column 1, and ln(·) denotes the natural logarithm.
Further, in S25, the fuzzy similarity V_(i,j) between the pixel in row i and column j of the image to be processed and the pixel of interest is calculated by a formula that appears only as an image in the original publication and is not reproduced here. In it, G_(i,j) denotes the pixel value of the pixel in row i and column j of the image to be processed, G_0 denotes the pixel value of the pixel of interest, H_i denotes the pixel gradient change row value of the i-th row, H_0 denotes the pixel gradient change row value of the row containing the pixel of interest, B_k denotes the contour coefficient of the k-th class, K denotes the number of classes produced by the clustering, and c denotes a constant.
Further, S3 comprises the following sub-steps:
S31, constructing a rectangular coordinate system with the blurred pixel having the maximum brightness value in the blurred pixel set as the origin, and determining the position coordinates of the remaining blurred pixels in the blurred pixel set;
S32, determining the first edge blurred pixel and the second edge blurred pixel of the blurred pixel set from the position coordinates of the remaining blurred pixels;
S33, determining the brightness adjustment threshold from the brightness value of the first edge blurred pixel and the brightness value of the second edge blurred pixel;
S34, performing brightness processing on all pixels of the blurred pixel set according to the brightness adjustment threshold.
The beneficial effects of the above further scheme are: in the invention, the rectangular coordinate system constructed with the maximum-brightness pixel as the origin is used to determine the two edge blurred pixels that lie closest to and farthest from the origin. These two pixels serve as characteristic pixels in the calculation of the brightness adjustment threshold (the pixels farthest from and nearest to the maximum-brightness pixel are the ones most likely to show a large brightness change), and the threshold so determined is then used to change the brightness values of the blurred pixel set.
Further, in S32, the specific method for determining the first edge blurred pixel and the second edge blurred pixel is: calculate the straight-line distance between each remaining blurred pixel and the origin, take the blurred pixel farthest from the origin as the first edge blurred pixel, and take the blurred pixel nearest to the origin as the second edge blurred pixel.
Further, in S33, the brightness adjustment threshold E is calculated by a formula that appears only as an image in the original publication and is not reproduced here. In it, u_1 denotes the abscissa of the first edge blurred pixel, v_1 its ordinate, u_2 denotes the abscissa of the second edge blurred pixel, v_2 its ordinate, F_1 denotes the brightness value of the first edge blurred pixel, and F_2 denotes the brightness value of the second edge blurred pixel.
Further, in S34, the specific method for performing the brightness processing is: take the brightness adjustment threshold as the new brightness value of the pixel at the origin of the rectangular coordinate system, and take the product of the brightness value of each remaining blurred pixel in the blurred pixel set and the brightness adjustment threshold as the new brightness value of that pixel.
The beneficial effects of the invention are as follows: by screening the pixels of the working image of the substrate to be etched, the invention identifies the pixels with abnormal pixel values, i.e. the blurred pixel set, and adjusts the brightness of the elements of that set. This improves the sharpness of the image, allows a user observing the substrate image to spot anomalies in time, and thereby improves the nanoimprint lithography process.
Drawings
Fig. 1 is a flow chart of an image processing method of nanoimprint lithography.
Detailed Description
Embodiments of the present invention are further described below with reference to the accompanying drawings.
As shown in Fig. 1, the present invention provides an image processing method for nanoimprint lithography, comprising the following steps:
S1, acquiring a working image of the substrate to be etched during the imprint lithography process, preprocessing the working image, and generating an image to be processed of the substrate;
S2, determining the blurred pixel set of the image to be processed;
S3, performing brightness processing on the blurred pixel set to complete the image processing.
In the embodiment of the invention, in S1, the specific method for preprocessing the working image of the substrate to be etched is: sequentially denoising and cropping the working image.
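As a concrete illustration of this preprocessing step, the following sketch applies non-local-means denoising and then a fixed crop using OpenCV and NumPy. The helper name, the choice of denoising filter, and the crop window are illustrative assumptions; the source states only that denoising and cropping are performed in sequence.

```python
import cv2
import numpy as np

def preprocess_working_image(work_img: np.ndarray,
                             crop_box=(0, 0, 512, 512)) -> np.ndarray:
    """Denoise and then crop the working image of the substrate (S1).

    The denoising filter and the crop box are illustrative assumptions;
    the source only states that denoising and cropping are applied in order.
    """
    # Convert to grayscale so later steps operate on a single pixel value per point.
    if work_img.ndim == 3:
        work_img = cv2.cvtColor(work_img, cv2.COLOR_BGR2GRAY)
    # Non-local-means denoising (one possible choice of denoising method).
    denoised = cv2.fastNlMeansDenoising(work_img, None, h=10,
                                        templateWindowSize=7, searchWindowSize=21)
    # Crop to the region of interest: (x, y, width, height).
    x0, y0, cw, ch = crop_box
    return denoised[y0:y0 + ch, x0:x0 + cw]
```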
In an embodiment of the present invention, S2 comprises the following sub-steps:
S21, obtaining the pixel value of every pixel in the image to be processed;
S22, calculating the pixel gradient change row value of each row of the image to be processed from the pixel values, and generating the pixel gradient change row sequence;
S23, clustering the pixel gradient change row sequence to obtain the contour (silhouette) coefficient of each class;
S24, taking the pixel with the maximum pixel value in the row with the maximum pixel gradient change row value as the pixel of interest;
S25, calculating the fuzzy similarity between each other pixel in the image to be processed and the pixel of interest, using the contour coefficient of each class;
S26, taking all pixels whose fuzzy similarity is smaller than the fuzzy similarity threshold as the blurred pixel set.
In the invention, a logarithmic operation is applied to the pixel values of adjacent pixels in each row of the image to be processed, and the pixel gradient change row value of each row is determined by combining this with parameters such as the difference between the last and the first pixel value of the row, yielding a pixel gradient change row sequence with one value per row. Because the image to be processed contains many rows, a clustering algorithm (for example, K-means clustering) is applied to this sequence; this reduces the complexity of the data set and facilitates the fuzzy similarity calculation with respect to the pixel of interest, since pixels whose similarity to the pixel of interest differs greatly may exhibit brightness blur. The fuzzy similarity threshold is generally 0.5 and can be set manually according to the actual situation.
In the embodiment of the present invention, in S22, the pixel gradient change row value H_i of the i-th row of the image to be processed is calculated by a formula that appears only as an image in the original publication and is not reproduced here. In it, G_(i,j) denotes the pixel value of the pixel in row i and column j of the image to be processed, I denotes the number of rows of pixels in the image to be processed, J denotes the number of columns of pixels in the image to be processed, G_(i,j+1) denotes the pixel value of the pixel in row i and column j+1, G_(i,J) denotes the pixel value of the pixel in row i and column J (the last column), G_(i,1) denotes the pixel value of the pixel in row i and column 1, and ln(·) denotes the natural logarithm.
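Because the exact expression for H_i cannot be recovered from the published text, the sketch below implements one plausible reading of the surrounding description: a sum of logarithms of adjacent-pixel ratios in each row, combined with the difference between the last and the first pixel of the row. The function name and the precise form of the combination are assumptions made only to show how a per-row value and the resulting row sequence could be computed.

```python
import numpy as np

def row_gradient_values(img: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Compute an assumed pixel gradient change row value H_i for every row (S22).

    The true formula is not reproduced in the source; this combines a
    logarithmic term over adjacent pixels with the last-minus-first pixel
    difference of each row, as the description suggests.
    """
    g = img.astype(np.float64) + eps          # avoid log(0)
    # Logarithm of the ratio of each pixel to its left neighbour, summed per row.
    log_term = np.sum(np.abs(np.log(g[:, 1:] / g[:, :-1])), axis=1)
    # Difference between the last and the first pixel value of each row.
    end_diff = g[:, -1] - g[:, 0]
    return log_term + np.abs(end_diff)        # one H_i per row, i.e. the row sequence
```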
In the embodiment of the present invention, in S25, the fuzzy similarity V_(i,j) between the pixel in row i and column j of the image to be processed and the pixel of interest is calculated by a formula that appears only as an image in the original publication and is not reproduced here. In it, G_(i,j) denotes the pixel value of the pixel in row i and column j of the image to be processed, G_0 denotes the pixel value of the pixel of interest, H_i denotes the pixel gradient change row value of the i-th row, H_0 denotes the pixel gradient change row value of the row containing the pixel of interest, B_k denotes the contour coefficient of the k-th class, K denotes the number of classes produced by the clustering, and c denotes a constant.
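Continuing the sketch, the fragment below takes the row values H_i from the previous block, clusters them with K-means, derives a contour (silhouette) coefficient per class, selects the pixel of interest, and builds the blurred pixel set with a similarity threshold of 0.5 (steps S23 to S26). The clustering library, the number of classes, and the similarity expression are all assumptions, since the published formula for V_(i,j) is not reproduced in the text.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

def blurred_pixel_set(img, h_rows, n_clusters=3, threshold=0.5):
    """Steps S23-S26 under assumed formulas (the published ones are not reproduced)."""
    g = img.astype(np.float64)
    # S23: cluster the row sequence and take the mean silhouette value of each
    # class as its "contour coefficient" B_k.
    x = h_rows.reshape(-1, 1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(x)
    sil = silhouette_samples(x, labels)
    class_coeffs = np.array([sil[labels == k].mean() for k in range(n_clusters)])
    # S24: pixel of interest = maximum-value pixel in the row with the largest H_i.
    i0 = int(np.argmax(h_rows))
    j0 = int(np.argmax(g[i0]))
    g0, h0 = g[i0, j0], h_rows[i0]
    # S25: assumed fuzzy similarity of every pixel to the pixel of interest,
    # mixing the pixel-value gap, the row-value gap and the mean contour coefficient.
    c = 1.0  # constant c from the patent; its value is not specified
    sim = (np.exp(-np.abs(g - g0) / (g0 + c))
           * np.exp(-np.abs(h_rows[:, None] - h0) / (h0 + c))
           * (1.0 + class_coeffs.mean()) / 2.0)
    # S26: pixels whose similarity falls below the threshold form the blurred set.
    return np.argwhere(sim < threshold)
```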
In the embodiment of the present invention, S3 comprises the following sub-steps:
S31, constructing a rectangular coordinate system with the blurred pixel having the maximum brightness value in the blurred pixel set as the origin, and determining the position coordinates of the remaining blurred pixels in the blurred pixel set;
S32, determining the first edge blurred pixel and the second edge blurred pixel of the blurred pixel set from the position coordinates of the remaining blurred pixels;
S33, determining the brightness adjustment threshold from the brightness value of the first edge blurred pixel and the brightness value of the second edge blurred pixel;
S34, performing brightness processing on all pixels of the blurred pixel set according to the brightness adjustment threshold.
In the invention, the rectangular coordinate system constructed with the maximum-brightness pixel as the origin is used to determine the two edge blurred pixels that lie closest to and farthest from the origin. These two pixels serve as characteristic pixels in the calculation of the brightness adjustment threshold (the pixels farthest from and nearest to the maximum-brightness pixel are the ones most likely to show a large brightness change), and the threshold so determined is then used to change the brightness values of the blurred pixel set.
In the embodiment of the present invention, in S32, the specific method for determining the first edge blurred pixel and the second edge blurred pixel is: calculate the straight-line distance between each remaining blurred pixel and the origin, take the blurred pixel farthest from the origin as the first edge blurred pixel, and take the blurred pixel nearest to the origin as the second edge blurred pixel.
In the embodiment of the present invention, in S33, the brightness adjustment threshold E is calculated by a formula that appears only as an image in the original publication and is not reproduced here. In it, u_1 denotes the abscissa of the first edge blurred pixel, v_1 its ordinate, u_2 denotes the abscissa of the second edge blurred pixel, v_2 its ordinate, F_1 denotes the brightness value of the first edge blurred pixel, and F_2 denotes the brightness value of the second edge blurred pixel.
In the embodiment of the present invention, in S34, the specific method for performing the brightness processing is: take the brightness adjustment threshold as the new brightness value of the pixel at the origin of the rectangular coordinate system, and take the product of the brightness value of each remaining blurred pixel in the blurred pixel set and the brightness adjustment threshold as the new brightness value of that pixel.
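A sketch of the brightness adjustment of S31 to S34 follows. Coordinates are taken relative to the brightest blurred pixel, the farthest and the nearest blurred pixels from that origin serve as the two edge pixels, and the adjustment threshold E is computed here as a distance-weighted mean of their brightness values. That expression for E, like the helper name and argument layout, is an assumption, since the published formula is not reproduced in the text.

```python
import numpy as np

def adjust_blurred_brightness(brightness, blurred_idx):
    """Steps S31-S34: rescale the brightness of the blurred pixel set.

    `brightness` is a 2-D array of brightness values and `blurred_idx` an
    (N, 2) array of (row, col) indices of the blurred pixel set.  The formula
    used for the adjustment threshold E is an assumed stand-in.
    """
    vals = brightness[blurred_idx[:, 0], blurred_idx[:, 1]]
    # S31: origin = blurred pixel with the maximum brightness value.
    origin = blurred_idx[np.argmax(vals)]
    coords = blurred_idx - origin                      # position coordinates
    dist = np.hypot(coords[:, 0], coords[:, 1])
    # S32: farthest pixel = first edge pixel, nearest non-origin pixel = second.
    first = blurred_idx[np.argmax(dist)]
    dist_no_origin = np.where(dist == 0, np.inf, dist)
    second = blurred_idx[np.argmin(dist_no_origin)]
    f1, f2 = brightness[tuple(first)], brightness[tuple(second)]
    d1, d2 = dist.max(), dist_no_origin.min()
    # S33: assumed brightness adjustment threshold (published formula not reproduced).
    e = (d1 * f1 + d2 * f2) / (d1 + d2)
    # S34: the origin pixel gets E; every other blurred pixel is multiplied by E.
    out = brightness.astype(np.float64).copy()
    out[tuple(origin)] = e
    others = blurred_idx[~np.all(blurred_idx == origin, axis=1)]
    out[others[:, 0], others[:, 1]] *= e
    return out
```

In practice the adjusted values may need to be clipped back to the valid brightness range of the image format before display; the patent text does not address this point.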
Those of ordinary skill in the art will recognize that the embodiments described herein are intended to help the reader understand the principles of the invention, and that the scope of the invention is not limited to these specific statements and embodiments. Those of ordinary skill in the art can make various other modifications and combinations based on the teachings of the present disclosure without departing from its spirit, and such modifications and combinations remain within the scope of the present disclosure.

Claims (1)

1. An image processing method for nanoimprint lithography, comprising the following steps:
S1, acquiring a working image of the substrate to be etched during the imprint lithography process, preprocessing the working image, and generating an image to be processed of the substrate;
S2, determining the blurred pixel set of the image to be processed;
S3, performing brightness processing on the blurred pixel set to complete the image processing;
in the step S1, the specific method for preprocessing the working image of the substrate to be etched is: sequentially denoising and cropping the working image;
the step S2 comprises the following sub-steps:
S21, obtaining the pixel value of every pixel in the image to be processed;
S22, calculating the pixel gradient change row value of each row of the image to be processed from the pixel values, and generating the pixel gradient change row sequence;
S23, clustering the pixel gradient change row sequence to obtain the contour coefficient of each class;
S24, taking the pixel with the maximum pixel value in the row with the maximum pixel gradient change row value as the pixel of interest;
S25, calculating the fuzzy similarity between each other pixel in the image to be processed and the pixel of interest, using the contour coefficient of each class;
S26, taking all pixels whose fuzzy similarity is smaller than the fuzzy similarity threshold as the blurred pixel set;
in S22, the pixel gradient change row value H_i of the i-th row of the image to be processed is calculated by a formula that appears only as an image in the original publication and is not reproduced here; in it, G_(i,j) denotes the pixel value of the pixel in row i and column j of the image to be processed, J denotes the number of columns of pixels in the image to be processed, G_(i,j+1) denotes the pixel value of the pixel in row i and column j+1, G_(i,J) denotes the pixel value of the pixel in row i and column J (the last column), G_(i,1) denotes the pixel value of the pixel in row i and column 1, and ln(·) denotes the natural logarithm;
in S25, the fuzzy similarity V_(i,j) between the pixel in row i and column j of the image to be processed and the pixel of interest is calculated by a formula that appears only as an image in the original publication and is not reproduced here; in it, G_(i,j) denotes the pixel value of the pixel in row i and column j, G_0 denotes the pixel value of the pixel of interest, H_i denotes the pixel gradient change row value of the i-th row, H_0 denotes the pixel gradient change row value of the row containing the pixel of interest, B_k denotes the contour coefficient of the k-th class, K denotes the number of classes produced by the clustering, and c denotes a constant;
the step S3 comprises the following sub-steps:
S31, constructing a rectangular coordinate system with the blurred pixel having the maximum brightness value in the blurred pixel set as the origin, and determining the position coordinates of the remaining blurred pixels in the blurred pixel set;
S32, determining the first edge blurred pixel and the second edge blurred pixel of the blurred pixel set from the position coordinates of the remaining blurred pixels;
S33, determining the brightness adjustment threshold from the brightness value of the first edge blurred pixel and the brightness value of the second edge blurred pixel;
S34, performing brightness processing on all pixels of the blurred pixel set according to the brightness adjustment threshold;
in the step S32, the specific method for determining the first edge blurred pixel and the second edge blurred pixel is: calculate the straight-line distance between each remaining blurred pixel and the origin, take the blurred pixel farthest from the origin as the first edge blurred pixel, and take the blurred pixel nearest to the origin as the second edge blurred pixel;
in S33, the brightness adjustment threshold E is calculated by a formula that appears only as an image in the original publication and is not reproduced here; in it, u_1 denotes the abscissa of the first edge blurred pixel, v_1 its ordinate, u_2 denotes the abscissa of the second edge blurred pixel, v_2 its ordinate, F_1 denotes the brightness value of the first edge blurred pixel, and F_2 denotes the brightness value of the second edge blurred pixel;
in S34, the specific method for performing the brightness processing is: take the brightness adjustment threshold as the new brightness value of the pixel at the origin of the rectangular coordinate system, and take the product of the brightness value of each remaining blurred pixel in the blurred pixel set and the brightness adjustment threshold as the new brightness value of that pixel.
CN202311785282.6A 2023-12-25 2023-12-25 Image processing method for nanoimprint lithography Active CN117495722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311785282.6A CN117495722B (en) 2023-12-25 2023-12-25 Image processing method for nanoimprint lithography


Publications (2)

Publication Number Publication Date
CN117495722A CN117495722A (en) 2024-02-02
CN117495722B true CN117495722B (en) 2024-03-29

Family

ID=89683213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311785282.6A Active CN117495722B (en) 2023-12-25 2023-12-25 Image processing method for nanoimprint lithography

Country Status (1)

Country Link
CN (1) CN117495722B (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5847471B2 (en) * 2011-07-20 2016-01-20 キヤノン株式会社 Image processing apparatus, imaging apparatus, image processing method, and image processing program
WO2013031807A1 (en) * 2011-09-02 2013-03-07 シャープ株式会社 Three-dimensional image generation method, three-dimensional image generation device, and display device comprising same
JP6548907B2 (en) * 2015-02-24 2019-07-24 三星ディスプレイ株式會社Samsung Display Co.,Ltd. IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101193212A (en) * 2006-11-29 2008-06-04 明基电通股份有限公司 Method and device for processing backlight image
TW200828977A (en) * 2006-12-28 2008-07-01 Altek Corp Brightness adjusting method
CN102693535A (en) * 2011-03-24 2012-09-26 深圳市蓝韵实业有限公司 Method for detecting light bundling device area in DR image
CN111127337A (en) * 2019-11-28 2020-05-08 稿定(厦门)科技有限公司 Image local area highlight adjusting method, medium, equipment and device
CN113689428A (en) * 2021-10-25 2021-11-23 江苏南通元辰钢结构制造有限公司 Mechanical part stress corrosion detection method and system based on image processing
CN115147416A (en) * 2022-09-02 2022-10-04 山东大山不锈钢制品有限公司 Rope disorder detection method and device for rope rewinder and computer equipment
CN115775215A (en) * 2022-12-07 2023-03-10 百度时代网络技术(北京)有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN117094909A (en) * 2023-08-31 2023-11-21 青岛天仁微纳科技有限责任公司 Nanometer stamping wafer image acquisition processing method
CN116958503A (en) * 2023-09-19 2023-10-27 广东新泰隆环保集团有限公司 Image processing-based sludge drying grade identification method and system
CN117252870A (en) * 2023-11-15 2023-12-19 青岛天仁微纳科技有限责任公司 Image processing method of nano-imprint mold

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A fast local gradient based super-resolution image reconstruction algorithm with fuzzy hyper-bias learning and sparse monitoring paradigm;Soumya Goswami 等;2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS);20150903;399-404 *
A Novel Low-Illumination Image Enhancement Method Based on Dual-Channel Prior;Xinyu Zhao 等;2020 Chinese Automation Congress (CAC);20210129;4244-4248 *
A two-fold fusion fuzzy framework to restore non-uniform illuminated blurred image;Jyotismita Chaki;Optik;20200331;1-21 *
A linear filtering multispectral image enhancement algorithm based on fuzzy kernel clustering; Liu Yali et al.; Application Research of Computers; 2015-05-31; Vol. 32, No. 5; 1536-1539 *
Image restoration algorithm based on fuzzy similarity fusion; Liu Weihua et al.; Journal of Computer-Aided Design & Computer Graphics; 2013-05-31; Vol. 25, No. 5; 616-621 *
Edge image enhancement technology based on fuzzy clustering algorithm; Wang Huiping et al.; Modern Electronics Technique; 2017-12-15; Vol. 40, No. 24; 103-105 *

Also Published As

Publication number Publication date
CN117495722A (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN108171688B (en) Wafer surface defect detection method based on Gabor characteristics and random dimensionality reduction
CN110674704A (en) Crowd density estimation method and device based on multi-scale expansion convolutional network
CN114511516B (en) Micro LED defect detection method based on unsupervised learning
CN111178261B (en) Face detection acceleration method based on video coding technology
CN116993718B (en) TFT array substrate defect detection method based on machine vision
CN114723708A (en) Handicraft appearance defect detection method based on unsupervised image segmentation
CN114049267A (en) Improved neighborhood search based statistical and bilateral filtering point cloud denoising method
US11144702B2 (en) Methods and systems for wafer image generation
CN117094909A (en) Nanometer stamping wafer image acquisition processing method
CN117495722B (en) Image processing method for nanoimprint lithography
CN117252870B (en) Image processing method of nano-imprint mold
CN117422717B (en) Intelligent mask stain positioning method and system
CN114219762A (en) Defect detection method based on image restoration
CN116778235A (en) Wafer surface defect classification method based on deep learning network
CN112561949B (en) Rapid moving object detection algorithm based on RPCA and support vector machine
CN115587991A (en) Curve mask extraction method, curve mask extraction device and storage medium
Ivanovska et al. Tomatodiff: On–plant tomato segmentation with denoising diffusion models
CN112764316A (en) Control equipment and control method of stepping exposure machine
CN116309758B (en) OpenCV-based line laser image automatic alignment method and terminal equipment
CN111896038B (en) Semiconductor process data correction method based on correlation entropy and shallow neural network
CN114488719B (en) OPC method based on three-dimensional feature reinforcement
CN117576568B (en) Depth robust non-negative matrix factorization method based on incremental learning
CN113409357B (en) Correlated filtering target tracking method based on double space-time constraints
CN111986153B (en) Digital image correlation algorithm stability test method
CN113792506B (en) MOCVD intracavity state identification method based on image processing and machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant