CN111829458B - Gamma nonlinear error correction method based on deep learning - Google Patents


Info

Publication number
CN111829458B
Authority
CN
China
Prior art keywords
layer
model
phase
path
convolution layer
Prior art date
Legal status
Active
Application number
CN202010695724.8A
Other languages
Chinese (zh)
Other versions
CN111829458A (en)
Inventor
张晓磊
左超
沈德同
Current Assignee
Nanjing Guangyu Vision Technology Co.,Ltd.
Original Assignee
Nanjing University Of Technology Intelligent Computing Imaging Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing University Of Technology Intelligent Computing Imaging Research Institute Co ltd
Priority to CN202010695724.8A
Publication of CN111829458A
Application granted
Publication of CN111829458B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254 Projection of a pattern, viewing through a pattern, e.g. moiré
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent


Abstract

The invention discloses a gamma nonlinear error correction method based on deep learning, which comprises the following steps: establishing a model based on a convolutional neural network; after training, obtaining the numerator and denominator terms used to calculate the phase; and substituting the two terms into an arctangent function to calculate the phase of the object. Compared with the multi-step phase-shift method, the method greatly reduces the number of images to acquire, the acquisition time, and the amount of computation; compared with mathematical transforms such as the Fourier transform, it involves no large, complex operations, so its computational cost is low and its speed is high; compared with methods that calibrate the gamma value, it requires no complex calibration operations.

Description

Gamma nonlinear error correction method based on deep learning
Technical Field
The invention relates to the technical field of optical measurement, in particular to a gamma nonlinear error correction method based on deep learning.
Background
Fringe projection profilometry, a major non-contact optical measurement method, is widely applied in 3D modeling, engineering practice, science and education, and other fields. The most important way to obtain the phase in fringe profilometry is the phase-shift method, in particular the N-step phase-shift method ("Automated phase-measuring profilometry of 3-D diffuse objects", Srinivasan V. et al.). Fringe profilometry based on phase-shift methods has the following main sources of error in object phase measurement: phase-shift error, the gamma nonlinearity error of the projector, light-source stability, vibration error, and quantization error. Because the fringes are generated by software, and with the development of digital grating display technology, a commercial digital light processing (DLP) projector can eliminate phase-shift error; light-source stability and vibration error can be addressed by reinforcing the measurement system. The main factor affecting measurement accuracy is then the gamma nonlinearity error of the projector.
Over the past decades, many scholars have worked on correcting the gamma nonlinearity error of projectors, and many different methods have been proposed. They fall mainly into two directions: mathematical transformation and calibration of the gamma value. Since gamma nonlinearity errors appear as higher harmonics, the Fourier transform ("A fast and accurate gamma correction based on Fourier spectrum analysis for digital fringe projection profilometry", Ma S. et al.), the Hilbert transform, the wavelet transform, and similar tools are the main mathematical means, but they involve complicated mathematics and a large amount of computation. Methods that calibrate the gamma value ("Phase error compensation for a 3-D shape measurement system based on the phase-shifting method", S. Zhang et al.) are not computationally intensive, but the calibration of the gamma value is sometimes complicated. In addition, there are high-step-number phase-shift methods, such as 8-step, 12-step, and even 20-step phase shifting, which require many images and are therefore time-consuming and computationally expensive. A gamma correction method with a small amount of computation, high speed, and simple operation is therefore still lacking.
Disclosure of Invention
The invention aims to provide a gamma correction method with a small amount of computation, high speed, and simple operation, namely a gamma nonlinear error correction method based on deep learning.
The technical scheme of the invention is as follows: a gamma nonlinear error correction method based on deep learning comprises the following steps:
step one, establishing a model based on a convolutional neural network, wherein three high-frequency three-step phase-shift fringe images are input to the model and the numerator and denominator terms for calculating the phase are output;
step two, generating training data and training the model; after training, the required numerator and denominator terms for calculating the phase are obtained;
and step three, substituting the numerator and denominator terms into an arctangent function to calculate the phase of the object.
Preferably, the three images are processed in the model as follows: the input is split into four paths processed in parallel, wherein the first path passes in sequence through convolution layer one, pooling layer one, residual block one, and convolution layer two; the second path through convolution layer three, pooling layer two, residual block two, upsampling layer one, and convolution layer four; the third path through convolution layer five, pooling layer three, residual block three, upsampling layer two, upsampling layer three, and convolution layer six; the fourth path through convolution layer seven, pooling layer four, residual block four, upsampling layer five, upsampling layer six, and convolution layer eight. Finally, the results of the four paths are gathered by connection layer one and passed through convolution layer nine to obtain a result with 2 output channels, in which the first channel is the numerator term and the second channel is the denominator term.
Preferably, each path uses multi-scale downsampling: the first path keeps full resolution (×1), the second path downsamples to 1/2, the third to 1/4, and the fourth to 1/8.
Preferably, convolution layer six in the model is given no activation function, while the remaining convolution layers use the linear rectification function (ReLU) as the activation function.
Preferably, in step two, the training data are acquired with an N-step phase-shift method: each fringe pattern of the N-step phase shift serves as an input image of the model, and the numerator and denominator terms of the phase calculated by the N-step phase-shift method serve as the standard values for model training.
Preferably, in step three, the arctangent function is
φ(x, y) = arctan( M(x, y) / D(x, y) )
where φ is the phase and (x, y) are the image coordinates.
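As a minimal illustration of this step (the function name and the [M, D] channel ordering are assumptions of this sketch, not specified by the patent), the wrapped phase can be computed with NumPy's two-argument arctangent:

```python
import numpy as np

def phase_from_model_output(out):
    """Compute the wrapped phase from a 2-channel model output.

    out: array of shape (2, H, W); channel 0 is the numerator term M and
    channel 1 the denominator term D (a channel-ordering assumption of
    this sketch).
    """
    M, D = out[0], out[1]
    # arctan2 resolves the quadrant and tolerates D == 0,
    # unlike a plain arctan(M / D).
    return np.arctan2(M, D)  # wrapped phase in (-pi, pi]
```

Using arctan2 rather than arctan(M/D) yields the full (-π, π] range, which is the usual convention in phase-shifting profilometry.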
Compared with the traditional method, the method has the following advantages:
(1) Compared with the multi-step phase-shift method, the method greatly reduces the number of images to acquire, the acquisition time, and the amount of computation; (2) compared with mathematical transforms such as the Fourier transform, it involves no large, complex operations, so its computational cost is low and its speed is high; (3) compared with methods that calibrate the gamma value, it requires no complex calibration operations.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a convolutional neural network in an embodiment of the present invention.
FIG. 3 is the fringe pattern to be measured in an embodiment of the present invention.
FIG. 4 is a diagram illustrating the numerator term of the network model output in an embodiment of the present invention.
FIG. 5 is a diagram illustrating the denominator term of the network model output in an embodiment of the present invention.
FIG. 6 is the phase calculated using the arctangent function in an embodiment of the present invention.
FIG. 7 is a three-dimensional model obtained by an embodiment of the present invention.
Fig. 8 is a three-dimensional model (with gamma distortion results) obtained using a three-step phase shift calculation.
Fig. 9 shows a three-dimensional model (reference result) obtained by 12-step phase shift.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The present embodiment is a gamma nonlinear error correction method based on deep learning. As shown in FIG. 1, the process can be briefly described as follows: establish a model based on a convolutional neural network; after training, obtain the numerator and denominator terms used to calculate the phase; and substitute the two terms into an arctangent function to calculate the phase of the object. The specific steps are as follows.
Step one: establish the deep neural network model. According to the basic principle of fringe image analysis, a fringe image I_n may be expressed as
I_n(x, y) = A(x, y) + B(x, y) cos( φ(x, y) − 2πn/N )      (1)
where (x, y) is the pixel coordinate, A(x, y) is the background image, B(x, y) is the modulation image, and φ(x, y) is the phase to be calculated, which is obtained from
φ(x, y) = arctan( M(x, y) / D(x, y) )      (2)
where M = numerator(x, y) and D = denominator(x, y).
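These relations can be sketched in NumPy using the standard N-step phase-shift formulas (an illustrative sketch; the names and shapes here are my own, and the patent does not specify an implementation):

```python
import numpy as np

def phase_from_fringes(fringes):
    """Numerator, denominator, and wrapped phase from N phase-shifted fringes.

    fringes: array of shape (N, H, W), where frame n carries the phase shift
    2*pi*n/N, i.e. I_n = A + B*cos(phi - 2*pi*n/N).
    """
    N = fringes.shape[0]
    n = np.arange(N).reshape(N, 1, 1)
    M = (fringes * np.sin(2 * np.pi * n / N)).sum(axis=0)  # numerator term
    D = (fringes * np.cos(2 * np.pi * n / N)).sum(axis=0)  # denominator term
    phi = np.arctan2(M, D)  # wrapped phase in (-pi, pi]
    return M, D, phi
```

With these conventions, a noise-free fringe stack reproduces the input phase up to numerical precision; it is this M, D pair that the network is trained to predict from only three fringe images.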
For the constructed model, the input data are I_n (n = 0, 1, 2) and the output is the numerator and denominator terms required to calculate the phase. As shown in FIG. 2, the input images I_n are processed along four paths in parallel:
the first path sequentially passes through a convolution layer I, a pooling layer I, a residual block I and a convolution layer II;
the second path sequentially passes through a convolution layer III, a pooling layer II, a residual block II, an upper sampling layer I and a convolution layer IV;
the third path sequentially passes through a convolution layer five, a pooling layer three, a residual block three, an upper sampling layer two, an upper sampling layer three and a convolution layer six;
the fourth path sequentially passes through a convolution layer seven, a pooling layer four, a residual block four, an upper sampling layer five, an upper sampling layer six and a convolution layer eight;
and finally, collecting the four paths of results through the first connecting layer, and obtaining a result with the output channel number of 2 through the ninth convolution layer.
Each residual block is constructed following "Deep Residual Learning for Image Recognition" (K. He et al.). The parameter H denotes the height of the image in pixels, W its width in pixels, and C the number of channels. Since there are 2 output channels, the final output convolution layer has 2 channels: one channel is the numerator term M and the other is the denominator term D.
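The four-path, multi-scale structure described above can be approximated in PyTorch as follows. This is an illustrative sketch, not the patent's exact network: the channel counts, the pooling type, the placement of the upsampling layers, and the absence of an activation on the final layer are all assumptions, and the residual block follows the generic He et al. layout.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block in the style of He et al. (assumed layout)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))

def make_path(in_ch, ch, scale):
    """Conv -> pool (downsample to 1/scale) -> residual -> upsample -> conv."""
    layers = [nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True)]
    if scale > 1:
        layers.append(nn.AvgPool2d(scale))
    layers.append(ResidualBlock(ch))
    if scale > 1:
        layers.append(nn.Upsample(scale_factor=scale, mode='bilinear',
                                  align_corners=False))
    layers += [nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class FourPathNet(nn.Module):
    def __init__(self, in_ch=3, ch=16):
        super().__init__()
        # Paths at full, 1/2, 1/4, and 1/8 resolution.
        self.paths = nn.ModuleList(make_path(in_ch, ch, s) for s in (1, 2, 4, 8))
        # Final convolution: 2 output channels (M and D); no activation here,
        # since the numerator and denominator may be negative.
        self.head = nn.Conv2d(4 * ch, 2, 3, padding=1)
    def forward(self, x):
        return self.head(torch.cat([p(x) for p in self.paths], dim=1))
```

Because of the 1/8-scale path, this sketch requires the input height and width to be divisible by 8.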
Step two: generate training data and train the model. Using the N-step phase-shift method, s different scenes are measured. For each scene, N phase-shifted fringe patterns are captured; the acquired fringe patterns are denoted I_t (t = 1, 2, …, T, where T = s×N is the total number of training samples). The N-step phase-shift method is then used to calculate the numerator term M_t and denominator term D_t of the phase of each fringe pattern.
After the training data are generated, training proceeds as follows: I_t is used as the input data, and M_t, D_t are fed to the model as the standard values. The mean square error is used as the loss function to calculate the difference between the standard values M_t, D_t and the model outputs Mout_t, Dout_t. Combined with the back-propagation algorithm, the internal parameters of the model are iteratively optimized until the loss function converges; model training is then finished and the two outputs are obtained. In the whole model, except for convolution layer six, all convolution layers use the linear rectification function as the activation function. When iteratively optimizing the loss function, the Adam algorithm is used to search for its minimum.
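The training procedure (MSE loss, back-propagation, Adam) can be sketched as below. The random tensors and the single-convolution stand-in model are placeholders for illustration only, not the patent's actual network or dataset.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-ins for the patent's training data: 3-frame fringe
# stacks I_t as inputs, and the N-step phase-shift numerator/denominator
# (M_t, D_t) stacked as the 2-channel standard values.
inputs = torch.randn(4, 3, 32, 32)
targets = torch.randn(4, 2, 32, 32)

# A single conv layer stands in for the four-path CNN of FIG. 2.
model = nn.Conv2d(3, 2, kernel_size=3, padding=1)
loss_fn = nn.MSELoss()                                     # mean square error
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam optimizer

losses = []
for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)  # standard values vs. model output
    loss.backward()                         # back-propagation
    optimizer.step()
    losses.append(loss.item())
```

In practice the loop would run until the loss converges on a validation set rather than for a fixed number of steps.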
Step three: calculate the phase φ(x, y) using the two outputs obtained after training and the arctangent function of formula (2).
To verify the effectiveness of the proposed method, a digital grating projection apparatus was built to collect fringe images, using two cameras (model acA640-750, Basler), a projector (model LightCrafter 4500, TI), and a computer. To generate training data, 12-step phase shifting was used to capture the fringe images I_t and to generate their corresponding phase numerator and denominator terms M_t and D_t. There were 1223 groups in total, of which 1073 were used for training and 150 for validation. After training, 1 scene that did not appear in training was selected as a test of the method's effectiveness. FIG. 3 is the fringe pattern input during testing, FIGS. 4 and 5 are the numerator and denominator terms output by the network model, respectively, and FIG. 6 is the phase calculated by the arctangent function.
To compare the effects of the methods, the result obtained in the embodiment, the result of the three-step phase-shift method (with gamma distortion), and the result of the 12-step phase-shift calculation (the reference) are compared, with the phase information converted into three-dimensional form for display. FIGS. 7 to 9 show the three-dimensional results of the 3 methods: FIG. 7 for the method of this embodiment, FIG. 8 for the three-step phase-shift method, and FIG. 9 for the 12-step phase-shift method. It can be seen that, compared with FIG. 9, the surface in FIG. 8 shows many ripples (the gamma-induced error), while FIG. 7 restores the three-dimensional information of the original object well. Meanwhile, only 3 images were used for FIG. 7, whereas 12 were used for the reference result of FIG. 9. In summary, the method of this embodiment is a deep-learning-based gamma nonlinear error correction method with a small amount of computation, high speed, and simple operation.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (3)

1. A gamma nonlinear error correction method based on deep learning is characterized by comprising the following steps:
step one, establishing a model based on a convolutional neural network, wherein three high-frequency three-step phase-shift fringe images are input to the model and the numerator and denominator terms for calculating the phase are output;
the processing process of the three images in the model is as follows: the method comprises the following steps of dividing four paths into four paths and simultaneously carrying out the four paths, wherein the first path sequentially passes through a convolution layer I, a pooling layer I, a residual block I and a convolution layer II; the second path sequentially passes through a convolution layer III, a pooling layer II, a residual block II, an upper sampling layer I and a convolution layer IV; the third path sequentially passes through a convolution layer five, a pooling layer three, a residual block three, an upper sampling layer two, an upper sampling layer three and a convolution layer six; the fourth path sequentially passes through a convolution layer seven, a pooling layer four, a residual block four, an upper sampling layer five, an upper sampling layer six and a convolution layer eight; finally, the four paths of results are collected through the first connecting layer and pass through the ninth convolution layer together to obtain a result with the output channel number of 2; wherein the first channel is a numerator term and the second channel is a denominator term; each path of processing adopts multi-scale down sampling, the first path is 1 time down sampling, the second path is 1/2 down sampling, the third path is 1/4 down sampling, and the fourth path is 1/8 down sampling;
step two, acquiring training data with an N-step phase-shift method, wherein each fringe pattern of the N-step phase shift serves as an input image of the model and the numerator and denominator terms of the phase calculated by the N-step phase-shift method serve as the standard values for model training; using the mean square error as the loss function to calculate the difference between the standard values and the model outputs, and, combined with the back-propagation algorithm, iteratively optimizing the internal parameters of the model until the loss function converges, at which point model training is complete and the two outputs, the required numerator and denominator terms for calculating the phase, are obtained;
and step three, substituting the numerator and denominator terms into an arctangent function to calculate the phase of the object.
2. The deep-learning-based gamma nonlinear error correction method of claim 1, wherein convolution layer six in the model is given no activation function, while the remaining convolution layers use the linear rectification function as the activation function.
3. The deep-learning-based gamma nonlinear error correction method of claim 1, wherein, in step three, the arctangent function is
φ(x, y) = arctan( M(x, y) / D(x, y) )
where φ is the phase and (x, y) are the image coordinates.
CN202010695724.8A 2020-07-20 2020-07-20 Gamma nonlinear error correction method based on deep learning Active CN111829458B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010695724.8A CN111829458B (en) 2020-07-20 2020-07-20 Gamma nonlinear error correction method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010695724.8A CN111829458B (en) 2020-07-20 2020-07-20 Gamma nonlinear error correction method based on deep learning

Publications (2)

Publication Number Publication Date
CN111829458A CN111829458A (en) 2020-10-27
CN111829458B true CN111829458B (en) 2022-05-13

Family

ID=72923595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010695724.8A Active CN111829458B (en) 2020-07-20 2020-07-20 Gamma nonlinear error correction method based on deep learning

Country Status (1)

Country Link
CN (1) CN111829458B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634180B (en) * 2021-03-05 2021-08-03 浙江大华技术股份有限公司 Image enhancement method, image enhancement device and computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103337071A (en) * 2013-06-19 2013-10-02 北京理工大学 Device and method for structure-reconstruction-based subcutaneous vein three-dimensional visualization
CN110425986A (en) * 2019-07-17 2019-11-08 北京理工大学 Three-dimensional computations imaging method and device based on single pixel sensor

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105606038B (en) * 2015-09-09 2018-11-27 深圳大学 A kind of gamma non-linear correction method of phase measuring profilometer, system
JP7242185B2 (en) * 2018-01-10 2023-03-20 キヤノン株式会社 Image processing method, image processing apparatus, image processing program, and storage medium
CN108830856B (en) * 2018-05-25 2021-09-10 南京理工大学 GA automatic segmentation method based on time series SD-OCT retina image
CN109409190A (en) * 2018-08-21 2019-03-01 南京理工大学 Pedestrian detection method based on histogram of gradients and Canny edge detector
CN109253708B (en) * 2018-09-29 2020-09-11 南京理工大学 Stripe projection time phase unwrapping method based on deep learning
CN110472180B (en) * 2019-06-27 2023-07-21 南京市公安局鼓楼分局 Method for calculating trajectory of round projectile striking window perpendicular to ground
CN110686652B (en) * 2019-09-16 2021-07-06 武汉科技大学 Depth measurement method based on combination of depth learning and structured light
CN111402240A (en) * 2020-03-19 2020-07-10 南京理工大学 Three-dimensional surface type measuring method for single-frame color fringe projection based on deep learning
CN111351450B (en) * 2020-03-20 2021-09-28 南京理工大学 Single-frame stripe image three-dimensional measurement method based on deep learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103337071A (en) * 2013-06-19 2013-10-02 北京理工大学 Device and method for structure-reconstruction-based subcutaneous vein three-dimensional visualization
CN110425986A (en) * 2019-07-17 2019-11-08 北京理工大学 Three-dimensional computations imaging method and device based on single pixel sensor

Also Published As

Publication number Publication date
CN111829458A (en) 2020-10-27


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221206

Address after: Floor 4, Unit 2, Building B4, Science and Technology Complex, No. 8, Jialing Jiangdong Street, Shazhou Street, Jianye District, Nanjing City, Jiangsu Province, 210000

Patentee after: Nanjing Guangyu Vision Technology Co.,Ltd.

Address before: 4 / F, unit 2, building B4, science and technology complex, No. 8, Jialing Jiangdong Street, Jianye District, Nanjing City, Jiangsu Province 210000

Patentee before: Nanjing University of technology intelligent computing Imaging Research Institute Co.,Ltd.

TR01 Transfer of patent right