CN106960456A - Method for evaluating a fisheye camera calibration algorithm - Google Patents

Method for evaluating a fisheye camera calibration algorithm

Info

Publication number
CN106960456A
Authority
CN
China
Prior art keywords
layer
output
convolution
feature
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710192325.8A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Full Image Technology Co Ltd
Original Assignee
Changsha Full Image Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Full Image Technology Co Ltd filed Critical Changsha Full Image Technology Co Ltd
Priority to CN201710192325.8A priority Critical patent/CN106960456A/en
Publication of CN106960456A publication Critical patent/CN106960456A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for evaluating fisheye camera calibration results, relating to the field of computer vision, and comprising the following steps: S1: obtain the intrinsic parameter matrix and distortion coefficients of the calibrated camera; S2: train a neural network model; S3: based on a test data set and the trained neural network model, obtain the output of the test data set. The invention trains a neural network on the calibrated fisheye intrinsic parameters and distortion coefficients together with the world coordinates of the feature points of a given calibration board and their corresponding pixel coordinates under the camera. A neural network has a powerful nonlinear mapping capability: through training it can represent arbitrarily complex mathematical models, satisfying the nonlinear-mapping requirement of the camera model in wide-field-of-view camera calibration such as with fisheye lenses. There is thus no need to build a complicated nonlinear distortion model, and the accuracy of camera calibration results can be evaluated objectively and precisely.

Description

Method for evaluating a fisheye camera calibration algorithm
Technical field
The present invention relates to calibration algorithm evaluation in the field of computer vision, and more particularly to the technical field of camera calibration.
Background technology
In image measurement processes and machine vision applications, in order to determine the three-dimensional geometric position of a point on the surface of a space object and its relationship with the corresponding point in the image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera parameters. Under most conditions these parameters can only be obtained through experiment and calculation, and this process of solving for the parameters is called camera calibration. The purpose of camera calibration is to obtain the intrinsic and extrinsic parameters of the camera and then correct the images it captures, yielding images with relatively small distortion.
The usual camera calibration procedure is: collect a series of pictures of a calibration board, perform corner extraction on each picture, further extract sub-pixel information for each corner, then carry out the calibration of the camera, and finally evaluate the calibration result. The usual method for evaluating a camera calibration algorithm is to take the obtained intrinsic and extrinsic camera parameters, re-project the three-dimensional points of the space, obtain the coordinates of the new projected points on the image, and compute the deviation between the projected coordinates and the sub-pixel corner coordinates; the smaller the deviation, the better the calibration result.
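This conventional evaluation can be sketched with OpenCV's standard projection routine; the snippet below is illustrative only (function and variable names are assumptions, not taken from the patent):

```python
# Hedged sketch of the conventional reprojection-error evaluation described above.
import cv2
import numpy as np

def reprojection_error(object_points, corner_points, rvec, tvec, camera_matrix, dist_coeffs):
    """Mean distance between detected sub-pixel corners and reprojected corners."""
    projected, _ = cv2.projectPoints(object_points, rvec, tvec, camera_matrix, dist_coeffs)
    deviations = np.linalg.norm(projected.reshape(-1, 2) - corner_points.reshape(-1, 2), axis=1)
    return deviations.mean()  # the smaller the deviation, the better the calibration
```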
While a fisheye camera provides a wide field of view, the images it captures suffer from fisheye distortion: distortion is small near the image center and grows increasingly large toward the edges. This nonlinear distortion requires establishing a complicated camera model and choosing among different distortion models, which considerably increases the difficulty of calibration.
Content of the invention
To overcome the above shortcomings, the present invention aims to provide a method that can objectively and accurately evaluate the precision of camera calibration results without having to establish a complicated nonlinear distortion model.
A method for evaluating a fisheye camera calibration algorithm comprises the following steps:
S1: obtain the intrinsic parameter matrix and distortion coefficients of the calibrated camera; first the camera is calibrated with the camera calibration algorithm to be evaluated, yielding the intrinsic parameter matrix and distortion coefficients of the calibrated camera; then the position coordinates P of the feature points on the calibration board and the intrinsic parameters M and distortion coefficients K of the camera obtained by the calibration algorithm are combined into a training data set {P, M, K};
S2: train a neural network model, the neural network model being non-fully connected and the connection weights between some neurons in the same layer being shared; S2 comprises S201, S202 and S203;
S201: build a neural network model; step S201 is specifically: the training data set {P, M, K} obtained in step S1 is used as the network input to build a neural network model; the network has five layers, namely an input layer, a first convolution-sampling layer, a second convolution-sampling layer, a fully connected layer and an output layer; in the first convolution-sampling layer the input is first convolved with the different convolution kernels and learnable biases set for this layer, producing several features after convolution; the features are then subjected to feature-value summation, weighting and biasing at the set pooling scale, and the output of this layer is finally obtained through a Sigmoid function; the second convolution-sampling layer performs the same operations as the first, differing only in the convolution kernels, pooling scale and biases used by the two layers; the outputs of the two convolution-sampling layers are feature maps; the fully connected layer forward-propagates the features of the convolution-sampling layers to output a feature vector, and can also carry out the back-propagation operation; in the output layer, the input feature vector is mapped to the output specified by the size of the output label.
S202: set the convolution-sampling layer parameters; step S202 is specifically: in a convolutional layer l, the i-th feature of the input layer or of the previous layer, $x_i^{l-1}$, is convolved with a learnable convolution kernel, and the j-th output feature $x_j^l$ is then obtained through an activation function; each output $x_j^l$ may combine the convolutions of multiple inputs $x_i^{l-1}$; the specific calculation is:

$$x_j^l = f\Big(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l\Big)$$

where i and j index the feature maps of the previous layer and the current layer respectively, $M_j$ denotes the chosen subset of input features, $k_{ij}^l$ is the convolution kernel relating the j-th feature of layer l to the i-th feature of layer l-1, $b_j^l$ is the additive bias associated with the j-th feature of layer l, * denotes the convolution operation, and the activation function f(·) is the sigmoid function, which squashes the output to [0, 1]; the convolution is followed by a sub-sampling step, computed as:

$$x_j^l = f\big(\beta_j^l \,\mathrm{down}(x_j^{l-1}) + b_j^l\big)$$

where down(·) denotes a down-sampling function;
S203: train the deep convolutional neural network with the training data set;
S3: based on the test data set and the trained neural network model, obtain the output of the test data set;
Step S3 is specifically: the test samples are input to the trained neural network model, which computes the coordinate values of the pixels under the camera; the error between the output values and the actual values is then calculated as:

$$D = \sqrt{(x - x_r)^2 + (y - y_r)^2}, \qquad e = D \Big/ \sqrt{X^2 + Y^2}, \qquad Avg = \frac{1}{N}\sum_{i=1}^{N} e_i$$

where D is the distance between the output value and the actual value, e is the relative error, N is the number of pixels, (x, y) is the output pixel coordinate computed by the neural network, $(x_r, y_r)$ is the true pixel coordinate, X and Y are the dimensions of the calibration plane, and Avg is the evaluation value of the camera calibration result.
Further, the down-sampling function of the sub-sampling uses the Max-Pooling mode, with a pooling kernel size of 2×2 and a stride of 2.
Further, step S203 can be divided into the following two stages:
First stage: the forward propagation stage
For the given training data set, every training sample is input to the input layer of the deep convolutional neural network, passed successively through the transformations, and sent to the output layer; the actual output corresponding to every input is computed, and the error between the actual output and the ideal output is calculated using the squared-error cost function; the error of the n-th training sample is expressed as:

$$E^n = \frac{1}{2}\sum_{k=1}^{K}\left(y_k^n - o_k^n\right)^2 = \frac{1}{2}\left\|\mathbf{y}^n - \mathbf{o}^n\right\|_2^2$$

where K is the dimension of the output data, $y_k^n$ is the k-th dimension of the ideal output corresponding to the n-th training sample, and $o_k^n$ is the k-th output of the network for the n-th training sample;
Second stage: the back-propagation stage
The error propagated back is the sensitivity δ of each neuron's bias; the backward error-propagation formula of a convolutional layer is:

$$\delta_n^l = \beta_n^{l+1}\left(f'(u_n^l) \circ \mathrm{up}(\delta_n^{l+1})\right), \qquad u_n^l = W_n^l x_n^{l-1} + b_n^l$$

where ∘ denotes element-wise multiplication, l is the layer index, m and n index the feature maps of the previous layer and the current layer respectively, $\delta_n^l$ is the sensitivity of the n-th neural node of layer l, $\beta_n^{l+1}$ is the weight of the down-sampling layer (a trainable constant), up(·) denotes the up-sampling operation, f′ is the derivative of the activation function, $W_n^l$ and $b_n^l$ are the weight and bias associated with the n-th feature of layer l, and $x_n^{l-1}$ is the n-th feature of layer l-1;
The backward error-propagation formula of a pooling layer is computed as:

$$\delta_m^l = \sum_{n=1}^{M} \delta_n^{l+1} * k_{mn}^{l+1}$$

where M is the set of input features, $k_{mn}^{l+1}$ is the convolution kernel relating the n-th feature of layer l+1 to the m-th feature of layer l, $\delta_n^{l+1}$ is the sensitivity of the n-th neural node of layer l+1, and $\delta_m^l$ is the sensitivity of the m-th neural node of layer l;
Finally, the weights of each neuron are updated with the δ rule; the partial derivatives with respect to the bias and the convolution kernel are computed as:

$$\frac{\partial E}{\partial b_n} = \sum_{u,v}\left(\delta_n^l\right)_{uv}, \qquad \frac{\partial E}{\partial k_{mn}^l} = \sum_{u,v}\left(\delta_n^l\right)_{uv}\left(p_m^{l-1}\right)_{uv}$$

where E is the error cost function, $p_m^{l-1}$ is the patch of $x_m^{l-1}$ used at each position when computing the convolution, and u, v index the elements of the sensitivity matrix $\delta_n^l$; the convolution kernels and biases are updated using these partial derivatives.
The present invention first trains a neural network on the calibrated fisheye intrinsic parameters and distortion coefficients together with the world coordinates of the feature points of a given calibration board and their corresponding pixel coordinates under the camera. A neural network has a powerful nonlinear mapping capability: through training it can represent arbitrarily complex mathematical models, satisfying the nonlinear-mapping requirement of the camera model in wide-field-of-view camera calibration such as with fisheye lenses. There is thus no need to build a complicated nonlinear distortion model, and the accuracy of camera calibration results can be evaluated objectively and precisely.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will in part become apparent from that description, or may be learned through practice of the invention.
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is the flow chart of the fisheye camera calibration algorithm evaluation method in the embodiment of the present invention;
Fig. 2 is the flow chart of training the neural network model in step S2 in the embodiment of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the invention.
As shown in Fig. 1, the method for evaluating fisheye camera calibration results according to the invention specifically includes the following steps:
S1: obtain the intrinsic parameter matrix and distortion coefficients of the calibrated camera
First, the camera is calibrated using the camera calibration algorithm to be evaluated, yielding the intrinsic parameter matrix and distortion coefficients of the calibrated camera. Then the position coordinates P of the feature points on the calibration board and the intrinsic parameters M and distortion coefficients K of the camera obtained by the calibration algorithm are combined into the training data set {P, M, K}.
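A minimal sketch of assembling this training set follows; the flat per-sample layout and all names are assumptions, since the patent does not fix the encoding of {P, M, K}:

```python
# Hedged sketch: combine board feature points P, intrinsics M and distortion
# coefficients K into one training sample per feature point, with the detected
# pixel coordinate as the regression target.
import numpy as np

def build_training_set(board_points, intrinsic_matrix, dist_coeffs, pixel_coords):
    intrinsics = np.ravel(intrinsic_matrix)          # flatten the 3x3 intrinsic matrix M
    samples, targets = [], []
    for P, (x_r, y_r) in zip(board_points, pixel_coords):
        samples.append(np.concatenate([P, intrinsics, dist_coeffs]))  # input {P, M, K}
        targets.append([x_r, y_r])                                    # true pixel coordinate
    return np.asarray(samples, np.float32), np.asarray(targets, np.float32)
```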
S2: train the neural network model
In the embodiment of the present invention, a non-fully connected neural network model is used, and the connection weights between some neurons in the same layer are shared. This non-fully connected, weight-sharing network structure makes the model more similar to a biological neural network, reduces the complexity of the network model, and reduces the number of weights.
As shown in Fig. 2, training the neural network model comprises the following steps:
S201: build a neural network model.
The training data set {P, M, K} obtained in step S1 is used as the network input to build a neural network model. The network has five layers: an input layer, a first convolution-sampling layer, a second convolution-sampling layer, a fully connected layer and an output layer. In the first convolution-sampling layer, the input is first convolved with the different convolution kernels and learnable biases set for this layer, producing several features; feature-value summation, weighting and biasing are then applied to the features at the set pooling scale, and the output of this layer is obtained through a Sigmoid function. The second convolution-sampling layer performs the same operations as the first, differing only in the convolution kernels, pooling scale and biases used by the two layers. The outputs of the two convolution-sampling layers are feature maps. The fully connected layer forward-propagates the features of the convolution-sampling layers to output a feature vector (back-propagation can also be carried out), and the output layer maps the input feature vector to the output specified by the size of the output label.
The above is only one example of a deep convolutional neural network model; in practice the structure of the model can be set empirically according to the application, including parameters such as the number of convolution-pooling layers, the number of fully connected layers, the number and size of the convolution kernels, and the pooling scale.
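As an illustration of one such empirical configuration, the five-layer order described above could be sketched in PyTorch as follows; the channel counts, kernel sizes and the single-channel 16×16 input layout are assumptions, since the patent fixes only the layer order, the Sigmoid activations and the 2×2 pooling:

```python
# Hedged PyTorch sketch of the five-layer network: two convolution-sampling
# stages, one fully connected layer, and an output layer regressing (x, y).
import torch
import torch.nn as nn

class CalibEvalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(                 # first convolution-sampling layer
            nn.Conv2d(1, 6, kernel_size=3, padding=1), nn.Sigmoid(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.stage2 = nn.Sequential(                 # second convolution-sampling layer
            nn.Conv2d(6, 12, kernel_size=3, padding=1), nn.Sigmoid(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.fc = nn.Sequential(nn.Flatten(),        # fully connected layer
                                nn.Linear(12 * 4 * 4, 64), nn.Sigmoid())
        self.out = nn.Linear(64, 2)                  # output layer: pixel coordinate

    def forward(self, x):                            # x: (batch, 1, 16, 16)
        return self.out(self.fc(self.stage2(self.stage1(x))))
```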
S202: set the convolution-sampling layer parameters.
In a convolutional layer l, the i-th feature of the input layer or of the previous layer, $x_i^{l-1}$, is convolved with a learnable convolution kernel and passed through an activation function to obtain the j-th output feature $x_j^l$. Each output $x_j^l$ may combine the convolutions of multiple inputs $x_i^{l-1}$. The specific calculation is:

$$x_j^l = f\Big(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l\Big)$$

where i and j index the feature maps of the previous layer and the current layer respectively, $M_j$ denotes the chosen subset of input features, $k_{ij}^l$ is the convolution kernel relating the j-th feature of layer l to the i-th feature of layer l-1, $b_j^l$ is the additive bias associated with the j-th feature of layer l, * denotes the convolution operation, and the activation function f(·) is the sigmoid function, which squashes the output to [0, 1].
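A direct numpy rendering of this formula is sketched below (scipy is used for the 2-D convolution; the data layout is an assumption):

```python
# Sketch of the convolution-layer forward pass: each output map j sums 'valid'
# convolutions of its chosen input maps i with kernels k_ij, adds the bias b_j,
# and is squashed to [0, 1] by the sigmoid f.
import numpy as np
from scipy.signal import convolve2d

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_layer_forward(inputs, kernels, biases, input_sets):
    """inputs[i]: 2-D map x_i; kernels[j][i]: kernel k_ij; biases[j]: b_j;
    input_sets[j]: the chosen subset M_j of input indices."""
    return [sigmoid(sum(convolve2d(inputs[i], kernels[j][i], mode='valid')
                        for i in M_j) + biases[j])
            for j, M_j in enumerate(input_sets)]
```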
The convolution is followed by a sub-sampling step. For sub-sampling, N input features yield N output features; each output feature simply becomes smaller in size. The computation is:

$$x_j^l = f\big(\beta_j^l \,\mathrm{down}(x_j^{l-1}) + b_j^l\big)$$

where down(·) denotes a down-sampling function, preferably the Max-Pooling mode with a pooling kernel size of 2×2 and a stride of 2.
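A minimal numpy sketch of this preferred Max-Pooling form of down(·) (even input height and width assumed):

```python
# 2x2 max-pooling with stride 2: each non-overlapping window is reduced to its maximum.
import numpy as np

def max_pool_2x2(x):
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```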
In the deep convolutional neural network, each feature extraction layer (sub-sampling layer) is followed by a computation layer (convolutional layer) used for local averaging and secondary extraction; this distinctive two-stage feature extraction structure gives the network a high tolerance to distortion of the input samples during recognition.
S203: train the deep convolutional neural network with the training data set.
A deep convolutional neural network is essentially a mapping from input to output. It can learn a large number of mappings between inputs and outputs without requiring any precise mathematical expression linking them; as long as the network is trained with known patterns, it acquires the ability to map inputs to outputs. Before training starts, all weights should be randomly initialized.
The training of the deep convolutional neural network can be divided into the following two stages:
First stage: the forward propagation stage
For the given training data set, every training sample is input to the input layer of the deep convolutional neural network, passed successively through the transformations (convolution-sampling layer 1, convolution-sampling layer 2, fully connected layer 1, fully connected layer 2), and sent to the output layer; the actual output corresponding to every input is computed, and the error between the actual output and the ideal output is calculated. The squared-error cost function is used here, and the error of the n-th training sample is expressed as:

$$E^n = \frac{1}{2}\sum_{k=1}^{K}\left(y_k^n - o_k^n\right)^2 = \frac{1}{2}\left\|\mathbf{y}^n - \mathbf{o}^n\right\|_2^2$$

where K is the dimension of the output data, $y_k^n$ is the k-th dimension of the ideal output corresponding to the n-th training sample, and $o_k^n$ is the k-th output of the network for the n-th training sample.
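In code, this cost for one training sample is a one-liner (a sketch; vector inputs assumed):

```python
# Squared-error cost E^n: half the squared L2 distance between the ideal
# output y_n and the actual network output o_n.
import numpy as np

def squared_error(y_n, o_n):
    diff = np.asarray(y_n) - np.asarray(o_n)
    return 0.5 * np.dot(diff, diff)
```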
Second stage: the back-propagation stage
The back-propagation stage adjusts the weight matrix of each layer of the network by propagating the above squared error backward. The error propagated back can be regarded as the sensitivity δ of each neuron's bias, and the backward error-propagation formula of a convolutional layer is:

$$\delta_n^l = \beta_n^{l+1}\left(f'(u_n^l) \circ \mathrm{up}(\delta_n^{l+1})\right), \qquad u_n^l = W_n^l x_n^{l-1} + b_n^l$$

where ∘ denotes element-wise multiplication, l is the layer index, m and n index the feature maps of the previous layer and the current layer respectively, $\delta_n^l$ is the sensitivity of the n-th neural node of layer l, $\beta_n^{l+1}$ is the weight of the down-sampling layer (a trainable constant), up(·) denotes the up-sampling operation, f′ is the derivative of the activation function, $W_n^l$ and $b_n^l$ are the weight and bias associated with the n-th feature of layer l, and $x_n^{l-1}$ is the n-th feature of layer l-1. The backward error-propagation formula of a pooling layer is computed as:

$$\delta_m^l = \sum_{n=1}^{M} \delta_n^{l+1} * k_{mn}^{l+1}$$

where M is the set of input features, $k_{mn}^{l+1}$ is the convolution kernel relating the n-th feature of layer l+1 to the m-th feature of layer l, $\delta_n^{l+1}$ is the sensitivity of the n-th neural node of layer l+1, and $\delta_m^l$ is the sensitivity of the m-th neural node of layer l.
Finally, the weights of each neuron are updated with the δ rule: for a given neuron, its input is obtained and then scaled by the δ of this neuron. Stated in vector form, for layer l the derivative of the error with respect to the layer's weights (combined into a matrix) is the product of the layer's input (equal to the output of the previous layer) and the layer's sensitivity (the δ of each neuron of the layer combined into a vector). The partial derivatives with respect to the bias and the convolution kernel are computed as:

$$\frac{\partial E}{\partial b_n} = \sum_{u,v}\left(\delta_n^l\right)_{uv}, \qquad \frac{\partial E}{\partial k_{mn}^l} = \sum_{u,v}\left(\delta_n^l\right)_{uv}\left(p_m^{l-1}\right)_{uv}$$

where E is the error cost function, $p_m^{l-1}$ is the patch of $x_m^{l-1}$ used at each position when computing the convolution $x_m^{l-1} * k_{mn}^l$, and u, v index the elements of the sensitivity matrix $\delta_n^l$. The convolution kernels and biases are updated using these partial derivatives, as sketched below.
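A numpy sketch of these two partial derivatives for a single feature pair (m, n) follows; for a layer computed with 'valid' convolutions, the kernel gradient is the valid correlation of the previous layer's feature map with the sensitivity map:

```python
# Gradients of the bias b_n and kernel k_mn from the sensitivity map delta_n.
import numpy as np
from scipy.signal import correlate2d

def layer_gradients(delta_n, prev_feature_m):
    grad_b = delta_n.sum()                                       # dE/db_n
    grad_k = correlate2d(prev_feature_m, delta_n, mode='valid')  # dE/dk_mn
    return grad_b, grad_k
```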
Using the training data set obtained in step S1, the deep convolutional neural network is trained with a Hinge loss function and stochastic gradient descent; training is complete when the loss function of the whole deep convolutional neural network settles near a locally optimal solution, where the locally optimal solution is set manually in advance.
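A hedged sketch of such a training loop is given below; the loss function is left pluggable (the patent names a Hinge loss), `target_loss` plays the role of the manually preset locally optimal solution, and all other names are illustrative:

```python
# Stochastic gradient descent until the mean epoch loss settles at or below
# the manually chosen target, as described above.
import torch

def train(model, loader, loss_fn, target_loss, lr=0.01, max_epochs=100):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(max_epochs):
        running = 0.0
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
            running += loss.item()
        if running / len(loader) <= target_loss:  # loss near the preset optimum
            break
    return model
```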
S3: based on the test data set and the trained neural network model, obtain the output of the test data set.
The test samples are input to the trained neural network model, which computes the coordinate values of the corresponding pixels under the camera, and the error between the output values and the actual values is then calculated. The bare pixel distance between an output value and the actual value cannot accurately measure the calibration error, so the present invention takes the ratio of the distance D between output value and actual value to the size of the whole calibration plane as the relative error e, and finally averages all errors to obtain the evaluation value Avg of the camera calibration result. The computation is:

$$D = \sqrt{(x - x_r)^2 + (y - y_r)^2}, \qquad e = D \Big/ \sqrt{X^2 + Y^2}, \qquad Avg = \frac{1}{N}\sum_{i=1}^{N} e_i$$

where N is the number of pixels, (x, y) is the output pixel coordinate computed by the neural network, $(x_r, y_r)$ is the true pixel coordinate, and X and Y are the dimensions of the calibration plane.
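A numpy rendering of this metric (a sketch; (N, 2) coordinate arrays assumed):

```python
# D: per-point distance between predicted and true pixel coordinates;
# e: D normalised by the calibration-plane diagonal sqrt(X^2 + Y^2);
# Avg: mean relative error, the evaluation value of the calibration result.
import numpy as np

def calibration_score(pred, truth, plane_x, plane_y):
    D = np.linalg.norm(pred - truth, axis=1)
    e = D / np.hypot(plane_x, plane_y)
    return e.mean()
```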
The present invention first trains a neural network on the calibrated fisheye intrinsic parameters and distortion coefficients together with the world coordinates of the feature points of a given calibration board and their corresponding pixel coordinates under the camera. A neural network has a powerful nonlinear mapping capability: through training it can represent arbitrarily complex mathematical models, satisfying the nonlinear-mapping requirement of the camera model in wide-field-of-view camera calibration such as with fisheye lenses. There is thus no need to build a complicated nonlinear distortion model, and the accuracy of camera calibration results can be evaluated objectively and precisely.
The above discloses only a preferred embodiment of the invention, which of course cannot be used to limit the scope of its rights; equivalent variations made according to the claims of the invention therefore still fall within the scope covered by the invention.

Claims (3)

1. A method for evaluating a fisheye camera calibration algorithm, characterised by comprising the following steps:
S1: obtain the intrinsic parameter matrix and distortion coefficients of the calibrated camera; first the camera is calibrated with the camera calibration algorithm to be evaluated, yielding the intrinsic parameter matrix and distortion coefficients of the calibrated camera; then the position coordinates P of the feature points on the calibration board and the intrinsic parameters M and distortion coefficients K of the camera obtained by the calibration algorithm are combined into a training data set {P, M, K};
S2: train a neural network model, the neural network model being non-fully connected and the connection weights between some neurons in the same layer being shared; S2 comprises S201, S202 and S203;
S201: build a neural network model; step S201 is specifically: the training data set {P, M, K} obtained in step S1 is used as the network input to build a neural network model; the network has five layers, namely an input layer, a first convolution-sampling layer, a second convolution-sampling layer, a fully connected layer and an output layer; in the first convolution-sampling layer the input is first convolved with the different convolution kernels and learnable biases set for this layer, producing several features after convolution; the features are then subjected to feature-value summation, weighting and biasing at the set pooling scale, and the output of this layer is finally obtained through a Sigmoid function; the second convolution-sampling layer performs the same operations as the first, differing only in the convolution kernels, pooling scale and biases used by the two layers; the outputs of the two convolution-sampling layers are feature maps; the fully connected layer forward-propagates the features of the convolution-sampling layers to output a feature vector, and can also carry out the back-propagation operation; in the output layer, the input feature vector is mapped to the output specified by the size of the output label;
S202: set the convolution-sampling layer parameters; step S202 is specifically: in a convolutional layer l, the i-th feature of the input layer or of the previous layer, $x_i^{l-1}$, is convolved with a learnable convolution kernel, and the j-th output feature $x_j^l$ is then obtained through an activation function; each output $x_j^l$ may combine the convolutions of multiple inputs $x_i^{l-1}$; the specific calculation is:

$$x_j^l = f\Big(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l\Big)$$

where i and j index the feature maps of the previous layer and the current layer respectively, $M_j$ denotes the chosen subset of input features, $k_{ij}^l$ is the convolution kernel relating the j-th feature of layer l to the i-th feature of layer l-1, $b_j^l$ is the additive bias associated with the j-th feature of layer l, * denotes the convolution operation, and the activation function f(·) is the sigmoid function, which squashes the output to [0, 1]; the convolution is followed by a sub-sampling step, computed as:

$$x_j^l = f\big(\beta_j^l \,\mathrm{down}(x_j^{l-1}) + b_j^l\big)$$

where down(·) denotes a down-sampling function;
S203: train the deep convolutional neural network with the training data set;
S3: based on the test data set and the trained neural network model, obtain the output of the test data set;
step S3 is specifically: the test samples are input to the trained neural network model, which computes the coordinate values of the pixels under the camera; the error between the output values and the actual values is then calculated as:

$$D = \sqrt{(x - x_r)^2 + (y - y_r)^2}$$

$$e = D \Big/ \sqrt{X^2 + Y^2}$$

$$Avg = \frac{1}{N}\sum_{i=1}^{N} e_i$$

where D is the distance between the output value and the actual value, e is the relative error, N is the number of pixels, (x, y) is the output pixel coordinate computed by the neural network, $(x_r, y_r)$ is the true pixel coordinate, X and Y are the dimensions of the calibration plane, and Avg is the evaluation value of the camera calibration result.
2. The method for evaluating a fisheye camera calibration algorithm according to claim 1, characterised in that the down-sampling function of the sub-sampling uses the Max-Pooling mode, with a pooling kernel size of 2×2 and a stride of 2.
3. The method for evaluating a fisheye camera calibration algorithm according to claim 1, characterised in that step S203 can be divided into the following two stages:
First stage: the forward propagation stage
For the given training data set, every training sample is input to the input layer of the deep convolutional neural network, passed successively through the transformations, and sent to the output layer; the actual output corresponding to every input is computed, and the error between the actual output and the ideal output is calculated using the squared-error cost function; the error of the n-th training sample is expressed as:

$$E^n = \frac{1}{2}\sum_{k=1}^{K}\left(y_k^n - o_k^n\right)^2 = \frac{1}{2}\left\|\mathbf{y}^n - \mathbf{o}^n\right\|_2^2$$

where K is the dimension of the output data, $y_k^n$ is the k-th dimension of the ideal output corresponding to the n-th training sample, and $o_k^n$ is the k-th output of the network for the n-th training sample;
Second stage: the back-propagation stage
The error propagated back is the sensitivity δ of each neuron's bias, and the backward error-propagation formula of a convolutional layer is:

$$\delta_n^l = \beta_n^{l+1}\left(f'(u_n^l) \circ \mathrm{up}(\delta_n^{l+1})\right), \qquad u_n^l = W_n^l x_n^{l-1} + b_n^l$$

where ∘ denotes element-wise multiplication, l is the layer index, m and n index the feature maps of the previous layer and the current layer respectively, $\delta_n^l$ is the sensitivity of the n-th neural node of layer l, $\beta_n^{l+1}$ is the weight of the down-sampling layer (a trainable constant), up(·) denotes the up-sampling operation, f′ is the derivative of the activation function, $W_n^l$ and $b_n^l$ are the weight and bias associated with the n-th feature of layer l, and $x_n^{l-1}$ is the n-th feature of layer l-1;
The backward error-propagation formula of a pooling layer is computed as:

$$\delta_m^l = \sum_{n=1}^{M} \delta_n^{l+1} * k_{mn}^{l+1}$$

where M is the set of input features, $k_{mn}^{l+1}$ is the convolution kernel relating the n-th feature of layer l+1 to the m-th feature of layer l, $\delta_n^{l+1}$ is the sensitivity of the n-th neural node of layer l+1, and $\delta_m^l$ is the sensitivity of the m-th neural node of layer l;
Finally, the weights of each neuron are updated with the δ rule; the partial derivatives with respect to the bias and the convolution kernel are computed as:

$$\frac{\partial E}{\partial b_n} = \sum_{u,v}\left(\delta_n^l\right)_{uv}$$

$$\frac{\partial E}{\partial k_{mn}^l} = \sum_{u,v}\left(\delta_n^l\right)_{uv}\left(p_m^{l-1}\right)_{uv}$$

where E is the error cost function, $p_m^{l-1}$ is the patch of $x_m^{l-1}$ used at each position when computing the convolution, and u, v index the elements of the sensitivity matrix $\delta_n^l$; the convolution kernels and biases are updated using these partial derivatives.
CN201710192325.8A 2017-03-28 2017-03-28 Method for evaluating a fisheye camera calibration algorithm Pending CN106960456A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710192325.8A CN106960456A (en) 2017-03-28 2017-03-28 Method for evaluating a fisheye camera calibration algorithm


Publications (1)

Publication Number Publication Date
CN106960456A true CN106960456A (en) 2017-07-18

Family

ID=59470607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710192325.8A Pending CN106960456A (en) 2017-03-28 2017-03-28 Method for evaluating a fisheye camera calibration algorithm

Country Status (1)

Country Link
CN (1) CN106960456A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127789A (en) * 2016-07-04 2016-11-16 湖南科技大学 Stereoscopic vision scaling method in conjunction with neutral net Yu virtual target
CN106373160A (en) * 2016-08-31 2017-02-01 清华大学 Active camera target positioning method based on depth reinforcement learning
CN106530284A (en) * 2016-10-21 2017-03-22 广州视源电子科技股份有限公司 Solder joint type detection method and apparatus based on image identification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI WEI: "Research and Application of Deep Learning in Image Recognition", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977605A (en) * 2017-11-08 2018-05-01 清华大学 Ocular Boundary characteristic extraction method and device based on deep learning
CN110969657A (en) * 2018-09-29 2020-04-07 杭州海康威视数字技术股份有限公司 Gun and ball coordinate association method and device, electronic equipment and storage medium
CN110969657B (en) * 2018-09-29 2023-11-03 杭州海康威视数字技术股份有限公司 Gun ball coordinate association method and device, electronic equipment and storage medium
DE102018132649A1 (en) 2018-12-18 2020-06-18 Connaught Electronics Ltd. Method for calibrating a detection area of a camera system using an artificial neural network; Control unit; Driver assistance system and computer program product
CN109859263A (en) * 2019-01-26 2019-06-07 中北大学 One kind being based on fish-eye wide viewing angle localization method
CN110908919A (en) * 2019-12-02 2020-03-24 上海市软件评测中心有限公司 Response test system based on artificial intelligence and application thereof
CN111275768A (en) * 2019-12-11 2020-06-12 深圳市德赛微电子技术有限公司 Lens calibration method and system based on convolutional neural network
CN111027522B (en) * 2019-12-30 2023-09-01 华通科技有限公司 Bird detection positioning system based on deep learning
CN111027522A (en) * 2019-12-30 2020-04-17 华通科技有限公司 Bird detection positioning system based on deep learning
CN112907462A (en) * 2021-01-28 2021-06-04 黑芝麻智能科技(上海)有限公司 Distortion correction method and system for ultra-wide-angle camera device and shooting device comprising distortion correction system
CN113033777A (en) * 2021-03-16 2021-06-25 同济大学 Vehicle-mounted atmosphere lamp chromaticity calibration method based on neural network calibration model
CN113033777B (en) * 2021-03-16 2022-10-14 同济大学 Vehicle-mounted atmosphere lamp chromaticity calibration method based on neural network calibration model
CN114241031B (en) * 2021-12-22 2024-05-10 华南农业大学 Fish body ruler measurement and weight prediction method and device based on double-view fusion
CN114241031A (en) * 2021-12-22 2022-03-25 华南农业大学 Fish body ruler measurement and weight prediction method and device based on double-view fusion
CN116625409A (en) * 2023-07-14 2023-08-22 享刻智能技术(北京)有限公司 Dynamic positioning performance evaluation method, device and system
CN116625409B (en) * 2023-07-14 2023-10-20 享刻智能技术(北京)有限公司 Dynamic positioning performance evaluation method, device and system
CN116863429A (en) * 2023-07-26 2023-10-10 小米汽车科技有限公司 Training method of detection model, and determination method and device of exercisable area
CN116863429B (en) * 2023-07-26 2024-05-31 小米汽车科技有限公司 Training method of detection model, and determination method and device of exercisable area
CN117495741B (en) * 2023-12-29 2024-04-12 成都货安计量技术中心有限公司 Distortion restoration method based on large convolution contrast learning
CN117495741A (en) * 2023-12-29 2024-02-02 成都货安计量技术中心有限公司 Distortion restoration method based on large convolution contrast learning
CN118037863A (en) * 2024-04-11 2024-05-14 四川大学 Neural network optimization automatic zooming camera internal parameter calibration method based on visual field constraint


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170718