CN114666573A - Light field camera calibration method and system

Publication number: CN114666573A
Application number: CN202210293143.0A
Authority: CN (China)
Legal status: Pending
Original language: Chinese (zh)
Inventor: 温建伟 (the other inventors requested that their names not be disclosed)
Original and current assignee: Beijing Zhuohe Technology Co Ltd
Prior art keywords: image data, calibration, error, similarity, data

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 - Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002 - Diagnosis, testing or measuring for television cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning


Abstract

The application provides a light field camera calibration method and system. The method includes: acquiring training data, the training data comprising standard image data and error image data; training a calibration model based on the distance error, offset error, and angle error between the standard image data and the error image data; acquiring image data to be processed through an imaging camera in a light field camera array; and inputting the image data to be processed into the trained calibration model to generate calibration data for the imaging camera. The calibration data generated by this method is produced efficiently and conveniently and has high accuracy and generality.

Description

Light field camera calibration method and system
Technical Field
The present application relates to the field of image processing, and more particularly, to a method and system for calibrating a light field camera.
Background
Light field imaging technology records both the spatial and angular information of light, breaking through the limitations of conventional lens imaging. With the development of modern science and technology, light field cameras are used by more and more enterprises and individuals; they can capture clear photographs even in low light and under high-speed motion. The light field data produced by a light field camera enable computational imaging techniques such as digital refocusing, depth-of-field extension, scene depth estimation, and three-dimensional scene reconstruction, and are widely applied in the fields of computer vision and computational imaging. Error analysis of the light field imaging system is an important prerequisite for light field data calibration and for realizing these computational imaging techniques.
However, existing error analysis for light field cameras considers only errors in the image structure and ignores errors arising from contour features and direction features, so the accuracy of the calibration result is low.
Disclosure of Invention
The embodiments of the invention aim to provide a light field camera calibration method and system that generate calibration data efficiently and conveniently, with high accuracy and generality. The specific technical scheme is as follows:
In a first aspect of the embodiments of the present invention, a light field camera calibration method is provided, including: acquiring training data, the training data comprising standard image data and error image data; training a calibration model based on the distance error, offset error, and angle error between the standard image data and the error image data; acquiring image data to be processed through an imaging camera in a light field camera array; and inputting the image data to be processed into the trained calibration model to generate calibration data for the imaging camera. The calibration data generated by this method is produced efficiently and conveniently and has high accuracy and generality.
Optionally, training the calibration model based on the distance error, offset error, and angle error between the standard image data and the error image data includes: calculating a contour calibration value and a structure calibration value between the standard image data and the error image data; and determining the distance error, offset error, and angle error based on the contour calibration value and the structure calibration value.
Optionally, the calculating a contour calibration value between the standard image data and the error image data includes: calculating the relative contour feature similarity and the direction feature similarity between the standard image data and the error image data; calculating a contour standard value based on the relative contour feature similarity and the direction feature similarity; and calculating the contour calibration value based on the contour standard value.
Optionally, the calculating the relative contour feature similarity and the direction feature similarity between the standard image data and the error image data includes: acquiring the sub-contour feature g(i, j) and the sub-direction feature α(i, j) of each pixel in the standard image data S (N × M) and the error image data E, where 1 < i < N and 1 < j < M; calculating the contour feature G_SE(i, j) and the direction feature A_SE(i, j) of the standard image data S relative to the error image data E; and calculating the contour feature similarity and the direction feature similarity by the following formulas:

[contour feature similarity formula: image not reproduced in the source]

[direction feature similarity formula: image not reproduced in the source]

where the two quantities denote the contour feature similarity and the direction feature similarity, respectively, G_SE(i, j) denotes the contour feature, and Γ_g, κ_g, σ_g, Γ_α, κ_α, and σ_α denote tangent parameters.
Optionally, the calculating a contour standard value based on the relative contour feature similarity and the direction feature similarity includes calculating the contour standard value by the following formula:

[contour standard value formula: image not reproduced in the source]

where the quantities in the formula denote the contour feature similarity and the direction feature similarity.
Optionally, the calculating a structural calibration value between the standard image data and the error image data includes: calculating brightness similarity, contrast similarity and structural similarity between the standard image data and the error image data; and calculating a structural calibration value based on the brightness similarity, the contrast similarity and the structural similarity.
Optionally, the calculating the brightness similarity, the contrast similarity, and the structural similarity between the standard image data and the error image data includes: calculating the gray-scale mean μ_S and gray-scale standard deviation σ_S of the standard image data S; calculating the gray-scale mean μ_E and gray-scale standard deviation σ_E of the error image data E; calculating the gray-scale covariance σ_SE between the standard image data S and the error image data E; and calculating the brightness similarity, the contrast similarity, and the structural similarity by the following formulas:

l(S, E) = (2μ_S μ_E + C_1) / (μ_S^2 + μ_E^2 + C_1)

c(S, E) = (2σ_S σ_E + C_2) / (σ_S^2 + σ_E^2 + C_2)

s(S, E) = (σ_SE + C_3) / (σ_S σ_E + C_3)

where l(S, E) denotes the brightness similarity, c(S, E) denotes the contrast similarity, s(S, E) denotes the structural similarity, and C_1, C_2, and C_3 are bias terms.
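To make the three similarity terms concrete, the following Python/NumPy sketch computes the brightness, contrast, and structure similarities between a standard image S and an error image E over the whole image; the particular bias constants C1, C2, C3 are illustrative assumptions, not values specified by the application:

```python
import numpy as np

def similarity_terms(S, E, C1=6.5025, C2=58.5225, C3=29.26125):
    # S, E: gray-scale images of identical shape.
    # C1, C2, C3 are small bias terms that keep the ratios stable;
    # the defaults here are illustrative assumptions.
    S = np.asarray(S, dtype=np.float64)
    E = np.asarray(E, dtype=np.float64)
    mu_S, mu_E = S.mean(), E.mean()            # gray-scale means
    sd_S, sd_E = S.std(), E.std()              # gray-scale standard deviations
    cov_SE = ((S - mu_S) * (E - mu_E)).mean()  # gray-scale covariance

    l = (2 * mu_S * mu_E + C1) / (mu_S**2 + mu_E**2 + C1)  # brightness similarity
    c = (2 * sd_S * sd_E + C2) / (sd_S**2 + sd_E**2 + C2)  # contrast similarity
    s = (cov_SE + C3) / (sd_S * sd_E + C3)                 # structure similarity
    return l, c, s
```

For identical images all three terms equal 1, matching the observation that better brightness, contrast, and structure similarity indicates higher image quality.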
Optionally, the inputting the image data to be processed into a trained calibration model to generate calibration data of the imaging camera includes: inputting the image data to be processed into a trained calibration model to generate distance calibration data, offset calibration data and angle calibration data; the distance calibration data, the offset calibration data, and the angle calibration data constitute calibration data of the imaging camera.
In a further aspect of embodiments of the present invention, there is provided a light field camera calibration system, comprising: the training data acquisition module is used for acquiring training data, wherein the training data comprises standard image data and error image data; a calibration model training module for training a calibration model based on a distance error, an offset error, and an angle error between the standard image data and the error image data; the image to be processed acquisition module is used for acquiring image data to be processed through an imaging camera in the light field camera array; and the calibration data generation module is used for inputting the image data to be processed into the calibration model and generating the calibration data of the imaging camera.
Optionally, the calibration model training module is specifically configured to: calculate a contour calibration value and a structure calibration value between the standard image data and the error image data; and determine the distance error, offset error, and angle error based on the contour calibration value and the structure calibration value.
Optionally, the calibration model training module is further configured to: calculate the relative contour feature similarity and the direction feature similarity between the standard image data and the error image data; calculate a contour standard value based on the relative contour feature similarity and the direction feature similarity; and calculate the contour calibration value based on the contour standard value.
Optionally, the calibration model training module is further configured to: acquire the sub-contour feature g(i, j) and the sub-direction feature α(i, j) of each pixel in the standard image data S (N × M) and the error image data E, where 1 < i < N and 1 < j < M; calculate the contour feature G_SE(i, j) and the direction feature A_SE(i, j) of the standard image data S relative to the error image data E; and calculate the contour feature similarity and the direction feature similarity by the following formulas:

[contour feature similarity formula: image not reproduced in the source]

[direction feature similarity formula: image not reproduced in the source]

where the two quantities denote the contour feature similarity and the direction feature similarity, respectively, G_SE(i, j) denotes the contour feature, and Γ_g, κ_g, σ_g, Γ_α, κ_α, and σ_α denote tangent parameters.
Optionally, the calibration model training module is further configured to calculate the contour standard value by the following formula:

[contour standard value formula: image not reproduced in the source]

where the quantities in the formula denote the contour feature similarity and the direction feature similarity.
Optionally, the calibration model training module is further configured to: calculating brightness similarity, contrast similarity and structure similarity between the standard image data and the error image data; and calculating a structural calibration value based on the brightness similarity, the contrast similarity and the structural similarity.
Optionally, the calibration model training module is further configured to: calculate the gray-scale mean μ_S and gray-scale standard deviation σ_S of the standard image data S; calculate the gray-scale mean μ_E and gray-scale standard deviation σ_E of the error image data E; calculate the gray-scale covariance σ_SE between the standard image data S and the error image data E; and calculate the brightness similarity, the contrast similarity, and the structural similarity by the following formulas:

l(S, E) = (2μ_S μ_E + C_1) / (μ_S^2 + μ_E^2 + C_1)

c(S, E) = (2σ_S σ_E + C_2) / (σ_S^2 + σ_E^2 + C_2)

s(S, E) = (σ_SE + C_3) / (σ_S σ_E + C_3)

where l(S, E) denotes the brightness similarity, c(S, E) denotes the contrast similarity, s(S, E) denotes the structural similarity, and C_1, C_2, and C_3 are bias terms.
Optionally, the calibration data generating module is specifically configured to: inputting the image data to be processed into a trained calibration model to generate distance calibration data, offset calibration data and angle calibration data; the distance calibration data, the offset calibration data, and the angle calibration data constitute calibration data of the imaging camera.
The advantageous effects are:
(1) Training data comprising standard image data and error image data is acquired; a calibration model is trained based on the distance error, offset error, and angle error between the standard image data and the error image data; image data to be processed is acquired through an imaging camera in a light field camera array; and the image data to be processed is input into the trained calibration model to generate calibration data for the imaging camera. The calibration data generated by the method is efficient and convenient.
(2) A contour calibration value and a structure calibration value are introduced to determine the distance error, offset error, and angle error, improving the accuracy and generality of the calibration model.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a light field camera calibration method provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart of training a calibration model according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a light field camera calibration system according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a light field camera calibration method, which comprises the following steps: acquiring training data, wherein the training data comprises standard image data and error image data; training a calibration model based on a distance error, an offset error, and an angle error between the standard image data and the error image data; acquiring image data to be processed through an imaging camera in a light field camera array; and inputting the image data to be processed into a trained calibration model to generate calibration data of the imaging camera.
The light field camera calibration method and system can be specifically integrated in electronic equipment, and the electronic equipment can be equipment such as a terminal and a server. The terminal can be a light field camera, a vehicle-mounted camera, a mobile phone, a tablet Computer, an intelligent Bluetooth device, a notebook Computer, or a Personal Computer (PC) and other devices; the server may be a single server or a server cluster composed of a plurality of servers.
In some embodiments, the light field camera calibration method and system may also be integrated in a plurality of electronic devices, for example, the light field camera calibration method and system may be integrated in a plurality of servers, and the light field camera calibration method and system of the present application may be implemented by the plurality of servers.
It can be understood that the light field camera calibration method and system of the present embodiment may be executed on a terminal, may also be executed on a server, and may also be executed by both the terminal and the server. The above examples should not be construed as limiting the present application.
Fig. 1 shows a schematic flow chart of a light field camera calibration method provided in an embodiment of the present application, please refer to fig. 1, where the light field camera calibration method includes the following steps:
s110, training data are obtained, wherein the training data comprise standard image data and error image data.
Alternatively, if the amount of training data is insufficient, the training data may be increased using a data enhancement method.
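As one illustration of such data enhancement, simple geometric augmentations (horizontal/vertical flips and 90° rotations, chosen here purely as examples) can multiply a small training set:

```python
import numpy as np

def augment(image):
    # Produce simple geometric variants of one training image:
    # the original, horizontal flip, vertical flip, and the three
    # non-trivial 90-degree rotations. These particular
    # augmentations are illustrative choices.
    return [
        image,
        np.fliplr(image),
        np.flipud(image),
        np.rot90(image, 1),
        np.rot90(image, 2),
        np.rot90(image, 3),
    ]
```

Each source image yields six training samples; error image data for calibration would still need to be paired with the correspondingly transformed standard image data.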
And S120, training a calibration model based on the distance error, the offset error and the angle error between the standard image data and the error image data.
The distance error refers to the error in the distance from the shooting plane between the standard image data and the error image data. The offset error refers to the fact that, when the standard image data is offset, the error image data changes periodically with the angle of the offset direction from 0° to 360°; optionally, in the calculation, the error is computed separately in the horizontal and vertical directions, according to the sub-aperture image offsets of the light field camera in those two directions. The angle error is the error angle between the normal vector of the shooting plane and that of the standard image data.
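Since the offset error is evaluated separately in the horizontal and vertical directions as the offset direction sweeps 0° to 360°, splitting an offset into those two components can be sketched as follows (a hypothetical helper using a magnitude/angle representation that the application does not itself specify):

```python
import math

def decompose_offset(magnitude, angle_deg):
    # Split an offset of the given magnitude, whose direction is
    # given as an angle in degrees (0-360, measured from the
    # horizontal axis), into horizontal and vertical components.
    theta = math.radians(angle_deg % 360.0)
    offset_h = magnitude * math.cos(theta)  # horizontal component
    offset_v = magnitude * math.sin(theta)  # vertical component
    return offset_h, offset_v
```

Both components vary periodically with the angle, consistent with the 0° to 360° periodicity described above.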
Based on this, a calibration model is established, comparing the difference between the standard image data and the error image data using the distance error, the offset error, and the angle error.
And S130, acquiring image data to be processed through an imaging camera in the light field camera array.
Wherein the light field camera may be located behind the display, each of the camera modules being arranged to capture images through the display. The light field camera may also not include a display, with the camera module being located in the opposite direction from the remote presentation device controller.
Optionally, an imaging camera is included in the light field camera, which may be positioned such that when the telepresence apparatus is operated, the light field camera captures image data of a portion of the environment surrounding and/or behind the photographer. For example, the imaging camera may span a horizontal distance that is in most cases at least large enough to capture the environment from around the left and/or right side of the photographer.
Alternatively, a panoramic image of the target scene may be captured by an image sensor and the data uploaded to a server using wireless communication techniques.
It should be noted that the image data to be processed may be obtained by a real-time sampling method or an equivalent time sampling method, and is not limited specifically herein.
And S140, inputting the image data to be processed into the trained calibration model to generate calibration data of the imaging camera.
Alternatively, the image data to be processed may be first preprocessed, such as noise reduction, graying processing, and the like, and then the preprocessed image data is input into the trained calibration model.
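The preprocessing mentioned above (noise reduction and graying) can be sketched with plain NumPy; the BT.601 luminance weights and the 3 × 3 mean filter are illustrative choices, not steps mandated by the application:

```python
import numpy as np

def preprocess(rgb):
    # rgb: H x W x 3 array. First convert to gray-scale using the
    # common ITU-R BT.601 luminance weights, then denoise with a
    # simple 3x3 mean filter (an illustrative choice of denoiser,
    # with edge padding at the borders).
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    padded = np.pad(gray, 1, mode="edge")
    out = np.zeros_like(gray, dtype=np.float64)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di : 1 + di + gray.shape[0],
                          1 + dj : 1 + dj + gray.shape[1]]
    return out / 9.0
```

The resulting single-channel image would then be fed to the trained calibration model.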
In one embodiment, step S140 may specifically include the following steps:
and S141, inputting the image data to be processed into a trained calibration model, and generating distance calibration data, offset calibration data and angle calibration data.
And S142, the distance calibration data, the offset calibration data and the angle calibration data form calibration data of the imaging camera.
Therefore, the calibration data can be generated efficiently and conveniently.
Fig. 2 is a schematic flowchart of a calibration model training process provided in the embodiment of the present application, which specifically includes the following steps:
s200, training data are obtained, wherein the training data comprise standard image data and error image data.
S210, acquiring the sub-contour feature g(i, j) and the sub-direction feature α(i, j) of each pixel in the standard image data S (N × M) and the error image data E, where 1 < i < N and 1 < j < M.
Further, the contour feature G_SE(i, j) and the direction feature A_SE(i, j) are calculated by the following formulas:

[contour feature formula: image not reproduced in the source]

[direction feature formula: image not reproduced in the source]

where g_S(i, j) and g_E(i, j) denote the sub-contour features of the standard image data and the error image data, respectively, and α_S(i, j) and α_E(i, j) denote the corresponding sub-direction features.
S220, calculating the contour feature G_SE(i, j) and the direction feature A_SE(i, j) of the standard image data S relative to the error image data E.
Objective error analysis is generally classified into vision-based analysis, statistical-characteristic analysis, and information-content analysis. Vision-based methods evaluate image quality by simulating the perception process of the human visual system, but their accuracy is low; statistical-characteristic analysis ignores the correlation between the fused image and the base image; information-content analysis mainly analyzes the information richness of the fused image through gray-scale processing and distribution analysis. Therefore, the present application introduces contour features and direction features to obtain an accurate error analysis result.
In one embodiment, the contour feature similarity and the direction feature similarity may be calculated by the following formulas:

[contour feature similarity formula: image not reproduced in the source]

[direction feature similarity formula: image not reproduced in the source]

where the two quantities denote the contour feature similarity and the direction feature similarity, respectively, G_SE(i, j) denotes the contour feature, and Γ_g, κ_g, σ_g, Γ_α, κ_α, and σ_α denote tangent parameters.
And S230, calculating a contour standard value based on the relative contour feature similarity and the direction feature similarity.
In one embodiment, the contour standard value may be calculated by the following formula:

[contour standard value formula: image not reproduced in the source]

where the quantities in the formula denote the contour feature similarity and the direction feature similarity.
And S240, calculating a contour calibration value based on the contour standard value.
In one embodiment, the contour calibration value may be calculated by the following formula:

[contour calibration value formula: image not reproduced in the source]

where Q_SE(i, j) denotes the contour standard value and ω_S(i, j) denotes the weight of Q_SE(i, j).
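The pooling step above, in which per-pixel contour standard values Q_SE(i, j) are combined by weights ω_S(i, j) into a single contour calibration value, can be sketched as a weighted average; since the exact formula image is not legible in the source, the normalized weighted mean below is an assumption:

```python
import numpy as np

def contour_calibration_value(Q, w):
    # Q: per-pixel contour standard values Q_SE(i, j).
    # w: per-pixel weights omega_S(i, j).
    # Pool them into one scalar by a normalized weighted average
    # (the normalization is an assumed choice).
    Q = np.asarray(Q, dtype=np.float64)
    w = np.asarray(w, dtype=np.float64)
    return float((Q * w).sum() / w.sum())
```

Pixels with larger weights, e.g. stronger contours, then contribute more to the final calibration value.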
S250, calculating the gray-scale mean, gray-scale standard deviation, and gray-scale covariance of the standard image data and the error image data.
Specifically, calculating the gray-scale mean μ_S and gray-scale standard deviation σ_S of the standard image data S; calculating the gray-scale mean μ_E and gray-scale standard deviation σ_E of the error image data E; and calculating the gray-scale covariance σ_SE between the standard image data S and the error image data E.
And S260, calculating brightness similarity, contrast similarity and structure similarity.
In one embodiment, the brightness similarity, the contrast similarity, and the structural similarity may be calculated by the following formulas:

l(S, E) = (2μ_S μ_E + C_1) / (μ_S^2 + μ_E^2 + C_1)

c(S, E) = (2σ_S σ_E + C_2) / (σ_S^2 + σ_E^2 + C_2)

s(S, E) = (σ_SE + C_3) / (σ_S σ_E + C_3)

where l(S, E) denotes the brightness similarity, c(S, E) denotes the contrast similarity, s(S, E) denotes the structural similarity, and C_1, C_2, and C_3 are bias terms.
It can be seen that the better the brightness, contrast and structural similarity, the higher the image quality.
And S270, calculating a structural calibration value based on the brightness similarity, the contrast similarity and the structural similarity.
Specifically, the structural calibration value is calculated by the following formula:

SSIM(S, E) = [l(S, E)]^α [c(S, E)]^β [s(S, E)]^γ

where α, β, and γ denote weight parameters and α + β + γ = 1.
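As a sketch of the structural calibration value, the following Python computes a single-window SSIM over the whole image; the equal weights α = β = γ = 1/3 (satisfying α + β + γ = 1) and the bias constants are illustrative assumptions:

```python
import numpy as np

def structure_calibration_value(S, E, alpha=1/3, beta=1/3, gamma=1/3,
                                C1=6.5025, C2=58.5225, C3=29.26125):
    # Global (single-window) SSIM between standard image S and
    # error image E; constants and weights are illustrative.
    # Note: for strongly anti-correlated images s can be negative,
    # in which case the fractional powers are undefined; this
    # sketch assumes well-correlated inputs.
    S = np.asarray(S, dtype=np.float64)
    E = np.asarray(E, dtype=np.float64)
    mu_S, mu_E = S.mean(), E.mean()
    sd_S, sd_E = S.std(), E.std()
    cov = ((S - mu_S) * (E - mu_E)).mean()
    l = (2 * mu_S * mu_E + C1) / (mu_S**2 + mu_E**2 + C1)
    c = (2 * sd_S * sd_E + C2) / (sd_S**2 + sd_E**2 + C2)
    s = (cov + C3) / (sd_S * sd_E + C3)
    # SSIM(S, E) = l^alpha * c^beta * s^gamma, alpha+beta+gamma = 1
    return (l ** alpha) * (c ** beta) * (s ** gamma)
```

An error image identical to the standard image yields a value of 1, the maximum of the measure.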
And S280, determining the distance error, the offset error and the angle error based on the contour calibration value and the structure calibration value.
Further, repeating steps S210-S280, training the calibration model.
Through the error calculation results and multiple rounds of training, the functional relationship between the various errors and the image data generated by the light field camera, i.e., the error image function, is determined.
In the embodiment, the accuracy and universality of the calibration model can be improved by introducing the contour calibration value and the structure calibration value and determining the distance error, the offset error and the angle error.
In order to implement the foregoing method embodiments, this embodiment further provides a light field camera calibration system, and fig. 3 shows a schematic structural diagram of the light field camera calibration system provided in this embodiment of the present application, where the system includes:
a training data obtaining module 310, configured to obtain training data, where the training data includes standard image data and error image data;
a calibration model training module 320 for training a calibration model based on a distance error, an offset error, and an angle error between the standard image data and the error image data;
a to-be-processed image acquisition module 330, configured to acquire to-be-processed image data through an imaging camera in the light field camera array;
a calibration data generating module 340, configured to input the image data to be processed into the calibration model, and generate calibration data of the imaging camera.
Optionally, the calibration model training module 320 is specifically configured to: calculate a contour calibration value and a structure calibration value between the standard image data and the error image data; and determine the distance error, offset error, and angle error based on the contour calibration value and the structure calibration value.
Optionally, the calibration model training module 320 is further configured to: calculate the relative contour feature similarity and the direction feature similarity between the standard image data and the error image data; calculate a contour standard value based on the relative contour feature similarity and the direction feature similarity; and calculate the contour calibration value based on the contour standard value.
Optionally, the calibration model training module 320 is further configured to: acquire the sub-contour feature g(i, j) and the sub-direction feature α(i, j) of each pixel in the standard image data S (N × M) and the error image data E, where 1 < i < N and 1 < j < M; calculate the contour feature G_SE(i, j) and the direction feature A_SE(i, j) of the standard image data S relative to the error image data E; and calculate the contour feature similarity and the direction feature similarity by the following formulas:

[contour feature similarity formula: image not reproduced in the source]

[direction feature similarity formula: image not reproduced in the source]

where the two quantities denote the contour feature similarity and the direction feature similarity, respectively, G_SE(i, j) denotes the contour feature, and Γ_g, κ_g, σ_g, Γ_α, κ_α, and σ_α denote tangent parameters.
Optionally, the calibration model training module 320 is further configured to calculate the contour standard value by the following formula:

[contour standard value formula: image not reproduced in the source]

where the quantities in the formula denote the contour feature similarity and the direction feature similarity.
Optionally, the calibration model training module is further configured to: calculating brightness similarity, contrast similarity and structure similarity between the standard image data and the error image data; and calculating a structural calibration value based on the brightness similarity, the contrast similarity and the structural similarity.
Optionally, the calibration model training module 320 is further configured to: calculate the gray-scale mean μ_S and gray-scale standard deviation σ_S of the standard image data S; calculate the gray-scale mean μ_E and gray-scale standard deviation σ_E of the error image data E; calculate the gray-scale covariance σ_SE between the standard image data S and the error image data E; and calculate the brightness similarity, the contrast similarity, and the structural similarity by the following formulas:

l(S, E) = (2μ_S μ_E + C_1) / (μ_S^2 + μ_E^2 + C_1)

c(S, E) = (2σ_S σ_E + C_2) / (σ_S^2 + σ_E^2 + C_2)

s(S, E) = (σ_SE + C_3) / (σ_S σ_E + C_3)

where l(S, E) denotes the brightness similarity, c(S, E) denotes the contrast similarity, s(S, E) denotes the structural similarity, and C_1, C_2, and C_3 are bias terms.
Optionally, the calibration data generating module 340 is specifically configured to: inputting the image data to be processed into a trained calibration model to generate distance calibration data, offset calibration data and angle calibration data; the distance calibration data, the offset calibration data, and the angle calibration data constitute calibration data for the imaging camera.
The calibration data generated in this way is efficient and convenient to produce, highly accurate, and broadly applicable.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the modules/units/sub-units/components in the above-described apparatus may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In summary, the present application provides a light field camera calibration method and system, where the method includes: acquiring training data, wherein the training data comprises standard image data and error image data; training a calibration model based on a distance error, an offset error and an angle error between the standard image data and the error image data; acquiring image data to be processed through an imaging camera in a light field camera array; and inputting the image data to be processed into the trained calibration model to generate calibration data of the imaging camera. The calibration data generated by the method is efficient and convenient to produce, highly accurate, and broadly applicable.
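The four steps just summarized can be sketched end to end as follows. This is a toy stand-in under explicit assumptions: the application does not specify the model class, so a simple least-squares linear map from image feature vectors to (distance, offset, angle) calibration values is used purely for illustration, and all class and function names here are invented for the sketch.

```python
import numpy as np

class CalibrationModel:
    """Toy stand-in for the calibration model: a linear map from an image
    feature vector to (distance, offset, angle) calibration data."""

    def __init__(self):
        self.W = None  # learned weights, shape (n_features + 1, 3)

    def train(self, features, errors):
        # Fit features -> (distance, offset, angle) errors by least squares,
        # standing in for "training a calibration model based on the distance,
        # offset and angle errors between standard and error image data".
        X = np.hstack([features, np.ones((len(features), 1))])  # bias column
        self.W, *_ = np.linalg.lstsq(X, errors, rcond=None)

    def calibrate(self, feature):
        # Map one feature vector from a to-be-processed image to calibration data.
        x = np.append(feature, 1.0)
        distance, offset, angle = x @ self.W
        return {"distance": distance, "offset": offset, "angle": angle}
```

In the described flow, such a model would be trained once on standard/error image pairs and then applied to every frame captured by an imaging camera in the light field array.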
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some communication interfaces, indirect coupling or communication connection between devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate the technical solutions of the present application rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope disclosed in the present application, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be covered by the protection scope of this application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A light field camera calibration method, comprising:
acquiring training data, wherein the training data comprises standard image data and error image data;
training a calibration model based on a distance error, an offset error, and an angle error between the standard image data and the error image data;
acquiring image data to be processed through an imaging camera in a light field camera array;
and inputting the image data to be processed into a trained calibration model to generate calibration data of the imaging camera.
2. The method of claim 1, wherein the distance error, the offset error, and the angle error between the standard image data and the error image data are determined by:
calculating a contour calibration value and a structure calibration value between the standard image data and the error image data;
determining the distance error, the offset error, and the angle error based on the contour calibration value and the structure calibration value.
3. The method of claim 2, wherein said calculating a contour calibration value between said standard image data and said error image data comprises:
calculating relative contour feature similarity and direction feature similarity between the standard image data and the error image data;
calculating a contour standard value based on the relative contour feature similarity and the direction feature similarity;
and calculating a contour calibration value based on the contour standard value.
4. The method of claim 3, wherein the calculating relative contour feature similarity and direction feature similarity between the standard image data and the error image data comprises:
acquiring a sub-contour feature g(i, j) and a sub-direction feature α(i, j) of each pixel in the standard image data S (of size N × M) and the error image data E, wherein 1 < i < N and 1 < j < M;
calculating a profile feature G of the standard image data S relative to the error image data ESE(i, j) and orientation feature ASE(i,j);
Calculating the similarity of the contour features and the similarity of the direction features by the following formulas:
Figure FDA0003561029870000021
Figure FDA0003561029870000022
wherein Figure FDA0003561029870000023 denotes the contour feature similarity, Figure FDA0003561029870000024 denotes the direction feature similarity, G_SE(i, j) denotes the contour feature, and Γ_g, κ_g, σ_g, Γ_α, κ_α and σ_α all denote tangent parameters.
5. The method of claim 4, wherein calculating a contour standard value based on the relative contour feature similarity and the direction feature similarity comprises:
calculating the contour standard value by the following formula:
Figure FDA0003561029870000025
wherein Figure FDA0003561029870000026 denotes the contour feature similarity and Figure FDA0003561029870000027 denotes the direction feature similarity.
6. The method of claim 2, wherein said calculating a structural calibration value between said standard image data and said error image data comprises:
calculating brightness similarity, contrast similarity and structural similarity between the standard image data and the error image data;
and calculating a structural calibration value based on the brightness similarity, the contrast similarity and the structural similarity.
7. The method of claim 6, wherein the calculating of the brightness similarity, the contrast similarity, and the structural similarity between the standard image data and the error image data comprises:
calculating a gray-level mean μ_S and a gray-level standard deviation σ_S of the standard image data S;
calculating a gray-level mean μ_E and a gray-level standard deviation σ_E of the error image data E;
calculating a gray-level covariance σ_SE between the standard image data S and the error image data E;
The brightness similarity, the contrast similarity and the structural similarity are calculated by the following formulas:
l(S, E) = (2·μ_S·μ_E + C_1) / (μ_S² + μ_E² + C_1)
C(S, E) = (2·σ_S·σ_E + C_2) / (σ_S² + σ_E² + C_2)
S(S, E) = (σ_SE + C_3) / (σ_S·σ_E + C_3)
wherein l(S, E) denotes the brightness similarity, C(S, E) denotes the contrast similarity, S(S, E) denotes the structure similarity, and C_1, C_2 and C_3 are bias terms.
8. The method of claim 1, wherein inputting the image data to be processed into a trained calibration model to generate calibration data for the imaging camera comprises:
inputting the image data to be processed into a trained calibration model to generate distance calibration data, offset calibration data and angle calibration data;
the distance calibration data, the offset calibration data, and the angle calibration data constitute calibration data for the imaging camera.
9. A light field camera calibration system, comprising:
the training data acquisition module is used for acquiring training data, wherein the training data comprises standard image data and error image data;
a calibration model training module for training a calibration model based on a distance error, an offset error, and an angle error between the standard image data and the error image data;
the image to be processed acquisition module is used for acquiring image data to be processed through an imaging camera in the light field camera array;
and the calibration data generation module is used for inputting the image data to be processed into the calibration model and generating the calibration data of the imaging camera.
10. The system of claim 9, wherein the distance error, the offset error, and the angle error between the standard image data and the error image data are determined by:
calculating a contour calibration value and a structure calibration value between the standard image data and the error image data;
determining the distance error, the offset error, and the angle error based on the contour calibration value and the structure calibration value.
CN202210293143.0A 2022-03-23 2022-03-23 Light field camera calibration method and system Pending CN114666573A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210293143.0A CN114666573A (en) 2022-03-23 2022-03-23 Light field camera calibration method and system


Publications (1)

Publication Number Publication Date
CN114666573A true CN114666573A (en) 2022-06-24

Family

ID=82031221


Country Status (1)

Country Link
CN (1) CN114666573A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009693A (en) * 2019-04-01 2019-07-12 清华大学深圳研究生院 A kind of Fast Blind scaling method of light-field camera
CN111340888A (en) * 2019-12-23 2020-06-26 首都师范大学 Light field camera calibration method and system without white image
CN111351446A (en) * 2020-01-10 2020-06-30 奕目(上海)科技有限公司 Light field camera calibration method for three-dimensional topography measurement
US20210021785A1 (en) * 2019-07-18 2021-01-21 Microsoft Technology Licensing, Llc Light field camera modules and light field camera module arrays
CN112381894A (en) * 2021-01-15 2021-02-19 清华大学 Adaptive light field imaging calibration method, device and storage medium
CN114145011A (en) * 2019-07-18 2022-03-04 微软技术许可有限责任公司 Dynamic detection and correction of light field camera array miscalibration



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220624