CN106296661A - Calibration preprocessing method applicable to a light-field camera - Google Patents

Calibration preprocessing method applicable to a light-field camera

Info

Publication number
CN106296661A
Authority
CN
China
Prior art keywords
lenticule
subimage
point
light-field camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610613216.4A
Other languages
Chinese (zh)
Other versions
CN106296661B (en)
Inventor
王好谦
吴驹东
王兴政
张永兵
戴琼海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Weilai Media Technology Research Institute
Shenzhen Graduate School Tsinghua University
Original Assignee
Shenzhen Weilai Media Technology Research Institute
Shenzhen Graduate School Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Weilai Media Technology Research Institute and Shenzhen Graduate School Tsinghua University
Priority to CN201610613216.4A
Publication of CN106296661A
Priority to PCT/CN2017/083303
Application granted
Publication of CN106296661B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The present invention discloses a calibration preprocessing method applicable to a light-field camera. The method comprises the following steps: S1: preprocess the white image to obtain the sub-image under each microlens; S2: given a chessboard image, AND the chessboard image with the white image preprocessed in step S1 to obtain the sub-image under a specified microlens; S3: perform corner detection on each sub-image obtained in step S2. In the white-image processing, the microlens centre positions are first determined with the halo method and then refined by means such as straight-line fitting, the sub-image centroid and the surrounding microlens centre points. In the corner detection, Hough line detection is used, the corner positions are computed from the line intersections, and the corners are then iteratively refined with a self-defined index to obtain more accurate coordinates.

Description

Calibration preprocessing method applicable to a light-field camera
Technical field
The invention belongs to the field of camera calibration, and in particular relates to a preprocessing method used in the calibration of light-field cameras.
Background art
In recent years, the camera family has welcomed an emerging member, the hand-held light-field camera. In principle, it overturns the way a traditional camera gathers light: a traditional camera integrates the rays of a point in the scene along a given direction, whereas a light-field camera (in the following, "light-field camera" always refers to the hand-held light-field camera) integrates the rays of a point in the scene over all directions. Functionally, it changes the shooting style of the traditional camera: a traditional camera usually focuses first and then shoots, while a light-field camera can shoot first and refocus afterwards. In terms of application prospects, it can be combined with the currently booming virtual reality and augmented reality and provide hardware support for them.
Although the light-field camera has many advantages, unlike the traditional camera it does not yet have a mature calibration method. Most existing research on light-field camera calibration targets the Lytro camera. For example, a calibration method based on the 4D light-field model, which establishes the relation between the real scene and the data recorded by the sensor, was published at CVPR 2013 (Conference on Computer Vision and Pattern Recognition); in the same year Cho proposed at ICCV a data preprocessing method analysed from the frequency-domain point of view; later Bok proposed calibration using line features. All of the above work is aimed at the light-field camera 1.0, in which the distance from the sensor to the microlens array equals the microlens focal length, whereas in the light-field camera 2.0 the distance from the sensor to the microlens array is not equal to the microlens focal length. Because the sub-image under each microlens of the light-field camera 1.0 integrates, for a certain point in the scene, the light from all directions, it describes only angular information, so the texture of the real scene can hardly be seen in its sub-images; in contrast, the sub-images of the light-field camera 2.0 moderately reduce the angular information and add positional information. This difference in principle means that some of the preprocessing means used in calibration methods proposed for the light-field camera 1.0 are not suitable for the light-field camera 2.0, and it is therefore necessary to propose a practical and effective calibration preprocessing method for the light-field camera 2.0.
The preprocessing for light-field camera calibration mainly consists of two steps. In the first step, the white image is processed to obtain the microlens centre points, and the white image is corrected for rotation and translation. The mainstream means of obtaining the microlens centre points is currently the halo method. The second step is corner detection. The imaging principle of the light-field camera 2.0 makes chessboard corner detection possible, but the sub-images contain angular information, which makes the process different from chessboard corner detection for a traditional camera.
Summary of the invention
The purpose of the present invention is to propose a calibration preprocessing method applicable to the light-field camera, solving the problem that some of the preprocessing means used in calibration methods proposed for the light-field camera 1.0 are not suitable for calibrating the light-field camera 2.0.
To achieve the above purpose, the present invention adopts the following technical scheme:
A calibration preprocessing method applicable to a light-field camera, characterised in that it comprises the following steps:
S1: preprocess the white image to obtain the sub-image under each microlens;
S2: given a chessboard image, AND the chessboard image with the white image preprocessed in step S1 to obtain the sub-image under a specified microlens;
S3: perform corner detection on each sub-image obtained in step S2.
Preferably, step S1 comprises the following steps:
S11: apply disk filtering to the white image;
S12: use the morphological maximum-seeking method to find all local maximum points in the white image, i.e. the microlens centre points;
S13: fit horizontal and vertical straight lines to the microlens centre points found in step S12, and correct the centre points that do not lie on the lines;
S14: use morphological reconstruction to further reconstruct the sub-image under each microlens, compute the centroid of the sub-image, and correct the microlens centre position with the centroid.
Further preferably, step S14 comprises the following steps:
S141: first circle out the sub-image under a microlens with a square box, then use morphological reconstruction to reconstruct the sub-image under the microlens;
S142: find the contours of the sub-image obtained in step S141 and evaluate them, the selection rule being:
C* = argmin_{C ∈ I} |C - π·Dist_centers|
where I is the set of all contours found;
use the optimal contour C* to compute the centroid position of the sub-image, and correct the microlens centre point with this centroid;
S143: compute the centre-point distances between each microlens and its surrounding microlenses, denoted {D_ij}, where i is the index of this microlens and j is the index of a surrounding microlens; compute the standard deviation Stdev_ij of {D_ij}; if this standard deviation is greater than the threshold thre1, take this microlens as a candidate abnormal microlens; remove the maximum and the minimum from {D_ij} and compute the standard deviation Stdev'_ij again; if it is still greater than the threshold thre2, mark this microlens as an abnormal microlens;
S144: compute the mean of the surrounding microlens centre points, and use this mean to further correct the microlens centre position.
Preferably, step S3 comprises the following steps:
S31: perform line detection on the sub-image under each microlens;
S32: classify the detected lines, and take the intersections of the lines as candidate corners;
S33: weed out candidate corners that are too close to each other;
S34: refine the corner points.
Further preferably, step S31 comprises the following steps:
S311: apply Canny edge detection to the sub-image and remove the microlens edge, so that the only edges left in the sub-image are chessboard edges;
S312: use the Hough line transform to detect all straight lines in the sub-image.
Further preferably, step S32 comprises the following steps:
S321: divide the lines into two classes according to their inclination angles, denoted {L_A, L_B}; perform density clustering on each class, the results being denoted {L_Ai}, {L_Bj}; compute the mean of the line parameters of each class, denoted (r1, θ1) and (r2, θ2);
S322: compute the intersections between lines from {L_Ai} and lines from {L_Bj} respectively, using the following formulas:
x = (r1·sinθ2 - r2·sinθ1) / Δ
y = (r2·cosθ1 - r1·cosθ2) / Δ
where Δ = cosθ1·sinθ2 - cosθ2·sinθ1.
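A minimal Python sketch of the intersection formula above, for two lines given in Hough normal form x·cosθ + y·sinθ = r; the function name is an illustrative assumption, not part of the patent.

```python
import numpy as np

def line_intersection(r1, theta1, r2, theta2):
    # Solve x*cos(theta) + y*sin(theta) = r for the two lines (Cramer's rule).
    delta = np.cos(theta1) * np.sin(theta2) - np.cos(theta2) * np.sin(theta1)
    if abs(delta) < 1e-9:            # (nearly) parallel lines: no intersection
        return None
    x = (r1 * np.sin(theta2) - r2 * np.sin(theta1)) / delta
    y = (r2 * np.cos(theta1) - r1 * np.cos(theta2)) / delta
    return x, y
```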
Further preferably, the criterion used in step S33 to weed out candidate corners that are too close together is: if the distance from an intersection a to an intersection b is smaller than a threshold, both intersections are marked as candidates for deletion; if the distances from intersection a to all other intersections are greater than this threshold, a is removed from the deletion candidates; otherwise intersection a is deleted, and the same operation is applied to intersection b.
Further preferably, step S34 comprises the following steps:
S341: compute the first-order gradients along a square box and sum them, the sum being denoted Σ_q grad_i and called the first-order gradient sum; then compute the perimeter peri_q of the box, where q denotes the box being evaluated and grad_i denotes the difference between the (i+1)-th and the i-th pixel along the box;
S342: the corner is defined as:
corner = argmax_q ( Σ_q grad_i / peri_q );
S343: iteratively shrink the box perimeter peri_q to improve the accuracy of the corner.
Further preferably, the first-order gradient sum is independent of the perimeter peri_q of the box and depends only on its position, and the box containing the corner has the maximum first-order gradient sum, that is:
Σ_d grad_i = Σ_a grad_i > Σ_b grad_i > Σ_c grad_i
The advantages and beneficial effects of the present invention are:
1. A method for optimizing the microlens centre positions is proposed;
2. Damaged microlenses can be marked reliably, giving the method strong robustness;
3. Corner detection for the light-field camera 2.0 by means of line detection is proposed;
4. An operator similar to the curl is proposed to refine the detected corners.
Brief description of the drawings
Fig. 1 is the basic flow chart of the present invention;
Fig. 2 shows the microlens centre points marked using the local maxima of the sub-images in an embodiment of the present invention;
Fig. 3 shows the microlens centre points after correction by straight-line fitting in an embodiment of the present invention;
Fig. 4 shows the microlens centre points after centroid optimization in an embodiment of the present invention;
Fig. 5 shows the sub-image under one microlens circled out with a square box in an embodiment of the present invention;
Fig. 6 shows the sub-image under a single microlens isolated by morphological reconstruction in an embodiment of the present invention;
Fig. 7 is a schematic diagram of the corner refinement in an embodiment of the present invention.
Detailed description of the invention
The present invention is described in further detail below in conjunction with specific embodiments and the accompanying drawings. It should be emphasised that the following description is merely exemplary and is not intended to limit the scope of the present invention or its applications.
Referring to Fig. 1, the calibration preprocessing method applicable to a light-field camera proposed by the present invention includes the following steps:
S1: preprocess the white image to obtain the sub-image under each microlens.
First, disk filtering is applied to the white image, and the morphological maximum-seeking method is then used to find all local maximum points in the white image; as shown in Fig. 2, the dense black dots mark the microlens centre points. It can also be seen from Fig. 2 that centre points in the same row or the same column do not lie exactly on a straight line, which is inconsistent with the linear arrangement of the microlens array. These centre points therefore need to be fitted with horizontal and vertical straight lines, row by row and column by column respectively; the fitting result is shown in Fig. 3. At the same time, the approximate distance between two neighbouring microlens centre points can be computed and is denoted Dist_centers.
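A minimal Python sketch of this stage (disk filtering, morphological local maxima and the neighbour spacing Dist_centers), assuming the white image is available as a grayscale numpy array; the function name, kernel sizes, thresholds and the use of OpenCV/SciPy are illustrative choices rather than the patent's prescribed implementation. The row-wise and column-wise straight-line fitting of step S13 can then be applied to the returned centres, e.g. with np.polyfit.

```python
import cv2
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def detect_microlens_centres(white, disk_radius=3, window=9):
    # Disk (circular averaging) filter to suppress sensor noise in the white image.
    r = disk_radius
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    disk = (xx ** 2 + yy ** 2 <= r ** 2).astype(np.float32)
    disk /= disk.sum()
    smoothed = cv2.filter2D(white.astype(np.float32), -1, disk)

    # Morphological local maxima: a pixel is a candidate centre if it equals the
    # maximum of its neighbourhood; dark spurious maxima are rejected.
    local_max = smoothed == ndimage.maximum_filter(smoothed, size=window)
    local_max &= smoothed > 0.5 * smoothed.max()
    ys, xs = np.nonzero(local_max)
    centres = np.stack([xs, ys], axis=1).astype(np.float32)

    # Approximate spacing of neighbouring centres (Dist_centers): median
    # nearest-neighbour distance over all detected centres.
    tree = cKDTree(centres)
    d, _ = tree.query(centres, k=2)          # d[:, 1] is the nearest other centre
    dist_centers = float(np.median(d[:, 1]))
    return centres, dist_centers
```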
The halo method assumes that the incident rays strike the thin lens perpendicularly, in which case the bright spot coincides with the centre point of the thin lens. However, except for the microlenses close to the optical axis of the main lens, the incident rays of the remaining microlenses do not satisfy this assumption, so the bright spot deviates from the microlens centre point by two or three pixels. In that case it is no longer appropriate to estimate the microlens centre point from the bright spot, and the centre position must be optimized by other means. The steps are as follows:
1) A square box is used to circle out the sub-image under one microlens, as shown in Fig. 5. Since the box also contains some information from other sub-images, morphological reconstruction is used here to reconstruct the sub-image under the microlens, as shown in Fig. 6.
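A sketch of this isolation step, assuming `white`, `centre` and `dist_centers` follow the previous sketch; scikit-image's greyscale reconstruction is used here as one possible realization of the morphological reconstruction, and the box size, threshold and border handling are illustrative.

```python
import numpy as np
from skimage.morphology import reconstruction

def isolate_subimage(white, centre, dist_centers):
    cx, cy = int(round(centre[0])), int(round(centre[1]))
    half = int(round(0.75 * dist_centers))      # square box slightly larger than one lenslet
    crop = white[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(np.float32)

    # Bright pixels inside the box; this mask may still contain fragments of
    # neighbouring sub-images.
    mask = crop > 0.3 * crop.max()

    # Seed: only the box centre is lit. Reconstruction by dilation grows the seed
    # inside the mask, keeping just the blob connected to the centre, i.e. the
    # sub-image under this microlens.
    seed = np.zeros_like(mask)
    seed[half, half] = mask[half, half]
    sub = reconstruction(seed.astype(np.uint8), mask.astype(np.uint8),
                         method='dilation').astype(bool)
    return crop, sub, (cx - half, cy - half)    # also return the box origin
```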
2) The contours of the sub-image are found and evaluated; the selection rule is given by the following formula:
C* = argmin_{C ∈ I} |C - π·Dist_centers|      (1)
where I is the set of all contours found. The optimal contour C* is used to compute the centroid position of the sub-image, and this centroid is used to correct the microlens centre point.
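A sketch of the contour selection and centroid correction, assuming `sub` and the box origin come from the previous sketch, and interpreting |C| in formula (1) as the contour perimeter (an assumption; the perimeter of one lenslet disc is roughly π·Dist_centers).

```python
import cv2
import numpy as np

def refine_centre_by_centroid(sub, dist_centers, box_origin):
    contours, _ = cv2.findContours(sub.astype(np.uint8) * 255,
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    # C* = argmin_C |perimeter(C) - pi * Dist_centers|
    target = np.pi * dist_centers
    best = min(contours, key=lambda c: abs(cv2.arcLength(c, True) - target))

    # Centroid of the selected contour corrects the microlens centre estimate.
    m = cv2.moments(best)
    if m['m00'] == 0:
        return None
    cx = m['m10'] / m['m00'] + box_origin[0]    # back to full-image coordinates
    cy = m['m01'] / m['m00'] + box_origin[1]
    return np.array([cx, cy], dtype=np.float32)
```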
3) The centre-point distances between each microlens and its surrounding microlenses are computed and denoted {D_ij}, where i is the index of this microlens and j is the index of a surrounding microlens. The standard deviation Stdev_ij of {D_ij} is then computed; if it is greater than the threshold thre1, this microlens is taken as a candidate abnormal microlens. The maximum and the minimum are removed from {D_ij} and the standard deviation Stdev'_ij is computed again; if it is still greater than the threshold thre2, this microlens is marked as an abnormal microlens.
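A sketch of this abnormal-lens test, assuming `centres` is the (N, 2) array of centre points from the earlier sketch and that the "surrounding" microlenses of a lens are its k nearest neighbours; k and the thresholds thre1, thre2 are illustrative values, not the patent's.

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_abnormal_microlenses(centres, k=6, thre1=1.0, thre2=0.8):
    tree = cKDTree(centres)
    dists, _ = tree.query(centres, k=k + 1)     # column 0 is the point itself
    abnormal = np.zeros(len(centres), dtype=bool)
    for i in range(len(centres)):
        d = dists[i, 1:]                        # {D_ij}: distances to surrounding lenses
        if np.std(d) > thre1:                   # candidate abnormal microlens
            trimmed = np.sort(d)[1:-1]          # drop the minimum and the maximum
            if np.std(trimmed) > thre2:
                abnormal[i] = True              # confirmed abnormal (e.g. damaged) lens
    return abnormal
```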
4) average of lenticule central point around is calculatedThis average is utilized to correct further lenticular Center position.
Eventually pass the central point after optimization as shown in Figure 4.
S2: given a chessboard image, AND the chessboard image with the white image preprocessed in step S1 to obtain the sub-image under a specified microlens.
S3: perform corner detection on each sub-image obtained in step S2.
Through the white-image processing step, the sub-image under each microlens can be obtained. Given a chessboard image, the chessboard image can be ANDed with the white image to obtain the sub-image under a specified microlens. Each sub-image is then processed through the following steps:
1) Canny edge detection is applied to the sub-image and the microlens edge is removed, so that the only edges left in the sub-image are chessboard edges.
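A sketch of this edge step, assuming `sub_img` is the 8-bit chessboard sub-image under one microlens (obtained in step S2 by ANDing the raw chessboard image with that lens's white-image mask) and `sub_mask` is the lens mask; the Canny thresholds and the erosion used to suppress the lens rim are illustrative.

```python
import cv2
import numpy as np

def checkerboard_edges(sub_img, sub_mask):
    edges = cv2.Canny(sub_img, 50, 150)
    # The microlens rim also produces edges; eroding the lens mask and gating the
    # edge map with it keeps only the chessboard edges inside the lens.
    inner = cv2.erode(sub_mask.astype(np.uint8) * 255,
                      np.ones((5, 5), np.uint8), iterations=1)
    return cv2.bitwise_and(edges, edges, mask=inner)
```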
2) The Hough line transform is used to detect all straight lines in the sub-image. The lines are then divided into two classes according to their inclination angles, denoted {L_A, L_B}; density clustering is performed on each class, the results being denoted {L_Ai}, {L_Bj}; and the mean of the line parameters of each class is computed, denoted (r1, θ1) and (r2, θ2).
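A sketch of the line detection and grouping, assuming `edges` comes from the previous sketch; the median-angle split into the two classes, the use of DBSCAN as the density clustering, and all parameter values are illustrative choices rather than the patent's exact procedure.

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def detect_line_groups(edges):
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=15)
    if lines is None:
        return [], []
    lines = lines[:, 0, :]                      # (N, 2) array of (rho, theta)

    # Split into the two classes {L_A, L_B} by inclination angle, using the
    # median angle as a simple boundary between the two chessboard directions.
    split = np.median(lines[:, 1])
    classes = (lines[lines[:, 1] < split], lines[lines[:, 1] >= split])

    groups_a, groups_b = [], []
    for cls, out in zip(classes, (groups_a, groups_b)):
        if len(cls) == 0:
            continue
        # Density clustering on rho merges the multiple Hough responses that
        # belong to the same physical chessboard line.
        labels = DBSCAN(eps=3.0, min_samples=1).fit_predict(cls[:, :1])
        for lab in np.unique(labels):
            out.append(cls[labels == lab].mean(axis=0))   # mean (rho, theta)
    return groups_a, groups_b                   # averaged line parameters per class
```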
3) The intersections between lines from {L_Ai} and lines from {L_Bj} are computed as follows:
x = (r1·sinθ2 - r2·sinθ1) / Δ
y = (r2·cosθ1 - r1·cosθ2) / Δ      (2)
where Δ = cosθ1·sinθ2 - cosθ2·sinθ1. These intersections are then screened. The criterion is: if the distance from an intersection a to an intersection b is smaller than a threshold, both intersections are marked as candidates for deletion; if the distances from intersection a to all other intersections are greater than this threshold, a is removed from the deletion candidates; otherwise intersection a is deleted, and the same operation is applied to intersection b.
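A sketch of this screening rule: a candidate corner survives only if every other candidate is farther away than the threshold (so both members of a close pair are discarded); `points` are the intersections computed with formula (2) and the pixel threshold is an assumed value.

```python
import numpy as np

def prune_close_corners(points, thresh=3.0):
    pts = np.asarray(points, dtype=np.float32)
    if len(pts) < 2:
        return pts
    # Pairwise distances between all candidate corners.
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))
    np.fill_diagonal(dist, np.inf)
    # Keep a point only if its nearest other candidate is beyond the threshold.
    keep = dist.min(axis=1) > thresh
    return pts[keep]
```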
4) Corner refinement. As shown in Fig. 7, the first-order gradients are computed along a square box and summed, the sum being denoted Σ_q grad_i and called the first-order gradient sum; the perimeter peri_q of the box is then computed, where q denotes the box being evaluated and grad_i denotes the difference between the (i+1)-th and the i-th pixel along the box. It can be seen that:
Σ_d grad_i = Σ_a grad_i > Σ_b grad_i > Σ_c grad_i
The above relation shows that, under certain conditions, the first-order gradient sum is independent of the box perimeter peri_q and depends only on the box position, and the box containing the corner has the maximum first-order gradient sum. The corner is therefore defined as:
corner = argmax_q ( Σ_q grad_i / peri_q )      (3)
It follows that the accuracy of the corner can be improved simply by iteratively shrinking the box perimeter peri_q.
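A sketch of the refinement in formula (3), assuming `img` is the greyscale sub-image and `corner0` an initial corner estimate from the line intersections; the concrete box sizes, the small position search used to realize the argmax over boxes q, and the integer-pixel result are assumptions made for illustration only.

```python
import numpy as np

def boundary_gradient_score(img, cx, cy, half):
    # Walk the square box boundary and sum |I(p_{i+1}) - I(p_i)|, i.e. the
    # first-order gradient sum along the box, normalised by the box perimeter.
    xs = list(range(cx - half, cx + half))
    ys = list(range(cy - half, cy + half))
    path = ([(x, cy - half) for x in xs] + [(cx + half, y) for y in ys] +
            [(x, cy + half) for x in reversed(xs)] +
            [(cx - half, y) for y in reversed(ys)])
    vals = np.array([float(img[y, x]) for x, y in path])
    grad = np.abs(np.diff(np.append(vals, vals[0])))
    return grad.sum() / len(path)               # sum(grad_i) / peri_q

def refine_corner(img, corner0, half_sizes=(7, 5, 3), search=2):
    cx, cy = int(round(corner0[0])), int(round(corner0[1]))
    for half in half_sizes:                     # iteratively shrink the box
        best, best_score = (cx, cy), -1.0
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                x, y = cx + dx, cy + dy
                if half <= x < img.shape[1] - half and half <= y < img.shape[0] - half:
                    s = boundary_gradient_score(img, x, y, half)
                    if s > best_score:
                        best_score, best = s, (x, y)
        cx, cy = best
    return np.array([cx, cy], dtype=np.float32)
```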
The above content is a further detailed description of the present invention in conjunction with specific/preferred embodiments, and it cannot be concluded that the specific implementation of the present invention is limited to these descriptions. For a person of ordinary skill in the technical field of the present invention, several substitutions or modifications may be made to the described embodiments without departing from the inventive concept, and such substitutions or modifications shall all be deemed to fall within the protection scope of the present invention.

Claims (9)

1. A calibration preprocessing method applicable to a light-field camera, characterised in that it comprises the following steps:
S1: preprocess the white image to obtain the sub-image under each microlens;
S2: given a chessboard image, AND the chessboard image with the white image preprocessed in step S1 to obtain the sub-image under a specified microlens;
S3: perform corner detection on each sub-image obtained in step S2.
2. The calibration preprocessing method applicable to a light-field camera according to claim 1, characterised in that step S1 comprises the following steps:
S11: apply disk filtering to the white image;
S12: use the morphological maximum-seeking method to find all local maximum points in the white image, i.e. the microlens centre points;
S13: fit horizontal and vertical straight lines to the microlens centre points found in step S12, and correct the centre points that do not lie on the lines;
S14: use morphological reconstruction to further reconstruct the sub-image under each microlens, compute the centroid of the sub-image, and correct the microlens centre position with the centroid.
3. The calibration preprocessing method applicable to a light-field camera according to claim 2, characterised in that step S14 comprises the following steps:
S141: first circle out the sub-image under a microlens with a square box, then use morphological reconstruction to reconstruct the sub-image under the microlens;
S142: find the contours of the sub-image obtained in step S141 and evaluate them, the selection rule being:
C* = argmin_{C ∈ I} |C - π·Dist_centers|
where I is the set of all contours found;
use the optimal contour C* to compute the centroid position of the sub-image, and correct the microlens centre point with this centroid;
S143: compute the centre-point distances between each microlens and its surrounding microlenses, denoted {D_ij}, where i is the index of this microlens and j is the index of a surrounding microlens; compute the standard deviation Stdev_ij of {D_ij}; if this standard deviation is greater than the threshold thre1, take this microlens as a candidate abnormal microlens; remove the maximum and the minimum from {D_ij} and compute the standard deviation Stdev'_ij again; if it is still greater than the threshold thre2, mark this microlens as an abnormal microlens;
S144: compute the mean of the surrounding microlens centre points, and use this mean to further correct the microlens centre position.
4. The calibration preprocessing method applicable to a light-field camera according to claim 1, characterised in that step S3 comprises the following steps:
S31: perform line detection on the sub-image under each microlens;
S32: classify the detected lines, and take the intersections of the lines as candidate corners;
S33: weed out candidate corners that are too close to each other;
S34: refine the corner points.
5. The calibration preprocessing method applicable to a light-field camera according to claim 4, characterised in that step S31 comprises the following steps:
S311: apply Canny edge detection to the sub-image and remove the microlens edge, so that the only edges left in the sub-image are chessboard edges;
S312: use the Hough line transform to detect all straight lines in the sub-image.
6. The calibration preprocessing method applicable to a light-field camera according to claim 4, characterised in that step S32 comprises the following steps:
S321: divide the lines into two classes according to their inclination angles, denoted {L_A, L_B}; perform density clustering on each class, the results being denoted {L_Ai}, {L_Bj}; compute the mean of the line parameters of each class, denoted (r1, θ1) and (r2, θ2);
S322: compute the intersections between lines from {L_Ai} and lines from {L_Bj} respectively, using the following formulas:
x = (r1·sinθ2 - r2·sinθ1) / Δ
y = (r2·cosθ1 - r1·cosθ2) / Δ
where Δ = cosθ1·sinθ2 - cosθ2·sinθ1.
7. The calibration preprocessing method applicable to a light-field camera according to claim 4, characterised in that the criterion used in step S33 to weed out candidate corners that are too close together is: if the distance from an intersection a to an intersection b is smaller than a threshold, both intersections are marked as candidates for deletion; if the distances from intersection a to all other intersections are greater than this threshold, a is removed from the deletion candidates; otherwise intersection a is deleted, and the same operation is applied to intersection b.
8. The calibration preprocessing method applicable to a light-field camera according to claim 4, characterised in that step S34 comprises the following steps:
S341: compute the first-order gradients along a square box and sum them, the sum being denoted Σ_q grad_i and called the first-order gradient sum; then compute the perimeter peri_q of the box, where q denotes the box being evaluated and grad_i denotes the difference between the (i+1)-th and the i-th pixel along the box;
S342: the corner is defined as:
corner = argmax_q ( Σ_q grad_i / peri_q );
S343: iteratively shrink the box perimeter peri_q to improve the accuracy of the corner.
9. The calibration preprocessing method applicable to a light-field camera according to claim 8, characterised in that the first-order gradient sum is independent of the perimeter peri_q of the box and depends only on its position, and the box containing the corner has the maximum first-order gradient sum, that is:
Σ_d grad_i = Σ_a grad_i > Σ_b grad_i > Σ_c grad_i
CN201610613216.4A 2016-07-29 2016-07-29 A kind of calibration preprocess method suitable for light-field camera Active CN106296661B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610613216.4A CN106296661B (en) 2016-07-29 2016-07-29 A kind of calibration preprocess method suitable for light-field camera
PCT/CN2017/083303 WO2018018987A1 (en) 2016-07-29 2017-05-05 Calibration pre-processing method for light field camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610613216.4A CN106296661B (en) 2016-07-29 2016-07-29 A kind of calibration preprocess method suitable for light-field camera

Publications (2)

Publication Number Publication Date
CN106296661A (en) 2017-01-04
CN106296661B (en) 2019-06-28

Family

ID=57663160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610613216.4A Active CN106296661B (en) 2016-07-29 2016-07-29 A kind of calibration preprocess method suitable for light-field camera

Country Status (2)

Country Link
CN (1) CN106296661B (en)
WO (1) WO2018018987A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610182A (en) * 2017-09-22 2018-01-19 哈尔滨工业大学 A kind of scaling method at light-field camera microlens array center
WO2018018987A1 (en) * 2016-07-29 2018-02-01 深圳市未来媒体技术研究院 Calibration pre-processing method for light field camera
CN108305233A (en) * 2018-03-06 2018-07-20 哈尔滨工业大学 A kind of light field image bearing calibration for microlens array error
CN108426585A (en) * 2018-03-12 2018-08-21 哈尔滨工业大学 A kind of geometric calibration method of light-field camera
CN109801300A (en) * 2017-11-16 2019-05-24 北京百度网讯科技有限公司 Coordinate extraction method, device, equipment and the computer readable storage medium of X-comers
CN109859226A (en) * 2019-01-10 2019-06-07 上海理工大学 A kind of detection method of the X-comers sub-pix of figure segmentation
CN111179353A (en) * 2019-12-17 2020-05-19 清华大学深圳国际研究生院 Micro-lens array calibration method and system of light field camera
CN111340888A (en) * 2019-12-23 2020-06-26 首都师范大学 Light field camera calibration method and system without white image

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445491B (en) * 2020-03-24 2023-09-15 山东智翼航空科技有限公司 Three-neighborhood maximum difference edge detection narrow channel guiding method for miniature unmanned aerial vehicle
CN111553927B (en) * 2020-04-24 2023-05-16 厦门云感科技有限公司 Checkerboard corner detection method, detection system, computer device and storage medium
CN111710005B (en) * 2020-06-18 2023-03-31 齐鲁工业大学 Grid type thermal infrared camera calibration plate and calibration method
CN112489065B (en) * 2020-11-27 2023-07-07 广东奥普特科技股份有限公司 Chessboard standard point sub-pixel extraction method
CN112614146B (en) * 2020-12-21 2022-05-13 广东奥普特科技股份有限公司 Method and device for judging chessboard calibration corner points and computer readable storage medium
CN113393538B (en) * 2021-07-14 2024-05-28 南京苏润科技发展有限公司 Gate crack detection method based on double-checkerboard calibration
CN113643280B (en) * 2021-08-30 2023-09-22 燕山大学 Computer vision-based plate sorting system and method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110026910A1 (en) * 2009-07-28 2011-02-03 National Taiwan University Photometric calibration method and device
US20160029017A1 (en) * 2012-02-28 2016-01-28 Lytro, Inc. Calibration of light-field camera geometry via robust fitting
CN105374044A (en) * 2015-12-04 2016-03-02 中国科学院光电技术研究所 Automatic calibration method of light field camera
CN105488810A (en) * 2016-01-20 2016-04-13 东南大学 Focused light field camera internal and external parameter calibration method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296661B (en) * 2016-07-29 2019-06-28 深圳市未来媒体技术研究院 A kind of calibration preprocess method suitable for light-field camera

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110026910A1 (en) * 2009-07-28 2011-02-03 National Taiwan University Photometric calibration method and device
US20160029017A1 (en) * 2012-02-28 2016-01-28 Lytro, Inc. Calibration of light-field camera geometry via robust fitting
CN105374044A (en) * 2015-12-04 2016-03-02 中国科学院光电技术研究所 Automatic calibration method of light field camera
CN105488810A (en) * 2016-01-20 2016-04-13 东南大学 Focused light field camera internal and external parameter calibration method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DONALD G. DANSEREAU et al.: "Decoding, Calibration and Rectification for Lenselet-based Plenoptic Cameras", Computer Vision and Pattern Recognition *
TUANCHAO DU et al.: "Compressive photography based on lens array with coded mask", Optoelectronic Imaging and Multimedia Technology *
池上好风: "Camera index calibration using a chessboard pattern" (使用棋盘格实现摄像头序号标定), CSDN: HTTPS://BLOG.CSDN.NET/LINCZONE/ARTICLE/DETAILS/48314871#COMMENTBOX *
蔡振江 et al.: "An image corner detection method using the Hough transform and gray-level variation" (采用Hough变换和灰度变化的图像角点检测法), Transactions of Beijing Institute of Technology *
高珍: "Research on data processing and calibration methods for the light-field camera" (光场相机的数据处理和标定方法研究), China Master's Theses Full-text Database, Information Science and Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018018987A1 (en) * 2016-07-29 2018-02-01 深圳市未来媒体技术研究院 Calibration pre-processing method for light field camera
CN107610182A (en) * 2017-09-22 2018-01-19 哈尔滨工业大学 A kind of scaling method at light-field camera microlens array center
CN107610182B (en) * 2017-09-22 2018-09-11 哈尔滨工业大学 A kind of scaling method at light-field camera microlens array center
CN109801300A (en) * 2017-11-16 2019-05-24 北京百度网讯科技有限公司 Coordinate extraction method, device, equipment and the computer readable storage medium of X-comers
CN108305233A (en) * 2018-03-06 2018-07-20 哈尔滨工业大学 A kind of light field image bearing calibration for microlens array error
CN108426585A (en) * 2018-03-12 2018-08-21 哈尔滨工业大学 A kind of geometric calibration method of light-field camera
CN108426585B (en) * 2018-03-12 2019-09-13 哈尔滨工业大学 A kind of geometric calibration method of light-field camera
CN109859226A (en) * 2019-01-10 2019-06-07 上海理工大学 A kind of detection method of the X-comers sub-pix of figure segmentation
CN109859226B (en) * 2019-01-10 2022-06-17 上海理工大学 Detection method of checkerboard corner sub-pixels for graph segmentation
CN111179353A (en) * 2019-12-17 2020-05-19 清华大学深圳国际研究生院 Micro-lens array calibration method and system of light field camera
CN111340888A (en) * 2019-12-23 2020-06-26 首都师范大学 Light field camera calibration method and system without white image

Also Published As

Publication number Publication date
CN106296661B (en) 2019-06-28
WO2018018987A1 (en) 2018-02-01

Similar Documents

Publication Publication Date Title
CN106296661A (en) A kind of demarcation preprocess method being applicable to light-field camera
JP6448223B2 (en) Image recognition system, image recognition apparatus, image recognition method, and computer program
Boltes et al. Automatic extraction of pedestrian trajectories from video recordings
CN109269405B (en) A kind of quick 3D measurement and comparison method
CN110142785A (en) A kind of crusing robot visual servo method based on target detection
WO2020125499A1 (en) Operation prompting method and glasses
US8755607B2 (en) Method of normalizing a digital image of an iris of an eye
CN101276499B (en) Intelligent monitoring apparatus of ATM equipment based on all-directional computer vision
BR112019006165B1 (en) PROCESS AND DEVICE FOR DETERMINING A REPRESENTATION OF A GLASSES LENS EDGE
CA3152812A1 (en) Facial recognition method and apparatus
JP6453488B2 (en) Statistical method and apparatus for passersby based on identification of human head top
CN106258010A (en) 2D image dissector
CN106570491A (en) Robot intelligent interaction method and intelligent robot
JP2009055139A (en) Person tracking system, apparatus, and program
CN105787876B (en) One kind being based on the matched panoramic video method for automatically split-jointing of SURF signature tracking
JP2009157767A (en) Face image recognition apparatus, face image recognition method, face image recognition program, and recording medium recording this program
CN105956536A (en) Pretreatment method and device for iris recognition
CN113129339B (en) Target tracking method and device, electronic equipment and storage medium
CN112819895A (en) Camera calibration method and device
CN107862240A (en) A kind of face tracking methods of multi-cam collaboration
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN106537217A (en) Wide field-of-view depth imaging
CN104596442B (en) A kind of device and method of assist three-dimensional scanning
Flores et al. Camera distance from face images
TW202242803A (en) Positioning method and apparatus, electronic device and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant