CN109059800B - Light source position calibration method of three-dimensional reconstruction device - Google Patents

Light source position calibration method of three-dimensional reconstruction device

Info

Publication number
CN109059800B
Authority
CN
China
Prior art keywords
plane
light source
camera
image
grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810507975.1A
Other languages
Chinese (zh)
Other versions
CN109059800A (en)
Inventor
宗诗皓
翟理想
赵雅丛
樊兆雯
仲雪飞
张�雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN201810507975.1A
Publication of CN109059800A
Application granted
Publication of CN109059800B
Status: Expired - Fee Related
Anticipated expiration

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B21/00 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant
    • G01B21/02 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring length, width, or thickness
    • G01B21/04 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring length, width, or thickness by measuring coordinates of points
    • G01B21/042 - Calibration or calibration artifacts
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a light source position calibration method for a three-dimensional reconstruction device. The device comprises a light source group, a camera, a controller, a plane template and an image processor, and the light source position calibration of the device comprises three links: a shooting link, a preparation link and an operation link.

Description

Light source position calibration method of three-dimensional reconstruction device
Technical Field
The invention relates to the technical field of three-dimensional reconstruction, in particular to a light source position calibration method of a three-dimensional reconstruction device.
Background
Nowadays, three-dimensional reconstruction devices are widely used in industrial practice and daily life. A common three-dimensional reconstruction method estimates the surface normal vectors of an object from its gray-level differences under different illumination conditions. This approach recovers texture details well, but it places high demands on the calibration of the light source parameters, and installation errors of the light source and the camera degrade the reconstruction accuracy. A light source calibration method that reduces the influence of these installation errors on the reconstruction accuracy therefore needs to be designed.
Disclosure of Invention
The technical problem is as follows: in order to solve the above problems, the present invention provides a method for calibrating the light source position of a three-dimensional reconstruction device. The device comprises a light source group, a camera, a controller, a plane template and an image processor, and the method can reduce the influence of the installation errors of the light source and the camera on the reconstruction accuracy.
the technical scheme is as follows: the invention relates to a light source position calibration method of a three-dimensional reconstruction device, wherein the three-dimensional reconstruction device comprises a light source group, a camera, a controller, a plane template and an image processor, wherein the camera is positioned in the middle, the light source group comprises a first light source, a second light source, a third light source and a fourth light source which are respectively positioned on 4 sides of the camera, and the plane template is positioned above the camera and the light source group;
the light source position calibration method of the three-dimensional reconstruction device comprises a shooting link, a preparation link and an operation link; in the preparation link, the shot plane image is grayed, and the offset coordinate when the error coefficient obtained in the operation link takes the minimum value is recorded as the optimum coordinate; if the detection step length is larger than 2 pixels after the operation link is finished, deleting the lattice points in the detection grid, the lattice point distance of which is superposed with the optimal coordinate and is larger than the detection step length; halving the detection step length and adding new lattice points in the detection grid; after the grid points are added to the detection grid, the distance between adjacent grid points is a detection step length, and the area of the detection grid is not increased; then repeating the operation link; and after the operation link is finished, if the detection step length is less than 2 pixels, recording the optimal coordinate as an optimal offset coordinate.
Wherein:
the shooting link is as follows: the controller controls all light sources in the light source group to be sequentially lightened, and immediately controls the camera to shoot the plane template placed in front of the light source group and the camera after each light source is lightened, and a group of pictures obtained are called as a first plane image.
The preparation link establishes a three-dimensional rectangular coordinate system by taking the plane of the first plane image as the xy plane and the upper left corner of the first plane image as the origin; a detection step length is set in the image processor and a detection grid parallel to the xy plane is established along the x and y axis directions, in which the distance between adjacent grid points equals the detection step length; the z axis of the three-dimensional rectangular coordinate system passes through one of the grid points, the camera position is coplanar with the detection grid, and the projection of the detection grid onto the z = 0 plane lies entirely within the first plane image.
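As an illustration of this preparation link, the sketch below (Python) builds a detection grid of candidate (x, y) offsets in pixel coordinates. The image size, step length and margin used here are hypothetical values chosen for the example, not parameters fixed by the patent.

```python
import numpy as np

def build_detection_grid(image_shape, step, margin):
    """Candidate (x, y) grid points spaced `step` pixels apart, kept `margin`
    pixels inside the first plane image so the grid projects into it."""
    m, n = image_shape                      # rows along x, columns along y
    xs = np.arange(margin, m - margin + 1, step)
    ys = np.arange(margin, n - margin + 1, step)
    return [(float(x), float(y)) for x in xs for y in ys]

# Hypothetical numbers: 480 x 640 gray-scale maps, 32-pixel step, 100-pixel margin.
grid_points = build_detection_grid((480, 640), step=32, margin=100)
```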
In the operation link, the image processor sets the (x, y) coordinates of the offset coordinate to the (x, y) coordinates of each grid point in the detection grid in turn, and sets the z coordinate of the offset coordinate to the distance between the camera and the plane template; the plane template is reconstructed in three dimensions from the first plane image by the three-dimensional reconstruction algorithm built into the image processor, yielding a first plane model; the first plane model is projected vertically onto the xy plane, and the projection area is divided uniformly into an m × n error grid along the x and y axes; the set of points in the first plane model whose (x, y) coordinates coincide with the error grid points is recorded as a first point cloud.
in the operation link, the image processing unit fits the first point cloud into a plane in the three-dimensional rectangular coordinate system, and the plane is called as a second plane; calculating the square sum of the distances between all the points in the first point cloud and the second plane, and the square sum is called as an error coefficient; and comparing error coefficients obtained when the offset coordinates are positioned at each grid point in the detection grid, recording the minimum value of the error coefficients, and recording the offset coordinates when the error coefficients take the minimum value as the optimal coordinates.
The light source group comprises 4 light sources arranged at the midpoints of the 4 sides of a square with side length s; the camera (2) is arranged at the center of the square with its optical axis perpendicular to the square; during shooting, only one of the light sources is lit at a time; the brightness of the individual light sources is the same, and the distances between them can be measured accurately.
The image processor calculates the first plane model from the gray-scale information in the first plane image, the distances between the light sources in the light source group, the coordinates of the camera, and the distance between the camera and the plane template.
The first plane image comprises 4 photos; graying these 4 photos yields 4 gray-scale maps, and pixels at the same position in the 4 gray-scale maps correspond to the same point in the plane model. Each gray-scale map has a size of m × n pixels; a rectangular coordinate system is established with the pixel at the upper left corner of the gray-scale map as the origin, the horizontal rightward direction as the y axis, the vertical downward direction as the x axis, and the direction facing the camera as the positive z axis. A length of 1 mm on the plane template corresponds to r pixels in the gray-scale map, the distance between the camera and the plane template is d mm, and the camera coordinates in this rectangular coordinate system are (m/2, n/2, d·r).
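Under the geometry just described, the light source coordinates can be expressed in this pixel coordinate system as in the sketch below. It assumes that the candidate offset coordinate is interpreted as the pixel-space center of the light source group, that the sources lie in the same z plane as the camera (z = d·r in pixel units), and that the four sources are assigned to the four sides as shown; these are illustrative assumptions, not details fixed by the patent.

```python
def light_source_coordinates(offset, s_mm, r, d):
    """offset: (x0, y0) candidate center of the light source group, in pixels.
    s_mm: side length of the square carrying the light sources, in mm.
    r: pixels per mm; d: camera-to-template distance in mm.
    Returns the 4 light source positions (x, y, z) in pixel units."""
    x0, y0 = offset
    half = s_mm * r / 2.0                   # half of the side length, in pixels
    z = d * r                               # assumed same z plane as the camera
    return [
        (x0 - half, y0, z),                 # first light source
        (x0, y0 + half, z),                 # second light source
        (x0 + half, y0, z),                 # third light source
        (x0, y0 - half, z),                 # fourth light source
    ]
```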
in the gray-scale map as described above,
Figure RE-DEST_PATH_IMAGE002
a gray matrix representing the k-th gray map,
Figure RE-DEST_PATH_IMAGE004
the coordinates of the pixel of the ith row and the jth column in the rectangular coordinate system are represented,
Figure RE-DEST_PATH_IMAGE006
represents a direction vector from the kth light source to the pixel of the jth row and the jth column in the gray scale image,
Figure RE-DEST_PATH_IMAGE008
a surface normal vector of a point, corresponding to the pixel of the ith row and the jth column in the gray scale image, on the plane template in the rectangular coordinate system is represented; according to the lambertian reflection law, when the reflected light of the planar template is diffuse reflection, the following equation exists for any point of the surface of the planar template:
Figure DEST_PATH_FDA0002374579380000023
solving the equation to obtain
Figure RE-DEST_PATH_IMAGE008A
And then solving the gradient of any point on the surface of the plane template; and integrating the gradient to calculate the z coordinate of any point on the surface of the planar template, namely restoring the first planar model.
The center of the light source group coincides with the optimal offset coordinate.
Beneficial effects: the invention provides a light source position calibration method for a three-dimensional reconstruction device. Through the built-in three-dimensional reconstruction algorithm, the light source position parameters in the algorithm are adjusted; the plane template is reconstructed in three dimensions from the group of photos taken of the plane template in the shooting link, the reconstruction result is fitted to a plane, the deviation between the reconstruction result and that plane is calculated, and the optimal light source position parameters are determined. The method effectively corrects the influence of errors made during installation of the device on the reconstruction result and improves the quality of the three-dimensional reconstruction.
Drawings
Fig. 1 is a schematic mechanical structure diagram of a three-dimensional reconstruction apparatus for light source position calibration according to an embodiment of the present application;
the reference numbers illustrate:
the light source system comprises a plane template 1, a camera 2, a first light source 3, a second light source 4, a third light source 5 and a fourth light source 6.
Detailed Description
The invention is described in further detail below with reference to the following detailed description and accompanying drawings:
the invention provides a light source position calibration method of a three-dimensional reconstruction device, which comprises a light source group, a camera, a controller, a plane template and an image processor, wherein the light source position calibration of the three-dimensional reconstruction device comprises three links. The following describes embodiments of the present application in further detail with reference to the accompanying drawings.
Referring to fig. 1, the mechanical structure of the present application includes a planar template 1, a camera 2, a first light source 3, a second light source 4, a third light source 5, and a fourth light source 6.
The invention relates to a light source position calibration method of a three-dimensional reconstruction device which comprises a light source group, a camera, a controller, a plane template and an image processor, and is characterized in that the working process of the device comprises a shooting link, a preparation link and an operation link;
the shooting link is as follows:
in the shooting link, the controller controls all light sources in the light source group to be sequentially lightened, and immediately controls the camera to shoot the plane template placed in front of the light source group and the camera after each light source is lightened, and a group of pictures obtained are called as first plane images;
the preparation process comprises the following steps:
in the preparation link, a three-dimensional rectangular coordinate system is established by taking the plane where the first plane image is located as an x-y plane and taking the upper left corner of the first plane image as an origin; setting a suitable detection step size in said image processor, establishing a detection grid parallel to the (x, y) plane along the x, y axis direction, wherein the distance between adjacent grid points is said detection step size, the z axis of said three-dimensional rectangular coordinate system passes through one of said grid points, said camera position is coplanar with said detection grid, and the projection of said detection grid to the z =0 plane is entirely within said first plane image;
the operation link is as follows:
in the operation link, the image processor sequentially sets the (x, y) coordinates of the offset coordinates as the (x, y) coordinates of each grid point in the detection grid, and sets the z coordinates of the offset coordinates as the distance between the camera and the plane template; carrying out three-dimensional reconstruction on the plane template according to a three-dimensional reconstruction algorithm built in the image processor and the first plane image to obtain a first plane model; vertically projecting the first plane model to an x-y plane, and uniformly dividing a projection area into error grids with the size of m x n along x and y axes; the collection of points in the first plane model, which have the same (x, y) coordinates as the error grid points, is recorded as a first point cloud;
and in the operation link, the image processing unit fits the first point cloud into a plane in the three-dimensional rectangular coordinate system, and the plane is called as a second plane. Calculating the square sum of the distances between all the points in the first point cloud and the second plane, and the square sum is called as an error coefficient; comparing error coefficients obtained when the offset coordinates are located at each grid point in the detection grid, recording the minimum value of the error coefficients, and recording the offset coordinates when the error coefficients take the minimum value as the optimal coordinates;
If the detection step length is larger than 2 pixels after the operation link is finished, the grid points in the detection grid whose distance from the grid point coinciding with the optimal coordinate is larger than the detection step length are deleted; the detection step length is halved and new grid points are added to the detection grid; after the grid points are added, the distance between adjacent grid points of the detection grid equals the detection step length and the area of the detection grid is not increased; the operation link is then repeated;
After the operation link is finished, if the detection step length is less than 2 pixels, the optimal coordinate is recorded as the optimal offset coordinate.
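Putting the operation link and the refinement rule together, the loop below sketches the overall coarse-to-fine search: score every grid point, keep the best one, drop points farther than one detection step from it, halve the step, add new points on the finer grid without enlarging the search area, and stop once the step falls below 2 pixels. The function reconstruct_and_score stands for the reconstruction plus plane fit described above and is a placeholder name, as are the refinement details around the kept points.

```python
import numpy as np

def calibrate_offset(grid_points, step, reconstruct_and_score):
    """grid_points: initial (x, y) candidates spaced `step` pixels apart.
    reconstruct_and_score(offset) -> error coefficient for that offset.
    Returns the optimal offset coordinate."""
    points = list(grid_points)
    while True:
        # Operation link: evaluate the error coefficient at every grid point.
        errors = [reconstruct_and_score(p) for p in points]
        best = points[int(np.argmin(errors))]
        if step < 2:                        # step finer than 2 pixels: finished
            return best
        # Drop grid points farther than one detection step from the best point.
        points = [p for p in points
                  if np.hypot(p[0] - best[0], p[1] - best[1]) <= step]
        # Halve the step and add new points on the finer grid around the
        # kept points, without enlarging the overall search area.
        step /= 2.0
        refined = set()
        for x, y in points:
            for dx in (-step, 0.0, step):
                for dy in (-step, 0.0, step):
                    refined.add((x + dx, y + dy))
        points = sorted(refined)
```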
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, but any modifications or equivalent variations made according to the technical spirit of the present invention are within the scope of the present invention as claimed.

Claims (5)

1. A light source position calibration method of a three-dimensional reconstruction device is characterized by comprising the following steps: the three-dimensional reconstruction device comprises a light source group, a camera (2), a controller, a plane template (1) and an image processor, wherein the camera (2) is located in the middle, the light source group comprises a first light source (3), a second light source (4), a third light source (5) and a fourth light source (6) which are respectively located on 4 sides of the camera (2), and the plane template (1) is located above the camera (2) and the light source group;
the light source position calibration method of the three-dimensional reconstruction device comprises a shooting link, a preparation link and an operation link; in the preparation link, the shot plane image is grayed, the light source position parameters are adjusted through a built-in three-dimensional reconstruction algorithm, a group of pictures shot by taking the plane template as an object in the shooting link are utilized to carry out three-dimensional reconstruction on the plane template, and the optimal light source position parameters are determined through the reconstruction result;
the offset coordinate at which the error coefficient obtained in the operation link takes its minimum value is recorded as the optimal coordinate; if the detection step length is larger than 2 pixels after the operation link is finished, the grid points in the detection grid whose distance from the grid point coinciding with the optimal coordinate is larger than the detection step length are deleted; the detection step length is halved and new grid points are added to the detection grid; after the new grid points are added, the distance between adjacent grid points of the detection grid equals the detection step length and the area of the detection grid is not increased; the operation link is then repeated; after the operation link is finished, if the detection step length is less than 2 pixels, the optimal coordinate is recorded as the optimal offset coordinate;
the shooting link is as follows: the controller lights the light sources in the light source group one after another and, immediately after each light source is lit, controls the camera to photograph the plane template placed in front of the light source group and the camera; the resulting group of photos is called a first plane image;
the preparation link establishes a three-dimensional rectangular coordinate system by taking the plane where the first plane image is located as the xy plane and the upper left corner of the first plane image as the origin; a detection step length is set in the image processor and a detection grid parallel to the xy plane is established along the x and y axis directions, in which the distance between adjacent grid points equals the detection step length; the z axis of the three-dimensional rectangular coordinate system passes through one of the grid points, the camera position is coplanar with the detection grid, and the projection of the detection grid onto the z = 0 plane lies entirely within the first plane image;
in the operation link, the image processor sets the (x, y) coordinates of the offset coordinate to the (x, y) coordinates of each grid point in the detection grid in turn, and sets the z coordinate of the offset coordinate to the distance between the camera and the plane template; the plane template is reconstructed in three dimensions from the first plane image by the three-dimensional reconstruction algorithm built into the image processor, yielding a first plane model; the first plane model is projected vertically onto the xy plane, and the projection area is divided uniformly into an m × n error grid along the x and y axes; the set of points in the first plane model whose (x, y) coordinates coincide with the error grid points is recorded as a first point cloud;
in the operation link, the image processor fits the first point cloud to a plane in the three-dimensional rectangular coordinate system, called a second plane; the sum of the squared distances from all points of the first point cloud to the second plane is computed and called an error coefficient; the error coefficients obtained when the offset coordinate is located at each grid point of the detection grid are compared, the minimum error coefficient is recorded, and the offset coordinate at which the error coefficient takes its minimum value is recorded as the optimal coordinate;
the center of the light source group coincides with the optimal offset coordinate.
2. The method for calibrating the light source position of the three-dimensional reconstruction device according to claim 1, wherein: the light source group comprises 4 light sources arranged at the midpoints of the 4 sides of a square with side length s; the camera (2) is arranged at the center of the square with its optical axis perpendicular to the square; during shooting, only one of the light sources is lit at a time; the brightness of the individual light sources is the same, and the distances between them can be measured accurately.
3. The method for calibrating the light source position of the three-dimensional reconstruction device according to claim 1, wherein: the image processor calculates the first plane model from the gray-scale information in the first plane image, the distances between the light sources in the light source group, the coordinates of the camera, and the distance between the camera and the plane template.
4. The method for calibrating the light source position of the three-dimensional reconstruction device according to claim 1, wherein: the first plane image comprises 4 photos; graying these 4 photos yields 4 gray-scale maps, and pixels at the same position in the 4 gray-scale maps correspond to the same point in the plane model; each gray-scale map has a size of m × n pixels; a rectangular coordinate system is established with the pixel at the upper left corner of the gray-scale map as the origin, the horizontal rightward direction as the y axis, the vertical downward direction as the x axis, and the direction facing the camera as the positive z axis; a length of 1 mm on the plane template corresponds to r pixels in the gray-scale map, the distance between the camera and the plane template is d mm, and the camera coordinates in the rectangular coordinate system are (m/2, n/2, d·r).
5. The method for calibrating the light source position of the three-dimensional reconstruction device according to claim 4, wherein: in the gray-scale maps, E_kij denotes the gray matrix of the kth gray-scale map, k = 1, 2, 3, 4; P_ij denotes the coordinates of the pixel in the ith row and jth column in the rectangular coordinate system; L_kij denotes the direction vector from the kth light source to the pixel in the ith row and jth column of the gray-scale map, k = 1, 2, 3, 4; and N_ij denotes the surface normal vector, in the rectangular coordinate system, of the point on the plane template corresponding to the pixel in the ith row and jth column of the gray-scale map; according to the Lambertian reflection law, when the light reflected by the plane template is purely diffuse, the following equation holds for any point of the plane template surface:

E_kij = ρ_ij · (L_kij · N_ij), k = 1, 2, 3, 4,

where ρ_ij is the surface reflectance (albedo) at that point; solving this equation yields N_ij, from which the gradient of any point on the surface of the plane template is obtained; integrating the gradient gives the z coordinate of any point on the surface of the plane template, that is, the first plane model is restored.
CN201810507975.1A 2018-05-24 2018-05-24 Light source position calibration method of three-dimensional reconstruction device Expired - Fee Related CN109059800B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810507975.1A CN109059800B (en) 2018-05-24 2018-05-24 Light source position calibration method of three-dimensional reconstruction device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810507975.1A CN109059800B (en) 2018-05-24 2018-05-24 Light source position calibration method of three-dimensional reconstruction device

Publications (2)

Publication Number Publication Date
CN109059800A CN109059800A (en) 2018-12-21
CN109059800B true CN109059800B (en) 2020-04-24

Family

ID=64820218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810507975.1A Expired - Fee Related CN109059800B (en) 2018-05-24 2018-05-24 Light source position calibration method of three-dimensional reconstruction device

Country Status (1)

Country Link
CN (1) CN109059800B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014112043A (en) * 2012-12-05 2014-06-19 Ihi Corp Stereoscopic shape recognition device and light source position estimation method for stereoscopic shape recognition device
CN107194881A (en) * 2017-03-23 2017-09-22 南京汇川图像视觉技术有限公司 A kind of removal image reflex reflector and method based on photometric stereo
CN107657604A (en) * 2017-09-06 2018-02-02 西安交通大学 A kind of polishing scratch three-dimensional appearance original position acquisition methods based near field non-standard light source
CN107677216A (en) * 2017-09-06 2018-02-09 西安交通大学 A kind of multiple abrasive particle three-dimensional appearance synchronous obtaining methods based on photometric stereo vision
CN207365904U (en) * 2017-06-01 2018-05-15 深度创新科技(深圳)有限公司 Three-dimensional reconstruction apparatus and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2822273B1 (en) * 2001-03-13 2003-07-11 Ge Med Sys Global Tech Co Llc CALIBRATION PROCESS FOR THE RECONSTRUCTION OF THREE-DIMENSIONAL MODELS FROM IMAGES OBTAINED BY TOMOGRAPHY

Also Published As

Publication number Publication date
CN109059800A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
JP5999615B2 (en) Camera calibration information generating apparatus, camera calibration information generating method, and camera calibration information generating program
CN108562250B (en) Keyboard keycap flatness rapid measurement method and device based on structured light imaging
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
CN111192235B (en) Image measurement method based on monocular vision model and perspective transformation
CN108090896B (en) Wood board flatness detection and machine learning method and device and electronic equipment
CN106780623A (en) A kind of robotic vision system quick calibrating method
CN1158684A (en) Method and appts. for transforming coordinate systems in an automated video monitor alignment system
CN107505324A (en) 3D scanning means and scan method based on binocular collaboration laser
CN109297433A (en) 3D vision guide de-stacking measuring system and its control method
CN108063940B (en) Correction system and method for human eye tracking naked eye 3D display system
CN111709985A (en) Underwater target ranging method based on binocular vision
CN114283203A (en) Calibration method and system of multi-camera system
US9204130B2 (en) Method and system for creating a three dimensional representation of an object
CN111105467B (en) Image calibration method and device and electronic equipment
CN115187612A (en) Plane area measuring method, device and system based on machine vision
CN114998448A (en) Method for calibrating multi-constraint binocular fisheye camera and positioning space point
CN112712566B (en) Binocular stereo vision sensor measuring method based on structure parameter online correction
CN100359286C (en) Method for improving laser measuring accuracy in image processing
CN115371577A (en) Method for extracting surface luminous points of transparent piece and method for measuring surface shape
CN113052974B (en) Method and device for reconstructing three-dimensional surface of object
CN109059800B (en) Light source position calibration method of three-dimensional reconstruction device
CN116958218A (en) Point cloud and image registration method and equipment based on calibration plate corner alignment
CN114648588A (en) Lens calibration and correction method based on neural network
CN112595262A (en) Binocular structured light-based high-light-reflection surface workpiece depth image acquisition method
CN111986266A (en) Photometric stereo light source parameter calibration method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200424