CN110686593B - Method for measuring relative position relation of image sensors in spliced focal plane - Google Patents

Method for measuring relative position relation of image sensors in spliced focal plane

Info

Publication number
CN110686593B
CN110686593B (application CN201910847311.4A)
Authority
CN
China
Prior art keywords
image sensor
interference fringe
star
transverse
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910847311.4A
Other languages
Chinese (zh)
Other versions
CN110686593A (en)
Inventor
曹阳
李保权
李海涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Space Science Center of CAS
Original Assignee
National Space Science Center of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Space Science Center of CAS filed Critical National Space Science Center of CAS
Priority to CN201910847311.4A priority Critical patent/CN110686593B/en
Publication of CN110686593A publication Critical patent/CN110686593A/en
Application granted granted Critical
Publication of CN110686593B publication Critical patent/CN110686593B/en

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 - Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a method for measuring the relative position relationship of image sensors in a spliced focal plane, which comprises the following steps: acquiring a group of transverse dynamic interference fringe images acquired by each image sensor; obtaining a wave vector and an initial phase of a transverse interference fringe on each image sensor; calculating a relative rotation angle between each image sensor and the reference image sensor; acquiring a group of longitudinal dynamic interference fringe images acquired by each image sensor; obtaining a wave vector and an initial phase of a longitudinal interference fringe on each image sensor; acquiring star map images shot by each image sensor, and calculating rough measurement results of relative positions among the image sensors according to the relative rotation angle; and calculating the final result of the relative position between the image sensors by using the transverse interference fringe wave vector and the initial phase of each image sensor, the longitudinal interference fringe wave vector and the initial phase of each image sensor and the rough measurement result of the relative position between the image sensors.

Description

Method for measuring relative position relation of image sensors in spliced focal plane
Technical Field
The invention relates to the technical field of astronomy and space, in particular to space large-scale area-array focal plane stitching technology, and specifically to a method for measuring the relative positional relationship of the image sensors in a stitched focal plane.
Background
The continuous development of space remote sensing technology keeps raising the technical requirements on space cameras. The field of view is an important indicator of a space camera, and a longer imaging focal plane is required to obtain a larger field of view. Because the length of a single-chip image sensor is limited, the prevailing solution is to splice multiple image sensors into a larger stitched focal plane to enlarge the field of view; imaging systems that adopt focal plane stitching include JWST, GEO-Africa, Gaia, and GEO-OCULUS.
In some applications, such as high-precision astrometry, the position of a star point image on the focal plane must be measured accurately, which in turn requires accurate knowledge of the positional relationship between the different image sensors in the stitched focal plane. The precision of the mechanical focal plane stitching process is roughly at the pixel level, whereas high-precision astrometry requires the image sensor positions to be measured at the milli-pixel or even micro-pixel level.
The traditional method for measuring a stitched focal plane is laser ranging, which conveniently measures the relative positions of the image sensors; its precision, however, is limited by factors such as the accuracy of the laser range finder and the perpendicularity of the ranging beam, and is difficult to improve further. Another method is interferometry, which offers very high measurement accuracy, but on a stitched focal plane the image sensors are separated by gaps, so their pixels are not continuous, and the resulting phase ambiguity prevents conventional interferometry from directly measuring the distance between the image sensors.
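The phase-ambiguity problem can be made concrete with a small numerical sketch (ours, not the patent's; the fringe period value is assumed): any two sensor gaps that differ by a whole number of fringe periods produce exactly the same wrapped phase.

```python
import numpy as np

# A sinusoidal fringe of period P yields the same wrapped phase for any
# two gaps that differ by an integer number of periods, so interferometry
# alone fixes the inter-sensor gap only modulo P.
P = 50.0                    # fringe period in micrometres (assumed value)
k = 2 * np.pi / P           # fringe wave number

for gap in (120.0, 170.0, 220.0):      # candidate gaps, one period apart
    phase = np.mod(k * gap, 2 * np.pi)
    print(f"gap = {gap:6.1f} um -> wrapped phase = {phase:.4f} rad")
# All three print 2.5133 rad: the phase cannot tell these gaps apart,
# which is why the method below adds star images to select the period.
```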
Disclosure of Invention
The invention aims to overcome the low precision of conventional methods for measuring the relative positional relationship of the image sensors in a stitched focal plane, and provides a method for measuring this relationship.
In order to achieve the above object, the present invention provides a method for measuring a relative position relationship of image sensors in a stitched focal plane, the method comprising:
acquiring a group of transverse dynamic interference fringe images acquired by each image sensor;
processing each group of transverse dynamic interference fringe images to obtain a wave vector and an initial phase of a transverse interference fringe on each image sensor;
calculating the relative rotation angle between each image sensor and a reference image sensor by using the wave vector of the transverse interference fringe on each image sensor and selecting one image sensor as the reference image sensor;
acquiring a group of longitudinal dynamic interference fringe images acquired by each image sensor;
processing each group of longitudinal dynamic interference fringe images to obtain a wave vector and an initial phase of a longitudinal interference fringe on each image sensor;
acquiring star map images shot by each image sensor through an optical system, and calculating rough measurement results of relative positions among the image sensors according to the relative rotation angle;
and calculating the final result of the relative positions between the image sensors by using the transverse fringe wave vector and initial phase of each image sensor, the longitudinal fringe wave vector and initial phase of each image sensor, and the rough measurement result of the relative positions between the image sensors.
As an improvement of the above method, the method further comprises:
generating transverse dynamic interference fringes on the focal plane by a difference-frequency laser beam interference method, controlling all image sensors on the focal plane to expose at a fixed frame frequency for a period of time, and acquiring a group of transverse dynamic interference fringe images with all the image sensors;

the fringe spacing of the transverse dynamic interference fringes is at least twice the pixel spacing; at the same time, the moving speed $v_1$ of the transverse dynamic interference fringes on the image sensor surface, the pixel size $d$, and the frame frequency $f$ satisfy a relationship of the form:

[equation not reproduced in the source]

As an improvement of the above method, processing each group of transverse dynamic interference fringe images to obtain the wave vector and initial phase of the transverse interference fringe on each image sensor specifically comprises:

under dynamic fringe illumination, the output of the image sensor is:

$$I_{mn}(t) = I_{DC} + I_{AC}\cos\!\left(k_x m d + k_y n d + \Delta\omega\, t + \varphi_0\right) \qquad (1)$$

where $I_{mn}(t)$ is the output value of pixel $(m, n)$ at time $t$, $(k_x, k_y)$ are the transverse and longitudinal components of the wave vector of the interference fringe image, $I_{DC}$ and $I_{AC}$ are the magnitudes of the direct-current and alternating-current components of the interference fringe, $\Delta\omega$ is the frequency difference between the difference-frequency laser beams, and $\varphi_0$ is the initial phase;

fitting each group of transverse interference fringe images with formula (1) yields $(k_{x,h}, k_{y,h})$ and $\varphi_{0,h}$ for each group of data, i.e., the wave vector and initial phase of the transverse interference fringe on the corresponding image sensor; $(k^{(i)}_{x,h}, k^{(i)}_{y,h})$ and $\varphi^{(i)}_{0,h}$ denote the wave vector and initial phase of the transverse interference fringe on the $i$-th image sensor, where $1 \le i \le M$ and $M$ is the number of image sensors.

As an improvement of the above method, calculating the relative rotation angle between each image sensor and the reference image sensor from the wave vector of the transverse interference fringe on each image sensor, with one image sensor (optionally chosen) as the reference image sensor, specifically comprises:

taking the $m$-th image sensor as the reference image sensor, calculating the rotation angle $\theta^{(i)}$ of the $i$-th image sensor relative to the $m$-th image sensor as:

$$\theta^{(i)} = \arctan\frac{k^{(i)}_{y,h}}{k^{(i)}_{x,h}} - \arctan\frac{k^{(m)}_{y,h}}{k^{(m)}_{x,h}} \qquad (2)$$

As an improvement of the above method, the method further comprises: generating longitudinal dynamic interference fringes on the focal plane by the difference-frequency laser beam interference method, controlling all image sensors on the focal plane to expose at a fixed frame frequency for a period of time, and acquiring a group of longitudinal dynamic interference fringe images with all the image sensors;

the fringe spacing of the longitudinal dynamic interference fringes is at least twice the pixel spacing; the moving speed $v_2$ of the longitudinal dynamic interference fringes on the image sensor surface, the pixel size $d$, and the frame frequency $f$ satisfy a relationship of the form:

[equation not reproduced in the source]

As an improvement of the above method, processing each group of longitudinal dynamic interference fringe images to obtain the wave vector and initial phase of the longitudinal interference fringe on each image sensor specifically comprises:

under dynamic fringe illumination, the output of the image sensor is again given by formula (1);

fitting each group of longitudinal interference fringe images with formula (1) yields $(k_{x,v}, k_{y,v})$ and $\varphi_{0,v}$ for each group of data, i.e., the wave vector and initial phase of the longitudinal interference fringe on the corresponding image sensor; $(k^{(i)}_{x,v}, k^{(i)}_{y,v})$ and $\varphi^{(i)}_{0,v}$ denote the wave vector and initial phase of the longitudinal interference fringe on the $i$-th image sensor, where $1 \le i \le M$ and $M$ is the number of image sensors.

As an improvement of the above method, a star map image shot by each image sensor through an optical system is acquired, and the rough measurement result of the relative positions between the image sensors is calculated from the relative rotation angles;

the position of each image sensor is expressed as the coordinates of its (0, 0) pixel in the reference image sensor coordinate system;

two star points of the star map image on the reference image sensor and one star point on the star map image of the $i$-th image sensor are taken, and the centroid coordinates of these three star points on their respective image sensors are calculated, with $(x_c, y_c)$ denoting the centroid coordinates, on the $i$-th image sensor, of the star point of the $i$-th image sensor's star map image;

the coordinates $(x_r, y_r)$ of the star point of the $i$-th image sensor's star map image in the reference image sensor coordinate system are calculated from the centroid coordinates of the two star points of the star map image on the reference image sensor, combined with the parameters of the optical system and star catalogue data;

the coordinates $(x^{(i)}_p, y^{(i)}_p)$ of the (0, 0) pixel of the $i$-th image sensor's output image in the reference image sensor coordinate system are calculated as:

$$x^{(i)}_p = x_r - x_c\cos\theta^{(i)} + y_c\sin\theta^{(i)}, \qquad y^{(i)}_p = y_r - x_c\sin\theta^{(i)} - y_c\cos\theta^{(i)}$$

$(x^{(i)}_p, y^{(i)}_p)$ is then the rough measurement result of the relative position of the $i$-th image sensor with respect to the reference image sensor;

the rough measurement result of the relative position of the $i$-th image sensor with respect to the $j$-th image sensor is $(x^{(i)}_p - x^{(j)}_p,\; y^{(i)}_p - y^{(j)}_p)$, where $(x^{(j)}_p, y^{(j)}_p)$ is the rough measurement result of the relative position of the $j$-th image sensor with respect to the reference image sensor, $1 \le j \le M$, $j \ne i$.

As an improvement of the above method, calculating the coordinates $(x_r, y_r)$ of the star point of the $i$-th image sensor's star map image in the reference image sensor coordinate system from the centroid coordinates of the two star points of the star map image on the reference image sensor, combined with the parameters of the optical system and star catalogue data, specifically comprises:

from the centroid coordinates of the two star points of the star map image on the reference image sensor, obtaining the 3-dimensional coordinates of the two star points in the optical system body coordinate system with the backward calibration model of the imaging system:

$$(x_{b1}, y_{b1}, z_{b1}) = f_{inv}(x_{n1}, y_{n1}; \Phi_1), \qquad (x_{b2}, y_{b2}, z_{b2}) = f_{inv}(x_{n2}, y_{n2}; \Phi_1) \qquad (3)$$

where $(x_{n1}, y_{n1})$ are the centroid coordinates of the first star point on the reference image sensor and $(x_{b1}, y_{b1}, z_{b1})$ are the 3-dimensional coordinates of the first star point in the optical system body coordinate system; $(x_{n2}, y_{n2})$ are the centroid coordinates of the second star point on the reference image sensor and $(x_{b2}, y_{b2}, z_{b2})$ are the 3-dimensional coordinates of the second star point in the optical system body coordinate system; $f_{inv}$ is the backward calibration model of the imaging system consisting of the optical system and the reference image sensor; $\Phi_1$ is a parameter of the backward calibration model;

using the 3-dimensional coordinates $(x_{b1}, y_{b1}, z_{b1})$ and $(x_{b2}, y_{b2}, z_{b2})$ of the first and second star points in the optical system body coordinate system, together with the coordinates of the two star points in the celestial coordinate system looked up in the star catalogue, calculating the direction cosine matrix $A_{dcm}$ from the celestial coordinate system to the optical system body coordinate system;

looking up the coordinates $(x_{cat}, y_{cat}, z_{cat})$ of the star point on the $i$-th image sensor in the celestial coordinate system, and calculating the coordinates of this star in the optical system body coordinate system with $A_{dcm}$:

$$(x_b, y_b, z_b)^T = A_{dcm}\,(x_{cat}, y_{cat}, z_{cat})^T \qquad (4)$$

calculating the coordinates $(x_r, y_r)$ of the star point on the $i$-th image sensor in the reference image sensor coordinate system with the forward calibration model of the imaging system:

$$(x_r, y_r) = f_{for}(x_b, y_b, z_b; \Phi_2) \qquad (5)$$

where $f_{for}$ is the forward calibration model and $\Phi_2$ is a parameter of the forward calibration model.

As an improvement of the above method, calculating the final result of the relative positions between the image sensors by using the transverse fringe wave vector and initial phase of each image sensor, the longitudinal fringe wave vector and initial phase of each image sensor, and the rough measurement result of the relative positions between the image sensors, specifically comprises:

the final result $(\hat{x}^{(i)}_p, \hat{y}^{(i)}_p)$ of the relative position of the $i$-th image sensor with respect to the reference image sensor is:

[equation not reproduced in the source]

where the integer $n$ is found by minimizing the distance between $(\hat{x}^{(i)}_p, \hat{y}^{(i)}_p)$ and the rough measurement result $(x^{(i)}_p, y^{(i)}_p)$:

$$n = \arg\min_n \left\|\left(\hat{x}^{(i)}_p(n),\, \hat{y}^{(i)}_p(n)\right) - \left(x^{(i)}_p,\, y^{(i)}_p\right)\right\|$$

finally, substituting the obtained integer $n$ into the formula yields $(\hat{x}^{(i)}_p, \hat{y}^{(i)}_p)$, the final result of the relative position of the $i$-th image sensor with respect to the reference image sensor;

the final result of the relative position of the $i$-th image sensor with respect to the $j$-th image sensor is $(\hat{x}^{(i)}_p - \hat{x}^{(j)}_p,\; \hat{y}^{(i)}_p - \hat{y}^{(j)}_p)$, where $(\hat{x}^{(j)}_p, \hat{y}^{(j)}_p)$, $1 \le j \le M$, $j \ne i$, is the final result of the relative position of the $j$-th image sensor with respect to the reference image sensor.
The invention has the following advantages:
1. The method uses star point images for auxiliary calibration and interferometry to measure the relative positions of the image sensors in the stitched focal plane, and can accurately measure the relative positional relationships between the image sensors;
2. The method has high measurement precision, supports on-orbit real-time measurement, and can meet the needs of high-precision astrometry and related fields.
Drawings
FIG. 1 is a flow chart of the method of the present invention for measuring the relative positional relationship of the image sensors in a stitched focal plane.
Detailed Description
The technical solution of the present invention will be explained in detail below.
As shown in fig. 1, the present invention provides a method for measuring a relative position relationship of image sensors in a stitched focal plane, comprising the following steps:
step 1), generating transverse dynamic interference fringes on a focal plane by using a difference frequency laser beam interference method, controlling all image sensors on the focal plane to be exposed at a fixed frame frequency within a period of time, and acquiring a group of transverse dynamic interference fringe images by each image sensor;
wherein the fringe spacing of the transverse dynamic interference fringes is at least twice the pixel spacing; at the same time, the moving speed $v_1$ of the transverse dynamic interference fringes on the image sensor surface, the pixel size $d$, and the frame frequency $f$ should satisfy a relationship of the form:

[equation not reproduced in the source]
step 2), processing each group of the transverse dynamic interference fringe images obtained in the step 1) to obtain a wave vector and an initial phase of the transverse interference fringe on each image sensor;
the output of the image sensor under dynamic fringe illumination can be described by the following formula:
$$I_{mn}(t) = I_{DC} + I_{AC}\cos\!\left(k_x m d + k_y n d + \Delta\omega\, t + \varphi_0\right) \qquad (1)$$

where $I_{mn}(t)$ is the output value of pixel $(m, n)$ at time $t$, $(k_x, k_y)$ are the transverse and longitudinal components of the wave vector of the interference fringe image, $I_{DC}$ and $I_{AC}$ are the magnitudes of the direct-current and alternating-current components of the interference fringe, $\Delta\omega$ is the frequency difference between the difference-frequency laser beams, and $\varphi_0$ is the initial phase.

Fitting each group of transverse interference fringe images with formula (1) yields $(k_x, k_y)$ and $\varphi_0$ for each group of data, i.e., the wave vector and initial phase of the transverse interference fringe on the corresponding image sensor; $(k^{(i)}_{x,h}, k^{(i)}_{y,h})$ and $\varphi^{(i)}_{0,h}$ denote the wave vector and initial phase of the transverse interference fringe on the $i$-th image sensor.
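As an illustration of this fitting step, the following sketch recovers the fringe wave vector and initial phase from a stack of frames by nonlinear least squares. It is a minimal sketch, not the patent's implementation: the function name `fit_fringe`, the use of scipy, and the FFT-based initial guess are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_fringe(frames, times, d, dw):
    """Fit the dynamic-fringe model of formula (1),
    I_mn(t) = I_DC + I_AC*cos(kx*m*d + ky*n*d + dw*t + phi0),
    to a stack of frames (T, H, W) exposed at known times (T,).
    d: pixel size; dw: frequency difference of the two laser beams."""
    T, H, W = frames.shape
    m, n = np.meshgrid(np.arange(W), np.arange(H))  # column, row indices

    def residual(p):
        I_dc, I_ac, kx, ky, phi0 = p
        model = I_dc + I_ac * np.cos(
            kx * m * d + ky * n * d + dw * times[:, None, None] + phi0)
        return (model - frames).ravel()

    # The cosine model is non-convex, so a decent starting point matters;
    # the FFT peak of the first frame seeds the transverse frequency kx.
    spec = np.abs(np.fft.rfft(frames[0], axis=1)).mean(axis=0)
    kx0 = 2 * np.pi * (np.argmax(spec[1:]) + 1) / (W * d)
    p0 = [frames.mean(), frames.std() * np.sqrt(2), kx0, 0.0, 0.0]
    sol = least_squares(residual, p0)
    I_dc, I_ac, kx, ky, phi0 = sol.x
    return (kx, ky), phi0 % (2 * np.pi)
```

Each sensor's frame stack is fitted independently; one fit yields that sensor's $(k_{x,h}, k_{y,h})$ and $\varphi_{0,h}$.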
Step 3), calculating the relative rotation angle between each image sensor by using the wave vector of the transverse interference fringe on each image sensor obtained in the step 2);
with reference to the 1 st image sensor, the rotation angles of the other image sensors with respect to the 1 st image sensor (which may be an arbitrarily selected one) are obtained.
Calculating the rotation angle theta of the ith image sensor relative to the 1 st image sensor according to the geometrical relation(i)Comprises the following steps:
Figure BDA0002195666420000074
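This step reduces to comparing fringe directions; a minimal sketch, assuming the arctangent-difference relation of formula (2) as reconstructed above:

```python
import numpy as np

def rotation_angle(k_i, k_ref):
    """Relative rotation of sensor i with respect to the reference sensor:
    the same physical fringe is seen by both sensors, so a sensor's
    mounting rotation appears as a rotation of its fitted wave vector."""
    return np.arctan2(k_i[1], k_i[0]) - np.arctan2(k_ref[1], k_ref[0])

# Illustrative wave vectors (rad/um): sensor 2 is rotated ~1.4 mrad.
theta_2 = rotation_angle((0.12560, 0.00035), (0.12560, 0.00017))
print(np.degrees(theta_2))
```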
step 4), generating longitudinal dynamic interference fringes on the focal plane by the difference-frequency laser beam interference method, controlling all image sensors on the focal plane to expose at a fixed frame frequency for a period of time, each image sensor acquiring a group of longitudinal dynamic interference fringe images;

the fringe spacing of the longitudinal dynamic interference fringes is at least twice the pixel spacing. The moving speed $v_2$ of the longitudinal dynamic interference fringes on the CCD surface, the pixel size $d$, and the frame frequency $f$ should satisfy a relationship of the form:

[equation not reproduced in the source]
step 5), processing each group of longitudinal dynamic interference fringe images obtained in step 4) to obtain the wave vector and initial phase of the longitudinal interference fringe on each image sensor;

the wave vector $(k^{(i)}_{x,v}, k^{(i)}_{y,v})$ and initial phase $\varphi^{(i)}_{0,v}$ of the longitudinal interference fringe on each image sensor can be obtained by a method analogous to step 2).

Step 6), installing the optical system and shooting the starry sky to obtain a group of star maps, one image being shot by each image sensor.
Step 7), processing by using the rotation angle between the image sensors obtained in the step 3) and the group of star maps obtained in the step 6) to obtain rough measurement results of the relative positions between the image sensors;
since there is rotation between the image sensors, the position of each image sensor is represented using the coordinates of the position of the (0, 0) pixel on each image sensor on the image sensor 1 coordinate system.
Taking the 2 nd image sensor as an example, its position relative to the 1 st image sensor is calculated. Taking two star points of the star map image on the 1 st image sensor and 1 star point of the star map image on the 2 nd image sensor, calculating the mass center coordinates of the 3 star points on the respective image sensors by using (x)c,yc) The centroid coordinates of 1 star point of the star map image on the 2 nd image sensor are represented on the 2 nd image sensor.
And then, respectively obtaining 3-dimensional coordinates of the 2 stars under the optical system body coordinate system by using the centroid coordinates of the 2 star points on the 1 st image sensor and a backward calibration model of the imaging system. The method for calculating the 3-dimensional coordinate of a star point on an image sensor under the optical system body coordinate system by using the centroid coordinate of the star point is as follows:
according to the centroid coordinates of two star points of the star map image on the 1 st image sensor, the 3-dimensional coordinates of the 2 star points under the optical system body coordinate system are respectively obtained by utilizing a backward calibration model of the imaging system:
Figure BDA0002195666420000081
wherein (x)n1,yn1) As the centroid coordinate of the first star point on the 1 st image sensor, (x)b1,yb1,zb1) Is a 3-dimensional coordinate of a first star point under the optical system body coordinate system; (x)n2,yn2) (x) the centroid coordinate of the second star point on the 1 st image sensorb2,yb2,zb2) Is a 3-dimensional coordinate of a second star point under the optical system body coordinate system; f. ofinvThe system is a backward calibration model of an imaging system consisting of an optical system and a reference image sensor; phi is a1Is a parameter of the backward calibration model;
using the 3-dimensional coordinates (x) of the first star point and the second star point on the 1 st image sensor in the optical system body coordinate systemb1,yb1,zb1) And (x)b2,yb2,zb2) And the coordinates of the 2 star points searched in the star table under the celestial coordinate system can calculate the direction cosine matrix A from the celestial coordinate system to the body coordinate system of the optical system by utilizing the attitude calculation algorithm such as QUEST, TRIAD and the likedcm
Finding out the coordinates (x) of the star point on the 2 nd image sensor in the celestial coordinate systemcat,ycat,zcat) Using the direction cosine from the celestial coordinate system to the optical system body coordinate systemMatrix AdcmAnd calculating the coordinates of the star under the coordinate system of the optical system body:
Figure BDA0002195666420000082
then, the coordinates (x) of the star point on the 2 nd image sensor under the 1 st image sensor coordinate system are calculated by utilizing a forward calibration model of the imaging systemr,yr) The calculation method is as follows:
Figure BDA0002195666420000083
wherein f isforThe method is a forward calibration model of an imaging system consisting of an optical system and a 1 st image sensor. Phi is a2The parameters of the forward calibration model can be obtained by calibrating the imaging system.
From the geometrical relationship, the coordinates (x) of the (0, 0) pixel on the 2 nd image sensor in the 1 st image sensor coordinate system can be calculatedp,yp) Comprises the following steps:
xp=xr-xccosθ(2)+ycsinθ(2),yp=yr-xcsinθ(2)-yccosθ(2) (6)
(xp,yp) That is, the rough position measurement result of the 2 nd image sensor relative to the 1 st image sensor, the rough position measurement results of the other image sensors can be given by the same method, and the rough position measurement result of the ith image sensor relative to the 1 st image sensor is not limited to
Figure BDA0002195666420000091
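For illustration only, the sketch below walks through this step under stated assumptions: `triad` is one of the attitude algorithms the description names, while `f_for` is a hypothetical pinhole projection standing in for the calibrated forward model; all numeric values are made up.

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """Direction cosine matrix A with b = A @ r, from two vector pairs:
    b1, b2 = star directions in the optical-system body frame (via f_inv),
    r1, r2 = the same stars' unit vectors from the star catalogue."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 = t2 / np.linalg.norm(t2)
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(b1, b2) @ frame(r1, r2).T

def f_for(v, focal=500.0):
    """Hypothetical pinhole stand-in for the calibrated forward model:
    body-frame direction -> focal-plane coordinates (mm)."""
    return focal * v[0] / v[2], focal * v[1] / v[2]

# Two reference-sensor stars: catalogue directions r1, r2 and (here, for a
# pretend identity attitude) the matching body-frame directions b1, b2.
r1 = np.array([0.00, 0.00, 1.00])
r2 = np.array([0.10, 0.00, 1.00]); r2 /= np.linalg.norm(r2)
b1, b2 = r1.copy(), r2.copy()

A_dcm = triad(b1, b2, r1, r2)                 # celestial -> body frame

star = np.array([0.05, 0.02, 1.00]); star /= np.linalg.norm(star)
x_r, y_r = f_for(A_dcm @ star)                # formulas (4) and (5)

x_c, y_c, theta2 = 1.20, 0.85, 0.0014         # centroid (mm), rotation (rad)
x_p = x_r - x_c * np.cos(theta2) + y_c * np.sin(theta2)   # formula (6)
y_p = y_r - x_c * np.sin(theta2) - y_c * np.cos(theta2)
print(x_p, y_p)   # coarse position of sensor 2's (0, 0) pixel
```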
And 8) calculating to obtain a precise measurement result of the relative position between the image sensors by using the transverse interference fringe wave vector and the initial phase of each image sensor obtained in the step 2), the longitudinal interference fringe wave vector and the initial phase of each image sensor obtained in the step 5) and the rough measurement result of the relative position between the image sensors obtained in the step 7).
In step 8), the method for calculating the precise measurement result of the relative position between the image sensors is as follows:
according to the geometrical relation, the real coordinate of the pixel of the ith image sensor (0, 0) in the coordinate system of the 1 st image sensor can be calculated by utilizing the transverse interference fringe wave vector and the initial phase of each image sensor and the longitudinal interference fringe wave vector and the initial phase of each image sensor
This real coordinate, denoted $(\hat{x}^{(i)}_p, \hat{y}^{(i)}_p)$, satisfies a relationship of the form:

[equation not reproduced in the source]

where $n$ is an unknown integer. The solution of this equation is:

[equation (7) not reproduced in the source]

The integer $n$ can be found by minimizing the distance between $(\hat{x}^{(i)}_p, \hat{y}^{(i)}_p)$ and the rough result $(x^{(i)}_p, y^{(i)}_p)$:

$$n = \arg\min_n \left\|\left(\hat{x}^{(i)}_p(n),\, \hat{y}^{(i)}_p(n)\right) - \left(x^{(i)}_p,\, y^{(i)}_p\right)\right\|$$

Finally, substituting the obtained integer $n$ into formula (7) yields $(\hat{x}^{(i)}_p, \hat{y}^{(i)}_p)$, the final accurate result of the relative position between the image sensors.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the embodiments, those skilled in the art will understand that various changes may be made and equivalents substituted without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (1)

1. A method of measuring relative positional relationships of image sensors in a stitched focal plane, the method comprising:
acquiring a group of transverse dynamic interference fringe images acquired by each image sensor;
processing each group of transverse dynamic interference fringe images to obtain a wave vector and an initial phase of a transverse interference fringe on each image sensor;
calculating the relative rotation angle between each image sensor and a reference image sensor by using the wave vector of the transverse interference fringe on each image sensor and selecting one image sensor as the reference image sensor;
acquiring a group of longitudinal dynamic interference fringe images acquired by each image sensor;
processing each group of longitudinal dynamic interference fringe images to obtain a wave vector and an initial phase of a longitudinal interference fringe on each image sensor;
acquiring star map images shot by each image sensor through an optical system, and calculating rough measurement results of relative positions among the image sensors according to the relative rotation angle;
calculating the final result of the relative positions between the image sensors by using the transverse fringe wave vector and initial phase of each image sensor, the longitudinal fringe wave vector and initial phase of each image sensor, and the rough measurement result of the relative positions between the image sensors;
the method further comprises the following steps:
generating transverse dynamic interference fringes on a focal plane by using a difference frequency laser beam interference method, controlling all image sensors on the focal plane to be exposed at a fixed frame frequency within a period of time, and acquiring a group of transverse dynamic interference fringe images by all the image sensors;
the fringe spacing of the transverse dynamic interference fringes is at least twice the pixel spacing; at the same time, the moving speed $v_1$ of the transverse dynamic interference fringes on the image sensor surface, the pixel size $d$, and the frame frequency $f$ satisfy a relationship of the form:

[equation not reproduced in the source]
processing each group of transverse dynamic interference fringe images to obtain the wave vector and initial phase of the transverse interference fringe on each image sensor specifically comprises:

under dynamic fringe illumination, the output of the image sensor is:

$$I_{mn}(t) = I_{DC} + I_{AC}\cos\!\left(k_x m d + k_y n d + \Delta\omega\, t + \varphi_0\right) \qquad (1)$$

where $I_{mn}(t)$ is the output value of pixel $(m, n)$ at time $t$, $(k_x, k_y)$ are the transverse and longitudinal components of the wave vector of the interference fringe image, $I_{DC}$ and $I_{AC}$ are the magnitudes of the direct-current and alternating-current components of the interference fringe, $\Delta\omega$ is the frequency difference between the difference-frequency laser beams, and $\varphi_0$ is the initial phase;

fitting each group of transverse interference fringe images with formula (1) yields $(k_{x,h}, k_{y,h})$ and $\varphi_{0,h}$ for each group of data, i.e., the wave vector and initial phase of the transverse interference fringe on the corresponding image sensor; $(k^{(i)}_{x,h}, k^{(i)}_{y,h})$ and $\varphi^{(i)}_{0,h}$ denote the wave vector and initial phase of the transverse interference fringe on the $i$-th image sensor, where $1 \le i \le M$ and $M$ is the number of image sensors;
calculating the relative rotation angle between each image sensor and the reference image sensor by using the wave vector of the transverse interference fringe on each image sensor, with one image sensor selected as the reference image sensor, specifically comprises:

taking the $m$-th image sensor as the reference image sensor, calculating the rotation angle $\theta^{(i)}$ of the $i$-th image sensor relative to the $m$-th image sensor as:

$$\theta^{(i)} = \arctan\frac{k^{(i)}_{y,h}}{k^{(i)}_{x,h}} - \arctan\frac{k^{(m)}_{y,h}}{k^{(m)}_{x,h}} \qquad (2)$$
the method further comprises the following steps: generating longitudinal dynamic interference fringes on a focal plane by using a difference frequency laser beam interference method, controlling all image sensors on the focal plane to be exposed at a fixed frame frequency within a period of time, and acquiring a group of longitudinal dynamic interference fringe images by all the image sensors;
the fringe spacing of the longitudinal dynamic interference fringes is at least twice the pixel spacing; the moving speed $v_2$ of the longitudinal dynamic interference fringes on the image sensor surface, the pixel size $d$, and the frame frequency $f$ satisfy a relationship of the form:

[equation not reproduced in the source]
processing each group of longitudinal dynamic interference fringe images to obtain the wave vector and initial phase of the longitudinal interference fringe on each image sensor specifically comprises:

under dynamic fringe illumination, the output of the image sensor is again given by formula (1);

fitting each group of longitudinal interference fringe images with formula (1) yields $(k_{x,v}, k_{y,v})$ and $\varphi_{0,v}$ for each group of data, i.e., the wave vector and initial phase of the longitudinal interference fringe on the corresponding image sensor; $(k^{(i)}_{x,v}, k^{(i)}_{y,v})$ and the initial phase $\varphi^{(i)}_{0,v}$ denote the wave vector and initial phase of the longitudinal interference fringe on the $i$-th image sensor, where $1 \le i \le M$ and $M$ is the number of image sensors;
acquiring the star map image shot by each image sensor through the optical system, and calculating the rough measurement result of the relative positions between the image sensors from the relative rotation angles, wherein:

the position of each image sensor is expressed as the coordinates of its (0, 0) pixel in the reference image sensor coordinate system;

two star points of the star map image on the reference image sensor and one star point on the star map image of the $i$-th image sensor are taken, and the centroid coordinates of these three star points on their respective image sensors are calculated, with $(x_c, y_c)$ denoting the centroid coordinates, on the $i$-th image sensor, of the star point of the $i$-th image sensor's star map image;

the coordinates $(x_r, y_r)$ of the star point of the $i$-th image sensor's star map image in the reference image sensor coordinate system are calculated from the centroid coordinates of the two star points of the star map image on the reference image sensor, combined with the parameters of the optical system and star catalogue data;

the coordinates $(x^{(i)}_p, y^{(i)}_p)$ of the (0, 0) pixel of the $i$-th image sensor's output image in the reference image sensor coordinate system are calculated as:

$$x^{(i)}_p = x_r - x_c\cos\theta^{(i)} + y_c\sin\theta^{(i)}, \qquad y^{(i)}_p = y_r - x_c\sin\theta^{(i)} - y_c\cos\theta^{(i)}$$

$(x^{(i)}_p, y^{(i)}_p)$ is then the rough measurement result of the relative position of the $i$-th image sensor with respect to the reference image sensor;

the rough measurement result of the relative position of the $i$-th image sensor with respect to the $j$-th image sensor is $(x^{(i)}_p - x^{(j)}_p,\; y^{(i)}_p - y^{(j)}_p)$, where $(x^{(j)}_p, y^{(j)}_p)$ is the rough measurement result of the relative position of the $j$-th image sensor with respect to the reference image sensor, $1 \le j \le M$, $j \ne i$;
calculating the coordinates $(x_r, y_r)$ of the star point of the $i$-th image sensor's star map image in the reference image sensor coordinate system from the centroid coordinates of the two star points of the star map image on the reference image sensor, combined with the parameters of the optical system and star catalogue data, specifically comprises:

from the centroid coordinates of the two star points of the star map image on the reference image sensor, obtaining the 3-dimensional coordinates of the two star points in the optical system body coordinate system with the backward calibration model of the imaging system:

$$(x_{b1}, y_{b1}, z_{b1}) = f_{inv}(x_{n1}, y_{n1}; \Phi_1), \qquad (x_{b2}, y_{b2}, z_{b2}) = f_{inv}(x_{n2}, y_{n2}; \Phi_1) \qquad (3)$$

where $(x_{n1}, y_{n1})$ are the centroid coordinates of the first star point on the reference image sensor and $(x_{b1}, y_{b1}, z_{b1})$ are the 3-dimensional coordinates of the first star point in the optical system body coordinate system; $(x_{n2}, y_{n2})$ are the centroid coordinates of the second star point on the reference image sensor and $(x_{b2}, y_{b2}, z_{b2})$ are the 3-dimensional coordinates of the second star point in the optical system body coordinate system; $f_{inv}$ is the backward calibration model of the imaging system consisting of the optical system and the reference image sensor; $\Phi_1$ is a parameter of the backward calibration model;

using the 3-dimensional coordinates $(x_{b1}, y_{b1}, z_{b1})$ and $(x_{b2}, y_{b2}, z_{b2})$ of the first and second star points in the optical system body coordinate system, together with the coordinates of the two star points in the celestial coordinate system looked up in the star catalogue, calculating the direction cosine matrix $A_{dcm}$ from the celestial coordinate system to the optical system body coordinate system;

looking up the coordinates $(x_{cat}, y_{cat}, z_{cat})$ of the star point on the $i$-th image sensor in the celestial coordinate system, and calculating the coordinates of this star in the optical system body coordinate system with $A_{dcm}$:

$$(x_b, y_b, z_b)^T = A_{dcm}\,(x_{cat}, y_{cat}, z_{cat})^T \qquad (4)$$

calculating the coordinates $(x_r, y_r)$ of the star point on the $i$-th image sensor in the reference image sensor coordinate system with the forward calibration model of the imaging system:

$$(x_r, y_r) = f_{for}(x_b, y_b, z_b; \Phi_2) \qquad (5)$$

where $f_{for}$ is the forward calibration model and $\Phi_2$ is a parameter of the forward calibration model;
calculating the final result of the relative positions between the image sensors by using the transverse fringe wave vector and initial phase of each image sensor, the longitudinal fringe wave vector and initial phase of each image sensor, and the rough measurement result of the relative positions between the image sensors specifically comprises:

the final result $(\hat{x}^{(i)}_p, \hat{y}^{(i)}_p)$ of the relative position of the $i$-th image sensor with respect to the reference image sensor is:

[equation not reproduced in the source]

where the integer $n$ is found by minimizing the distance between $(\hat{x}^{(i)}_p, \hat{y}^{(i)}_p)$ and the rough measurement result $(x^{(i)}_p, y^{(i)}_p)$:

$$n = \arg\min_n \left\|\left(\hat{x}^{(i)}_p(n),\, \hat{y}^{(i)}_p(n)\right) - \left(x^{(i)}_p,\, y^{(i)}_p\right)\right\|$$

finally, substituting the obtained integer $n$ into the formula yields $(\hat{x}^{(i)}_p, \hat{y}^{(i)}_p)$, the final result of the relative position of the $i$-th image sensor with respect to the reference image sensor;

the final result of the relative position of the $i$-th image sensor with respect to the $j$-th image sensor is $(\hat{x}^{(i)}_p - \hat{x}^{(j)}_p,\; \hat{y}^{(i)}_p - \hat{y}^{(j)}_p)$, where $(\hat{x}^{(j)}_p, \hat{y}^{(j)}_p)$, $1 \le j \le M$, $j \ne i$, is the final result of the relative position of the $j$-th image sensor with respect to the reference image sensor.
CN201910847311.4A 2019-09-09 2019-09-09 Method for measuring relative position relation of image sensors in spliced focal plane Active CN110686593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910847311.4A CN110686593B (en) 2019-09-09 2019-09-09 Method for measuring relative position relation of image sensors in spliced focal plane


Publications (2)

Publication Number Publication Date
CN110686593A CN110686593A (en) 2020-01-14
CN110686593B true CN110686593B (en) 2021-03-09

Family

ID=69108863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910847311.4A Active CN110686593B (en) 2019-09-09 2019-09-09 Method for measuring relative position relation of image sensors in spliced focal plane

Country Status (1)

Country Link
CN (1) CN110686593B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110686593B (en) * 2019-09-09 2021-03-09 中国科学院国家空间科学中心 Method for measuring relative position relation of image sensors in spliced focal plane
CN114608803B (en) * 2020-12-08 2024-05-14 中国科学院长春光学精密机械与物理研究所 Pixel overlapping precision testing device for camera focal plane optical seamless splicing

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002202112A (en) * 2000-11-06 2002-07-19 Fujitsu Ltd Shape measuring apparatus
CN102721378A (en) * 2012-06-20 2012-10-10 北京航空航天大学 Three-dimensional mirror object shape measurement system based on sinusoidal stripe projection
CN104796689A (en) * 2015-04-07 2015-07-22 中国科学院空间科学与应用研究中心 Method for computing position deviation of pixels of CCD (charge coupled device)
CN110686593A (en) * 2019-09-09 2020-01-14 中国科学院国家空间科学中心 Method for measuring relative position relation of image sensors in spliced focal plane


Also Published As

Publication number Publication date
CN110686593A (en) 2020-01-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant