CN113156641B - Image space scanning imaging method based on achromatic cascade prism - Google Patents

Image space scanning imaging method based on achromatic cascade prism

Info

Publication number
CN113156641B
Authority
CN
China
Prior art keywords
image
imaging
prism
achromatic
cascade
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110206239.4A
Other languages
Chinese (zh)
Other versions
CN113156641A (en)
Inventor
李安虎 (Li Anhu)
刘兴盛 (Liu Xingsheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202110206239.4A
Publication of CN113156641A
Application granted
Publication of CN113156641B
Legal status: Active

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0012Optical design, e.g. procedures, algorithms, optimisation routines
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/64Imaging systems using optical elements for stabilisation of the lateral and angular position of the image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Studio Devices (AREA)
  • Image Input (AREA)

Abstract

The invention relates to an image space scanning imaging method based on an achromatic cascade prism. The imaging system comprises an optical lens group, an achromatic cascade prism and a camera arranged in sequence along the optical axis. The method comprises the steps of parameter matching and model system construction, establishment of a primary-imaging projection model, scanning motion planning for the achromatic cascade prism, image-space sub-region image acquisition and correction, object-space sub-region image sequence registration, and generation of a large-field-of-view, high-resolution image. Compared with the prior art, the invention uses the rotary motion of the achromatic cascade prism to adjust the boresight for image-space scanning secondary imaging; by combining the inverse solution of the cascade prism with a coarse-to-fine image registration method, a simple and compact system can acquire a large-field-of-view, wide-spectrum, high-resolution sequence of sub-region images, and a flexible and reliable processing method can stitch and fuse large-scale images efficiently and with high quality.

Description

Image space scanning imaging method based on achromatic cascade prism
Technical Field
The invention relates to the field of optical imaging, in particular to an image space scanning imaging method based on an achromatic cascade prism.
Background
In the field of optical imaging, a large field of view and high resolution are generally difficult to achieve simultaneously because of constraints such as the structure of the imaging device and the minimum pixel size. Conventional solutions adopt multi-camera array imaging, single-camera motion imaging, or single-camera scanning imaging, combining the distributed information of multiple cameras, the motion information of a single camera, or the motion information of a scanner to achieve region-by-region acquisition and high-resolution imaging of a large-scale scene. A multi-camera array increases the cost and installation space of the system, and its fixed configuration causes the system to acquire a large amount of redundant information in some applications, so it lacks flexibility and adaptability. Single-camera motion requires a two-dimensional drive mechanism to adjust the pose of the sensor, which increases the complexity and rotational inertia of the motion structure and reduces the dynamic performance and response capability of the system. In contrast, single-camera scanning imaging only requires the motion of some optical elements to effectively change the direction of the camera's imaging boresight, so that the camera can sequentially acquire high-resolution image information of designated areas; it therefore offers good imaging flexibility, dynamic responsiveness, and environmental adaptability.
The following prior studies propose several typical large field-of-view high resolution imaging methods:
In the prior art (Gao Yun et al., An infinitely rotating large-field scanning imaging system and control system, publication No. CN110971788A, published April 7, 2020), two cameras driven by a mechanical structure perform conical rotation to scan a large field-of-view region, while a central camera stares at the centre of the field; the fields of view of the three cameras are finally stitched to generate a complete large-field image. However, this method requires the cooperation of multiple cameras, and scanning imaging is realized by the motion of the cameras themselves, which inevitably increases the complexity and implementation cost of the imaging system and affects its service life and dynamic response characteristics.
In the prior art (Liu Kai et al., A common-aperture dual-channel infrared scanning imaging optical system, Applied Optics, 2012, 33(2): 395-), a common-aperture dual-channel infrared scanning imaging optical system employing a reflective scanning mirror is proposed. However, a reflective scanning mirror generally has a large physical size and a large moment of inertia, and the imaging boresight direction is sensitive to mechanical errors, so it is difficult to apply in many situations with strict compactness requirements or limited installation space.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an image space scanning imaging method based on an achromatic cascade prism.
The purpose of the invention can be realized by the following technical scheme:
an image space scanning imaging method based on an achromatic cascade prism is characterized in that an imaging system comprises an optical lens group, an achromatic cascade prism device and a camera which are sequentially arranged, wherein the optical lens group is used for expanding an imaging field range and projecting scattered light from a wide-range target scene onto a virtual primary image surface; the achromatic cascade prism device is used for changing the direction of an imaging visual axis of the camera so as to capture the imaging light of the primary image surface sequentially and regionally; the camera is used for recording image information under different imaging visual angles and generating a high-resolution regional image sequence; the image space scanning imaging method comprises the following steps:
s1, constructing a parameter matching and model system: combining the field angle and the resolution of the imaging system, determining the optical parameters and the structural parameters of the camera, the achromatic cascade prism and the optical lens group, and constructing an image space scanning imaging model system and a working coordinate system thereof according to the relative pose relationship of the camera, the achromatic cascade prism and the optical lens group;
s2, establishing a primary imaging projection model: according to the structural parameters and the arrangement parameters of the optical lens group, describing the process of multiple refractions of the optical lens group to light rays by using geometrical optics, and establishing an imaging projection model and a space mapping relation of the light rays which are incident to the lens group from an object and then emergent to a primary image surface;
s3, achromatic cascade prism scanning motion planning: determining a subregion division strategy of image space scanning secondary imaging by combining the coverage range of the primary image surface and the transient field range of the camera, and calculating a visual axis pointing angle required by the camera for imaging each subregion, thereby designing a corner change rule of the cascade prism in the visual axis adjusting process;
s4, image area image acquisition correction: when the achromatic cascade prism rotates to the appointed corner positions respectively, triggering the camera to perform secondary imaging on the image space subregion under the pointing direction of the visual axis of the camera, and correcting the image space subregion image into an object space subregion image by combining a reverse ray tracing model and a primary imaging projection model;
s5, registering the object region image sequence: the method comprises the steps that the change relation of the direction of adjacent imaging visual axes is utilized, the overlapped area of two object space sub-area images is positioned in advance, a certain number of characteristic point pairs are extracted and matched from the overlapped area, the perspective transformation matrix of the adjacent images is estimated, and the rough and fine two-stage registration relation of an image sequence is established;
s6, generating a large-view-field high-resolution image: and based on the object space subregion image sequence after accurate registration, processing the intensity information of the adjacent subregion images in the overlapping region by using a linear fusion strategy, and finally splicing to obtain a large-field-of-view high-resolution image formed by all the subregion images.
Further, in step S1, a working coordinate system O-XYZ of the image space scanning imaging model system is established according to the right-hand rule, the origin O is fixed at the optical center position of the camera, the Z axis coincides with the optical axis direction of the camera, the X axis and the Y axis are both orthogonal to the Z axis, and the X axis and the Y axis respectively correspond to the row scanning direction and the column scanning direction of the image sensor in the camera.
Further, in step S2, the vector refraction law is used to describe the propagation of object-space light through each element of the optical lens group in turn, giving the projection model F from object space to the primary image plane:
s_img^(1) = F(s_obj) = ((...((s_obj ∗ n_1) ∗ n_2) ∗ ...) ∗ n_2k)
where the symbol ∗ denotes refraction of the projection ray propagating along the left-hand vector at a surface whose normal is the right-hand vector; s_obj is the object-space ray vector incident on the optical lens group and s_img^(1) is the ray vector projected onto the primary image plane after the successive refractions; n_1, n_2, ..., n_2k are the normal vectors of the lens surfaces traversed in sequence by the primary-imaging light; and k is the number of lens elements in the optical lens group.
Further, the step S3 specifically includes:
S31, calculating the horizontal and vertical angles covered by the primary image plane from the structural and optical parameters of the optical lens group, comparing them with the horizontal and vertical angles of the camera's transient field of view, and dividing the sub-regions of image-space scanning secondary imaging into an n_v × n_h array, where n_v and n_h are the numbers of rows and columns, so that the system can collect all the information on the primary image plane through n_v × n_h sub-region scanning imaging and an overlap region of a certain size exists between all adjacent sub-regions;
S32, according to the sub-region division of image-space scanning secondary imaging, estimating the imaging boresight orientation corresponding to the centre of each sub-region, described by a pitch angle Φ and an azimuth angle Θ, where i and j are the row and column numbers of the sub-region, atan2 is the four-quadrant arctangent with value range (−π, π], and λ_v and λ_h are the coincidence coefficients of adjacent sub-regions in the vertical and horizontal directions;
S33, for the pitch angle and azimuth angle of each image-space scanning sub-region centre, solving the corresponding rotation angles of the achromatic cascade prism with a two-step method so that the camera imaging boresight points to the sub-region centre; the analytic solution gives the rotation angles θ_1 and θ_2 of the two prisms in terms of their rotation-angle difference θ_d and intermediate variables b_1 and c_1, which depend on the wedge angle α and the equivalent refractive index n of the achromatic cascade prism;
S34, given a series of corner data of the achromatic cascade prism, designing the rotational motion rule of the achromatic cascade prism on the principle of the shortest prism motion time, thereby determining the sequence of corners that the cascade prism reaches in succession.
Further, the step S4 specifically includes:
S41, each time the achromatic cascade prism rotates to an expected group of corner positions, triggering the camera through software to capture the image information of the corresponding image-space sub-region under the current imaging boresight pointing;
S42, determining the secondary-imaging ray vector s_img^(2) from the actually acquired image-space sub-region image by reverse ray tracing; combined with the ray vector s_pr^out emerging from the achromatic cascade prism, the corresponding incident ray s_pr^in is obtained by refracting the reverse-traced ray in turn at the prism refracting surfaces, whose normal vectors are taken in the order in which the reverse-traced ray passes them starting from the camera imaging plane;
S43, since in the actual imaging process the light enters the achromatic cascade prism directly after reaching the primary image plane, the primary-imaging projection ray can be determined from the ray incident on the achromatic cascade prism, i.e. s_img^(1) = s_pr^in; the corresponding object-space projection ray vector s_obj is then calculated with the inverse F^(-1) of the primary-imaging projection model, i.e. s_obj = F^(-1)(s_img^(1));
S44, obtaining all secondary-imaging ray vectors from the image-space sub-region image captured by the camera and substituting them into steps S42 and S43 to determine the corresponding object-space projection rays, so that the distorted image-space sub-region image is restored to an undistorted object-space sub-region image.
Further, the step S5 specifically includes:
S51, combining the deflection characteristic of the achromatic cascade prism on the camera imaging boresight, establishing the rotation transformation that recovers the primary-imaging ray vector s_img^(1) from the secondary-imaging ray vector s_img^(2), expressed as:
R(Φ,Θ) = A(Θ) + [I − A(Θ)]·cosΦ + B(Θ)·sinΦ
where I is the third-order identity matrix and the matrices A and B are both functions of the azimuth angle Θ;
S52, determining the relative position of one image within the other from the approximate transformation matrix between adjacent image-space sub-region images, and hence the boundary of their overlap region; taking the sub-region image I_ij in row i, column j and the adjacent sub-region image I_i(j+1) in row i, column j+1 as an example, the boundary of image I_i(j+1) in the coordinate system of image I_ij is obtained by applying the approximate transform to the homogeneous image coordinates p_i(j+1) of each boundary point, giving the transformed homogeneous coordinates p̃_i(j+1) up to a scale factor ω; comparing the boundary positions of the adjacent images I_ij and I_i(j+1) in the coordinate system of I_ij then determines the boundary E_img of their overlap region;
S53, using the primary-imaging projection model to map the overlap-region boundary E_img of the adjacent image-space sub-region images to the overlap-region boundary E_obj of the adjacent object-space sub-region images, i.e. E_obj = F^(-1)(E_img), where E_obj provides the coarse registration constraint for the adjacent object-space sub-region images;
S54, extracting a certain number of image features in the overlap region of the adjacent object-space sub-region images and establishing a feature matching relation between the two images, thereby estimating their projective transformation matrix M; the fine registration relation maps the homogeneous image coordinates K_i(j+1) of the object-space sub-region image in row i, column j+1 to the coordinates K̃_i(j+1) registered to the object-space sub-region image in row i, column j, i.e. K̃_i(j+1) = M·K_i(j+1) up to the scale factor ω.
Further, in step S6, for any two adjacent object-space sub-region images, the intensity information in the overlap region is processed with a linear fusion strategy, i.e. the distances from a point to the centres of the two images are taken as the weights of the fused intensity: for an image point with coordinates (x, y) in the overlap region, the fused image D̃ takes the value ω_ij·D_ij(x, y) + ω_i(j+1)·D_i(j+1)(x, y), where D_ij and D_i(j+1) are the two adjacent object-space sub-region images and the weights ω_ij and ω_i(j+1) vary with the Euclidean distance from the image point to the two image centres, each taking values in [0, 1].
Furthermore, the achromatic cascade prism device comprises a pair of achromatic prisms and their respective rotary drive mechanisms; the two achromatic prisms keep their optical axes aligned with each other and are arranged with either their plane faces or their wedge faces facing each other.
Furthermore, the rotary drive mechanism is driven directly by a torque motor or through a gear, synchronous-belt, or worm-gear transmission.
Furthermore, the camera, the achromatic cascade prism and the optical lens group are all arranged coaxially, and the imaging target surface of the camera is parallel to the plane sides of the two achromatic prisms.
Compared with the prior art, the invention has the following beneficial effects:
1. according to the invention, the achromatic cascade prism and the optical lens group are introduced in front of the single camera, and the object space view field expansion function of the optical lens group and the image space scanning imaging function of the achromatic cascade prism are combined, so that large-range, high-efficiency and wide-spectrum imaging is realized on the basis of ensuring the compactness of the overall structure and the flexibility of moving parts.
2. The invention provides an automatic division strategy of image space scanning imaging sub-areas, and the corresponding rotation angle of the achromatic cascade prism is quickly acquired by using a reverse analysis method, so that the scanning motion of the imaging visual axis of the camera is controlled, the image information of each image space sub-area is captured in the shortest time, and the real-time performance and the adaptability of the whole imaging process are improved.
3. The invention utilizes the vector refraction law and the reverse ray tracing method to establish the vector mapping relation of the object space projection ray, the primary imaging ray and the secondary imaging ray, can correct the actually collected image space scanning image into an undistorted object space image, and overcomes the problem of image degradation caused by introducing a refraction optical element.
4. The invention provides a coarse and fine two-stage image registration method facing cascaded prism scanning imaging, which comprises the steps of firstly positioning the overlapping area of images of adjacent subregions in advance to realize coarse registration, and then estimating a transformation matrix from an image characteristic matching relation to realize fine registration, so that the accuracy and the reliability of the image space scanning image sequence splicing process can be fully ensured.
5. The invention restrains the information fusion process of the image space scanning image sequence in the pre-positioned overlapping area, does not need to cover the whole range of each sub-area image, can greatly reduce the time complexity of fusion operation, and improves the generation efficiency of the large-view-field high-resolution image.
Drawings
Fig. 1 is a schematic composition diagram of an imaging system.
FIG. 2 is a flow chart of an image space scanning imaging method based on an achromatic cascade prism.
Figure 3 is a schematic diagram of the light deflection achieved by an achromatic cascaded prism.
FIG. 4 is a schematic diagram of an image scanning system acquiring an image at a single image plane.
FIG. 5 is a schematic diagram of an image side scanning secondary imaging distortion generation and correction process.
Reference numerals: 1-camera, 21-first achromatic prism, 22-second achromatic prism, 3-optical lens group.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
The embodiment provides an image space scanning imaging method based on an achromatic cascade prism.
As shown in fig. 1, the imaging system includes an optical lens group 3, an achromatic cascade prism apparatus, and a camera 1, which are arranged in this order.
The camera 1 includes an image sensor and a lens, parameters such as a target surface size and a pixel size of the image sensor, a focal length and a depth of field of the lens are determined by a range of a target scene, and a detection band of the image sensor is determined by an attribute of the target scene and is a visible light band or an infrared light band.
The achromatic cascade prism includes a first achromatic prism 21 and a second achromatic prism 22, each of which is composed of a combination of elements made of two different materials (e.g., germanium and silicon, lithium fluoride, zinc sulfide, etc.). The two achromatic prisms keep optical axes aligned with each other, adopt an arrangement form of plane opposite or wedge surface opposite, and simultaneously keep the plane sides of the two prisms parallel to the sensor target surface of the camera 1; the two achromatic prisms are fixed on respective supporting structures in an optical glue bonding mode and are driven by an independent rotating mechanism to realize rotating motion around the optical axis direction; the rotating mechanism adopts the modes of torque motor direct drive or gear drive, synchronous belt drive, worm and gear drive and the like.
The optical lens group 3 comprises a plurality of lens elements in different forms, all the elements meet the coaxial relationship with the camera 1 and the achromatic cascade prism, the optical parameters and the arrangement scheme are designed and matched according to the range requirement of an imaging view field, and the film coating treatment is carried out on the detection waveband of the camera 1 so as to increase the light transmittance.
The imaging system of the embodiment introduces the achromatic cascade prism and the optical lens group in front of the camera, can capture large-range target scene information through the visual field expansion and the primary imaging action of the optical lens group, projects the target scene information onto a primary image surface, collects all information on the primary image surface in different areas through the visual axis adjustment and the secondary imaging action of the achromatic cascade prism, and finally splices to obtain a large-visual-field high-resolution image. Compared with the existing multi-camera imaging system and single-camera imaging system, the image space scanning imaging system of the embodiment does not need the camera body to move in any form, does not introduce a reflecting element sensitive to error disturbance, and can simultaneously meet the performance requirements of structural compactness, imaging field range, image resolution, imaging efficiency, flexibility and the like.
As shown in fig. 2 to 5, the image space scanning imaging method includes the specific steps of:
step S1, parameter matching and model system construction
Determining optical parameters and structural parameters of the camera 1, the first achromatic prism 21, the second achromatic prism 22 and the optical lens group 3 according to requirements of the field angle, the resolution and the like of the imaging system, and constructing an image space scanning imaging model system based on the achromatic cascade prism according to the coaxial arrangement relationship of the three;
and establishing a working coordinate system O-XYZ of the image space scanning imaging model system according to a right-hand rule, wherein an origin O is fixed at the optical center position of the camera 1, a Z axis is overlapped with the optical axis direction of the camera 1, and an X axis and a Y axis are both orthogonal to the Z axis and respectively correspond to the row scanning direction and the column scanning direction of the image sensor.
Step S2, establishing the primary-imaging projection model
According to the structural and arrangement parameters of the optical lens group, the vector refraction law is used to describe the propagation of object-space light through each element of the optical lens group in turn, i.e. the successive refractions produced by the optical lens group are described with geometrical optics, and the projection model F of the light incident from object space onto the lens group and emerging onto the primary image plane is established:
s_img^(1) = F(s_obj) = ((...((s_obj ∗ n_1) ∗ n_2) ∗ ...) ∗ n_2k)
where the symbol ∗ denotes refraction of the projection ray propagating along the left-hand vector at a surface whose normal is the right-hand vector; s_obj is the object-space ray vector incident on the optical lens group and s_img^(1) is the ray vector projected onto the primary image plane after the successive refractions; n_1, n_2, ..., n_2k are the normal vectors of the lens surfaces traversed in sequence by the primary-imaging light; and k is the number of lens elements in the optical lens group.
Step S3, cascade prism scanning motion planning
Step S31, the horizontal and vertical angles covered by the primary image plane are calculated from the structural and optical parameters of the optical lens group and compared with the horizontal and vertical angles of the camera's transient field of view; the sub-regions of image-space scanning secondary imaging are accordingly divided into a 4 × 4 array, ensuring that the system can collect all the information on the primary image plane through the sub-region scanning imaging and that an overlap region of a certain size exists between all adjacent sub-regions.
Step S32, according to the sub-region division of image-space scanning secondary imaging, the imaging boresight orientation corresponding to the centre of each sub-region is estimated and described by a pitch angle Φ and an azimuth angle Θ, where i and j are the row and column numbers of the sub-region, atan2 is the four-quadrant arctangent with value range (−π, π], n_v = 4 and n_h = 4 are the numbers of rows and columns of the sub-region division, and λ_v = 0.15 and λ_h = 0.15 are the vertical and horizontal coincidence coefficients of adjacent sub-regions.
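As a rough illustration of how such boresight orientations could be computed, the following small-angle sketch derives a pitch and azimuth for each sub-region centre from the transient field of view and the coincidence coefficients; the indexing convention and the helper name subregion_boresight are assumptions, and the patent's exact expressions for Φ and Θ are not reproduced here:

```python
import numpy as np

def subregion_boresight(i, j, n_v, n_h, fov_v, fov_h, lam_v, lam_h):
    """Approximate boresight (pitch Phi, azimuth Theta) of the centre of
    sub-region (i, j) in an n_v x n_h grid, small-angle approximation.
    fov_v / fov_h: camera transient field of view (rad);
    lam_v / lam_h: coincidence (overlap) coefficients of adjacent sub-regions."""
    # angular step between neighbouring sub-region centres
    step_v = (1.0 - lam_v) * fov_v
    step_h = (1.0 - lam_h) * fov_h
    # offsets of this centre from the optical axis (grid centred on the axis)
    dy = (i - (n_v + 1) / 2.0) * step_v
    dx = (j - (n_h + 1) / 2.0) * step_h
    pitch = np.hypot(dx, dy)            # deflection away from the optical axis
    azimuth = np.arctan2(dy, dx)        # direction of that deflection, in (-pi, pi]
    return pitch, azimuth
```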
Step S33, for the pitch angle and azimuth angle of each image-space scanning sub-region centre, the corresponding cascade-prism rotation angles are solved with a two-step method so that the camera imaging boresight points to the sub-region centre; the analytic solution gives the rotation angles θ_1 and θ_2 of the two prisms in terms of their rotation-angle difference θ_d and intermediate variables b_1 and c_1, which depend on the wedge angle α and the equivalent refractive index n of the achromatic cascade prism. In this embodiment the wedge angle of the achromatic cascade prism is 5° and the equivalent refractive index is 3.
Step S34, the 4 × 4 groups of corner data of the achromatic cascade prism are given, the rotational motion rule is designed on the principle of the shortest prism motion time, and the rotation-angle sequences {θ_1}_ij and {θ_2}_ij that the cascade prism reaches in succession are determined.
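One simple heuristic for the shortest-motion-time ordering is a greedy nearest-neighbour pass over the target corner pairs; this is only an assumption about how such an ordering might be produced, not the patent's planning rule:

```python
import numpy as np

def order_corner_sequence(targets, start=(0.0, 0.0)):
    """Greedy ordering of the (theta_1, theta_2) targets so that the total
    prism travel time stays small. Assumes both prisms rotate simultaneously
    at the same speed, so the cost of one move is the larger of the two
    angular travels (shortest way around the circle)."""
    def ang_dist(a, b):
        d = abs(a - b) % (2 * np.pi)
        return min(d, 2 * np.pi - d)

    remaining = list(targets)
    current = tuple(start)
    ordered = []
    while remaining:
        costs = [max(ang_dist(current[0], t[0]), ang_dist(current[1], t[1]))
                 for t in remaining]
        k = int(np.argmin(costs))
        current = remaining.pop(k)
        ordered.append(current)
    return ordered
```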
Step S4, image-space sub-region image acquisition and correction
Step S41, the achromatic cascade prism is controlled to rotate to the expected rotation-angle positions {θ_1}_ij and {θ_2}_ij, and software triggers the camera to capture the image information of the corresponding image-space sub-region under the current imaging boresight pointing.
Step S42, the secondary-imaging ray vector s_img^(2) is determined from the actually acquired image-space sub-region image by reverse ray tracing; combined with the ray vector s_pr^out emerging from the achromatic cascade prism, the corresponding incident ray s_pr^in is obtained by refracting the reverse-traced ray in turn at the prism refracting surfaces, whose normal vectors are taken in the order in which the reverse-traced ray passes them starting from the camera imaging plane.
Step S43, since in the actual imaging process the light enters the achromatic cascade prism directly after reaching the primary image plane, the primary-imaging projection ray can be determined from the ray incident on the cascade prism, i.e. s_img^(1) = s_pr^in; the corresponding object-space projection ray vector s_obj is then calculated with the inverse F^(-1) of the primary-imaging projection model, i.e. s_obj = F^(-1)(s_img^(1)).
Step S44, all secondary-imaging ray vectors are obtained from the image-space sub-region images captured by the camera and substituted into steps S42 and S43 to determine the corresponding object-space projection rays, so that the distorted image-space sub-region images {I_img}_ij are restored to undistorted object-space sub-region images {I_obj}_ij.
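Once the backward trace of steps S42-S43 has been evaluated for every output pixel, the correction of step S44 amounts to a pixel remapping; the OpenCV sketch below shows only that final remapping, and building the map_x/map_y lookup tables from the ray models is left system-specific:

```python
import cv2
import numpy as np

def correct_subregion(img_distorted, map_x, map_y):
    """Remap a distorted image-space sub-region image onto an undistorted
    object-space grid. map_x / map_y hold, for every output (object-space)
    pixel, the source pixel coordinates obtained by tracing its ray backwards
    through the cascade prism and then through the inverse of the
    primary-imaging projection model (steps S42-S43)."""
    return cv2.remap(img_distorted,
                     map_x.astype(np.float32),
                     map_y.astype(np.float32),
                     interpolation=cv2.INTER_LINEAR)
```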
Step S5, object-space sub-region image sequence registration
Step S51, combining the deflection characteristic of the achromatic cascade prism on the camera imaging boresight, a rotation transformation is established that recovers the primary-imaging ray vector s_img^(1) from the secondary-imaging ray vector s_img^(2), expressed as:
R(Φ,Θ) = A(Θ) + [I − A(Θ)]·cosΦ + B(Θ)·sinΦ
where I is the third-order identity matrix and the matrices A and B are both functions of the azimuth angle Θ.
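If, as an assumption, A(Θ) is taken to be the outer product k·kᵀ and B(Θ) the cross-product matrix of the unit axis k = (−sinΘ, cosΘ, 0), the expression above is Rodrigues' rotation formula and can be evaluated as follows:

```python
import numpy as np

def boresight_rotation(phi, theta):
    """R(Phi, Theta) = A + (I - A) cos(Phi) + B sin(Phi), assuming
    A = k k^T and B = [k]_x for the unit axis k = (-sin(Theta), cos(Theta), 0);
    the patent defines A and B only as functions of the azimuth Theta."""
    k = np.array([-np.sin(theta), np.cos(theta), 0.0])
    A = np.outer(k, k)
    B = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    I = np.eye(3)
    return A + (I - A) * np.cos(phi) + B * np.sin(phi)

# Illustrative use: rotate a secondary-imaging ray back to the
# primary-imaging direction, s_primary = R @ s_secondary.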
step S52, determining the relative position of one image in the other image according to the approximate transformation matrix between the adjacent subarea images on the image side, thereby determining the boundary of the overlapping area of the two images; in the ith row and the thj columns of sub-area images I ij And the adjacent sub-area image I of the ith row and the (j + 1) th column i(j+1) As an example, image I i(j+1) Is in the image I ij Can be expressed as:
Figure BDA0002950813950000109
wherein is p i(j+1) Representing an image I i(j+1) Homogeneous image coordinates of any point on the boundary,
Figure BDA00029508139500001010
for converting it into an image I ij The coordinates of subsequent homogeneous images under a coordinate system, wherein omega is a scale factor; in picture I ij In a coordinate system of (1) comparing adjacent images I ij And I i(j+1) The boundary position of the two can be determined, namely the boundary of the overlapped area of the two is determined
Figure BDA00029508139500001011
Step S53, using the primary-imaging projection model, the overlap-region boundary E_img of the adjacent image-space sub-region images is mapped to the overlap-region boundary E_obj of the adjacent object-space sub-region images, i.e. E_obj = F^(-1)(E_img); E_obj provides the coarse registration constraint for the adjacent object-space sub-region images.
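A possible way to pre-locate the overlap region from the approximate transform is sketched below with OpenCV; the axis-aligned bound returned here is a simplification of the exact boundary E_img, and the function name overlap_boundary is illustrative:

```python
import cv2
import numpy as np

def overlap_boundary(H, size_j1, size_j):
    """Locate the overlap of image I_i(j+1) inside image I_ij from the
    approximate 3x3 transform H between the two image-space sub-region images.
    The boundary of I_i(j+1) is warped into the I_ij frame and intersected
    with the I_ij image rectangle."""
    w1, h1 = size_j1
    corners = np.float32([[0, 0], [w1, 0], [w1, h1], [0, h1]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    w0, h0 = size_j
    x_min = max(0.0, warped[:, 0].min())
    y_min = max(0.0, warped[:, 1].min())
    x_max = min(float(w0), warped[:, 0].max())
    y_max = min(float(h0), warped[:, 1].max())
    if x_min >= x_max or y_min >= y_max:
        return None                       # no overlap under this transform
    return (x_min, y_min, x_max, y_max)   # axis-aligned bound of the overlap
```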
Step S54, within the overlap region of the adjacent object-space sub-region images, no fewer than 4 image features are extracted from each of the two images with the scale-invariant feature transform (SIFT) algorithm, and an accurate matching relation between the features is established by combining a fast approximate nearest-neighbour matching algorithm with the random sample consensus (RANSAC) method, from which the projective transformation matrix M of the two images is estimated; the fine registration relation maps the homogeneous image coordinates K_i(j+1) of the object-space sub-region image in row i, column j+1 to the coordinates K̃_i(j+1) registered to the object-space sub-region image in row i, column j, i.e. K̃_i(j+1) = M·K_i(j+1) up to the scale factor ω.
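The fine registration of step S54 can be sketched with OpenCV as follows (SIFT features, FLANN matching with a ratio test, and RANSAC homography estimation); the 0.7 ratio threshold and 5-pixel reprojection threshold are illustrative values, not from the patent:

```python
import cv2
import numpy as np

def fine_registration(img_a, img_b, roi_a=None, roi_b=None):
    """Estimate the projective transform M that registers object-space image
    img_b onto its neighbour img_a, restricted to the pre-located overlap
    regions given as roi = (x, y, w, h)."""
    def crop(img, roi):
        if roi is None:
            return img, (0, 0)
        x, y, w, h = roi
        return img[y:y + h, x:x + w], (x, y)

    a, off_a = crop(img_a, roi_a)
    b, off_b = crop(img_b, roi_b)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(a, None)
    kp_b, des_b = sift.detectAndCompute(b, None)
    flann = cv2.FlannBasedMatcher({'algorithm': 1, 'trees': 5}, {'checks': 50})
    matches = flann.knnMatch(des_b, des_a, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]
    if len(good) < 4:
        raise RuntimeError("not enough feature matches for a homography")
    src = np.float32([np.add(kp_b[m.queryIdx].pt, off_b) for m in good]).reshape(-1, 1, 2)
    dst = np.float32([np.add(kp_a[m.trainIdx].pt, off_a) for m in good]).reshape(-1, 1, 2)
    M, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # RANSAC rejects outliers
    return M
```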
Step S6, generating the large-field-of-view high-resolution image
For the accurately registered object-space sub-region image sequence, the intensity information of adjacent sub-region images within the overlap region is processed with a linear fusion strategy: the distances from an image point to the centres of the two images are taken as weights, and the intensity value of the fused image at that point is calculated as D̃(x, y) = ω_ij·D_ij(x, y) + ω_i(j+1)·D_i(j+1)(x, y), where (x, y) are the image coordinates of any image point in the overlap region, D_ij and D_i(j+1) are the intensity images of the two adjacent object-space sub-regions, D̃ is the fused image, and the weights ω_ij and ω_i(j+1) vary with the Euclidean distance from the image point to the two image centres, each taking values in [0, 1].
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations can be devised by those skilled in the art in light of the above teachings. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. An image space scanning imaging method based on an achromatic cascade prism is characterized in that an imaging system comprises an optical lens group, an achromatic cascade prism device and a camera which are sequentially arranged, wherein the optical lens group is used for expanding an imaging field range and projecting scattered light from a wide-range target scene onto a virtual primary image surface; the achromatic cascade prism device is used for changing the direction of an imaging visual axis of the camera so as to capture the imaging light of the primary image surface sequentially and regionally; the camera is used for recording image information under different imaging visual angles and generating a high-resolution regional image sequence; the image space scanning imaging method comprises the following steps:
s1, constructing a parameter matching and model system: combining the field angle and the resolution of the imaging system, determining the optical parameters and the structural parameters of the camera, the achromatic cascade prism and the optical lens group, and constructing an image space scanning imaging model system and a working coordinate system thereof according to the relative pose relationship of the camera, the achromatic cascade prism and the optical lens group;
s2, establishing a primary imaging projection model: according to the structural parameters and the arrangement parameters of the optical lens group, describing the process of multiple refractions of the optical lens group to light rays by using geometrical optics, and establishing an imaging projection model and a space mapping relation of the light rays which are incident to the lens group from an object and then emergent to a primary image surface;
s3, achromatic cascade prism scanning motion planning: determining a subregion division strategy of image space scanning secondary imaging by combining the coverage range of the primary image surface and the transient field range of the camera, and calculating a visual axis pointing angle required by the camera for imaging each subregion, thereby designing a corner change rule of the cascade prism in the visual axis adjusting process;
s4, image area image acquisition correction: when the achromatic cascade prism rotates to the appointed corner positions respectively, triggering the camera to perform secondary imaging on the image space subregion under the pointing direction of the visual axis of the camera, and correcting the image space subregion image into an object space subregion image by combining a reverse ray tracing model and a primary imaging projection model;
s5, registering the object region image sequence: the method comprises the steps that the change relation of the direction of adjacent imaging visual axes is utilized, the overlapped area of two object space sub-area images is positioned in advance, a certain number of characteristic point pairs are extracted and matched from the overlapped area, the perspective transformation matrix of the adjacent images is estimated, and the rough and fine two-stage registration relation of an image sequence is established;
s6, generating a large-view-field high-resolution image: and based on the object space subregion image sequence after accurate registration, processing the intensity information of the adjacent subregion images in the overlapped region by using a linear fusion strategy, and finally splicing to obtain a large-field-of-view high-resolution image formed by all the subregion images.
2. The method as claimed in claim 1, wherein in step S1, a working coordinate system O-XYZ of the image scanning and imaging model system is established according to right-hand rule, an origin O is fixed at the optical center of the camera, a Z axis coincides with the optical axis of the camera, an X axis and a Y axis are both orthogonal to the Z axis, and the X axis and the Y axis respectively correspond to the row scanning direction and the column scanning direction of the image sensor in the camera.
3. The image space scanning imaging method based on the achromatic cascade prism as claimed in claim 1, wherein in step S2 the vector refraction law is used to describe the propagation of object-space light through each element of the optical lens group in turn, giving the projection model F from object space to the primary image plane:
s_img^(1) = F(s_obj) = ((...((s_obj ∗ n_1) ∗ n_2) ∗ ...) ∗ n_2k)
where the symbol ∗ denotes refraction of the projection ray propagating along the left-hand vector at a surface whose normal is the right-hand vector; s_obj is the object-space ray vector incident on the optical lens group and s_img^(1) is the ray vector projected onto the primary image plane after the successive refractions; n_1, n_2, ..., n_2k are the normal vectors of the lens surfaces traversed in sequence by the primary-imaging light; and k is the number of lens elements in the optical lens group.
4. The image space scanning imaging method based on the achromatic cascade prism as claimed in claim 1, wherein said step S3 specifically comprises:
S31, calculating the horizontal and vertical angles covered by the primary image plane from the structural and optical parameters of the optical lens group, comparing them with the horizontal and vertical angles of the camera's transient field of view, and dividing the sub-regions of image-space scanning secondary imaging into an n_v × n_h array, where n_v and n_h are the numbers of rows and columns, so that the system can collect all the information on the primary image plane through n_v × n_h sub-region scanning imaging and an overlap region of a certain size exists between all adjacent sub-regions;
S32, according to the sub-region division of image-space scanning secondary imaging, estimating the imaging boresight orientation corresponding to the centre of each sub-region, described by a pitch angle Φ and an azimuth angle Θ, where i and j are the row and column numbers of the sub-region, atan2 is the four-quadrant arctangent with value range (−π, π], and λ_v and λ_h are the coincidence coefficients of adjacent sub-regions in the vertical and horizontal directions;
S33, for the pitch angle and azimuth angle of each image-space scanning sub-region centre, solving the corresponding rotation angles of the achromatic cascade prism with a two-step method so that the camera imaging boresight points to the sub-region centre; the analytic solution gives the rotation angles θ_1 and θ_2 of the two prisms in terms of their rotation-angle difference θ_d and intermediate variables b_1 and c_1, which depend on the wedge angle α and the equivalent refractive index n of the achromatic cascade prism;
S34, given a series of corner data of the achromatic cascade prism, designing the rotational motion rule of the achromatic cascade prism on the principle of the shortest prism motion time, thereby determining the sequence of corners that the cascade prism reaches in succession.
5. The image space scanning imaging method based on the achromatic cascade prism as claimed in claim 1, wherein said step S4 specifically comprises:
S41, each time the achromatic cascade prism rotates to an expected group of corner positions, triggering the camera through software to capture the image information of the corresponding image-space sub-region under the current imaging boresight pointing;
S42, determining the secondary-imaging ray vector s_img^(2) from the actually acquired image-space sub-region image by reverse ray tracing; combined with the ray vector s_pr^out emerging from the achromatic cascade prism, the corresponding incident ray s_pr^in is obtained by refracting the reverse-traced ray in turn at the prism refracting surfaces, whose normal vectors are taken in the order in which the reverse-traced ray passes them starting from the camera imaging plane;
S43, since in the actual imaging process the light enters the achromatic cascade prism directly after reaching the primary image plane, determining the primary-imaging projection ray from the ray incident on the achromatic cascade prism, i.e. s_img^(1) = s_pr^in, and then calculating the corresponding object-space projection ray vector s_obj with the inverse F^(-1) of the primary-imaging projection model, i.e. s_obj = F^(-1)(s_img^(1));
S44, obtaining all secondary-imaging ray vectors from the image-space sub-region image captured by the camera and substituting them into steps S42 and S43 to determine the corresponding object-space projection rays, so that the distorted image-space sub-region image is restored to an undistorted object-space sub-region image.
6. The image space scanning imaging method based on the achromatic cascade prism as claimed in claim 1, wherein said step S5 specifically comprises:
S51, combining the deflection characteristic of the achromatic cascade prism on the camera imaging boresight, establishing the rotation transformation that recovers the primary-imaging ray vector s_img^(1) from the secondary-imaging ray vector s_img^(2), expressed as:
R(Φ,Θ) = A(Θ) + [I − A(Θ)]·cosΦ + B(Θ)·sinΦ
where I is the third-order identity matrix and the matrices A and B are both functions of the azimuth angle Θ;
S52, determining the relative position of one image within the other from the approximate transformation matrix between adjacent image-space sub-region images, and hence the boundary of their overlap region; taking the sub-region image I_ij in row i, column j and the adjacent sub-region image I_i(j+1) in row i, column j+1 as an example, the boundary of image I_i(j+1) in the coordinate system of image I_ij is obtained by applying the approximate transform to the homogeneous image coordinates p_i(j+1) of each boundary point, giving the transformed homogeneous coordinates p̃_i(j+1) up to a scale factor ω; comparing the boundary positions of the adjacent images I_ij and I_i(j+1) in the coordinate system of I_ij then determines the boundary E_img of their overlap region;
S53, using the primary-imaging projection model to map the overlap-region boundary E_img of the adjacent image-space sub-region images to the overlap-region boundary E_obj of the adjacent object-space sub-region images, i.e. E_obj = F^(-1)(E_img), where E_obj provides the coarse registration constraint for the adjacent object-space sub-region images;
S54, extracting a certain number of image features in the overlap region of the adjacent object-space sub-region images and establishing a feature matching relation between the two images, thereby estimating their projective transformation matrix M; the fine registration relation maps the homogeneous image coordinates K_i(j+1) of the object-space sub-region image in row i, column j+1 to the coordinates K̃_i(j+1) registered to the object-space sub-region image in row i, column j, i.e. K̃_i(j+1) = M·K_i(j+1) up to the scale factor ω.
7. The image space scanning imaging method based on the achromatic cascade prism as claimed in claim 1, wherein in said step S6, for any two adjacent object-space sub-region images, the intensity information in the overlap region is processed with a linear fusion strategy, i.e. the distances from a point to the centres of the two images are taken as the weights of the fused intensity: for an image point with coordinates (x, y) in the overlap region, the fused image D̃ takes the value ω_ij·D_ij(x, y) + ω_i(j+1)·D_i(j+1)(x, y), where D_ij and D_i(j+1) are the two adjacent object-space sub-region images and the weights ω_ij and ω_i(j+1) vary with the Euclidean distance from the image point to the two image centres, each taking values in [0, 1].
8. The image space scanning imaging method based on the achromatic cascade prism as claimed in claim 1, wherein the achromatic cascade prism device comprises a pair of achromatic prisms and respective rotary driving mechanisms, and the two achromatic prisms maintain the optical axes aligned with each other and adopt an arrangement form of plane-to-plane or wedge-to-wedge.
9. The image space scanning imaging method based on the achromatic cascade prism as set forth in claim 8, wherein the rotary driving mechanism adopts a torque motor direct drive or gear drive, synchronous belt drive or worm and gear drive mode.
10. The image space scanning imaging method based on the achromatic cascade prism as claimed in claim 8, wherein the camera, the achromatic cascade prism and the optical lens group all satisfy a coaxial arrangement relationship, and an imaging target surface of the camera and the plane sides of the two achromatic prisms are parallel to each other.
CN202110206239.4A 2021-02-24 2021-02-24 Image space scanning imaging method based on achromatic cascade prism Active CN113156641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110206239.4A CN113156641B (en) 2021-02-24 2021-02-24 Image space scanning imaging method based on achromatic cascade prism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110206239.4A CN113156641B (en) 2021-02-24 2021-02-24 Image space scanning imaging method based on achromatic cascade prism

Publications (2)

Publication Number Publication Date
CN113156641A CN113156641A (en) 2021-07-23
CN113156641B true CN113156641B (en) 2022-09-16

Family

ID=76883576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110206239.4A Active CN113156641B (en) 2021-02-24 2021-02-24 Image space scanning imaging method based on achromatic cascade prism

Country Status (1)

Country Link
CN (1) CN113156641B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110243283B (en) * 2019-05-30 2021-03-26 同济大学 Visual measurement system and method with variable visual axis
CN112330794B (en) * 2020-10-09 2022-06-14 同济大学 Single-camera image acquisition system based on rotary bipartite prism and three-dimensional reconstruction method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100839871B1 (en) * 2006-12-28 2008-06-19 한국과학기술원 Active vision system for variable view imaging
CN107272015A (en) * 2017-07-05 2017-10-20 同济大学 High-precision vision guides laser tracking
CN109597275A (en) * 2018-11-29 2019-04-09 同济大学 A kind of axial Distributed Three-dimensional imaging method based on double-wedge prism
CN109819235A (en) * 2018-12-18 2019-05-28 同济大学 A kind of axial distributed awareness integrated imaging method having following function
CN111311688A (en) * 2020-01-22 2020-06-19 同济大学 Calibration method based on dual-sensor variable visual axis monitoring device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Anhu Li, "Calibration method of Risley-prism imaging system", Optics Communications, vol. 459, 15 March 2020 (cited in its entirety) *
Anhu Li, "Risley-prism-based visual tracing method for robot guidance", Journal of the Optical Society of America A, vol. 37, no. 4, March 2020 (cited in its entirety) *

Also Published As

Publication number Publication date
CN113156641A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
CN110243283B (en) Visual measurement system and method with variable visual axis
US6075235A (en) High-resolution polarization-sensitive imaging sensors
US20040066454A1 (en) Device and method of measuring data for calibration, program for measuring data for calibration, program recording medium readable with computer, and image data processing device
CN102445323B (en) Image processing-based heliostat fault diagnosis method and system
Krishnan et al. Range estimation from focus using a non-frontal imaging camera
CN101424551A (en) Active vision non-contact type servomechanism parameter measurement method and apparatus thereof
US20220172380A1 (en) Three-dimensional light field technology-based optical unmanned aerial vehicle monitoring system
CN112432705B (en) Multispectral imaging system and method based on dynamic visual axis adjustment principle
Wang et al. Super-resolution imaging and field of view extension using a single camera with Risley prisms
CN104539829A (en) Optical-mechanical structure based on infrared area array detector scanning imaging
CN102098442B (en) Method and system for calibrating non-overlap ratio of optical axis and visual axis of zoom camera
CN112802121A (en) Calibration method of monitoring camera
CN113156641B (en) Image space scanning imaging method based on achromatic cascade prism
Li et al. A cooperative camera surveillance method based on the principle of coarse-fine coupling boresight adjustment
Li et al. Calibration method of Risley-prism imaging system
CN204964030U (en) Opto mechanical structure based on infrared area array detector scanning imagery
US7949241B2 (en) Anamorphic focal array
CN102968801A (en) Moving target tracking method based on photoelectric mixing combination transformation correlation
CN113759543B (en) Method for realizing flexible foveal imaging based on rotating double-prism imaging system
CN114612574A (en) Vehicle-mounted panoramic aerial view camera panoramic aerial view calibration and conversion splicing method based on unmanned aerial vehicle
Firoozfam et al. A multi-camera conical imaging system for robust 3D motion estimation, positioning and mapping from UAVs
CN111311688A (en) Calibration method based on dual-sensor variable visual axis monitoring device
AU2018384334A1 (en) Method for detecting poor mounting state of module, and array
Spacek Omnidirectional catadioptric vision sensor with conical mirrors
Zhang et al. High Precision Monocular Plane Measurement for Large Field of View

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant