CN113556476B - Active non-vision field array imaging method based on multi-point illumination

Active non-vision field array imaging method based on multi-point illumination

Info

Publication number
CN113556476B
CN113556476B, CN202110814980.9A, CN202110814980A
Authority
CN
China
Prior art keywords
laser
illumination
imaging
point
array
Prior art date
Legal status
Active
Application number
CN202110814980.9A
Other languages
Chinese (zh)
Other versions
CN113556476A (en)
Inventor
靳辰飞
田小芮
马俊锋
杨杰
唐勐
乔凯
史晓洁
张思琦
刘丽萍
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN202110814980.9A
Publication of CN113556476A
Application granted
Publication of CN113556476B
Legal status: Active (current)
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/74Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00Special procedures for taking photographs; Apparatus therefor
    • G03B15/02Illuminating scene
    • G03B15/06Special arrangements of screening, diffusing, or reflecting devices, e.g. in studio
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/254Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

An active non-visual field array imaging method based on multi-point illumination relates to the field of optical imaging. The invention solves the problems that an existing single-point active non-visual field array imaging system can only image hidden targets at specific positions and angles and adapts poorly to the imaging area. The method is realized by an active non-visual field array imaging system based on multi-point illumination. By improving the active illumination mode to introduce multi-point illumination, the imaging quality of the active non-visual field imaging system is improved, its adaptability to the imaging area is enhanced, and imaging is no longer limited to hidden targets at specific positions and angles. The method mainly comprises first reconstructing each of the n groups of laser data collected by the array single-photon camera to obtain n initial reconstructed images, and then fusing these initial reconstructed images by an image fusion method to obtain the final image reconstruction result. The method is mainly used for imaging hidden targets.

Description

Active non-vision field array imaging method based on multi-point illumination
Technical Field
The present invention relates to the field of optical imaging.
Background
In complex application scenarios such as urban combat, disaster relief, security and counter-terrorism, and autonomous driving, objects such as walls, street corners and obstacles often block light completely. Under these conditions the field of view of a traditional imaging system is almost entirely restricted: targets around a corner cannot be seen, and imaging them directly is essentially impossible.
Therefore, with the rapid development of fields such as autonomous driving in recent years, non-visual field (non-line-of-sight) imaging techniques aimed at exactly this situation have emerged, built on computational optical imaging methods. Non-visual field imaging reconstructs an image of a hidden target by capturing light that has been scattered or reflected by surfaces surrounding the target and then applying a computational imaging algorithm. A schematic diagram of a prior-art active non-visual field imaging system is shown in fig. 1; its main components are a narrow pulse laser, an intermediate surface, the target and an array single-photon camera. The defining characteristic of this single-point active non-visual field imaging system is that there is only one illumination point, and it is fixed. The feasibility of the approach has been verified experimentally, but it has a significant drawback: the system places high demands on the spatial position and three-dimensional orientation of the target and on the material and structure of the scene, it can only image hidden targets at specific positions and angles, and it adapts poorly to the imaging area. In practical applications a non-visual field imaging system must adapt well to a variety of imaging scenes, so the above problems need to be solved.
Disclosure of Invention
The invention aims to solve the problems that the existing single-point active non-visual field imaging system can only image hidden targets at specific positions and angles and adapts poorly to the imaging area; to this end, the invention provides an active non-visual field array imaging method based on multi-point illumination.
The active non-visual field array imaging method based on multi-point illumination is realized by an active non-visual field array imaging system based on multi-point illumination. The system comprises a narrow pulse laser, an intermediate surface, an array single-photon camera and a target object, and the narrow pulse laser and the array single-photon camera work synchronously. The method comprises the following steps:
S1, placing the narrow pulse laser, the array single-photon camera and the target object on the same side of the intermediate surface, wherein the target object is not in the field of view of the array single-photon camera, and the intermediate surface is divided into an imaging area and a non-imaging area;
S2, the narrow pulse laser emits laser n times, in a time-sharing manner, towards the non-imaging area of the intermediate surface; the laser emitted each time forms an illumination point on the non-imaging area of the intermediate surface, so that n illumination points are formed in total; at each illumination point the narrow pulse laser emits m pulses, wherein m is an integer greater than or equal to 1; the positions of the n illumination points are all different; n is an integer greater than or equal to 2; the n illumination points correspond one-to-one to the n laser emissions of the narrow pulse laser;
each illumination point is used for illuminating the target object;
the propagation path of the laser emitted each time by the narrow pulse laser is as follows: the narrow pulse laser emits laser towards the non-imaging area of the intermediate surface; the laser is scattered a first time by the non-imaging area of the intermediate surface and is incident on the target object, is scattered a second time by the target object and is incident on the imaging area of the intermediate surface, and is scattered a third time by the imaging area of the intermediate surface and is incident on the array single-photon camera;
S3, for each laser emission of the narrow pulse laser, the array single-photon camera acquires the space-time information of the corresponding laser after the third scattering, thereby obtaining P_i; i is an integer, i = 1, 2, …, n;
wherein P_i is a matrix containing the time and space information of the laser after the third scattering corresponding to the i-th laser emission of the narrow pulse laser, as collected by the array single-photon camera;
S4, obtaining the light field transmission matrix H_i of the non-visual field imaging system corresponding to the i-th laser emission of the narrow pulse laser;
S5, carrying out image reconstruction using P_i and H_i to obtain the initial reconstructed image ρ_i corresponding to the i-th laser emission;
wherein H_i is the light field transmission matrix of the imaging system corresponding to the i-th laser emission of the narrow pulse laser;
S6, carrying out image fusion on the n initial reconstructed images ρ_1 to ρ_n, thereby obtaining the fused target reconstructed image I.
Preferably, in S5, the image reconstruction using P_i and H_i to obtain the initial reconstructed image ρ_i corresponding to the i-th laser emission is implemented as:
ρ_i = P_i · H_i^(-1)   (formula one).
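By way of illustration only, a minimal numerical sketch of formula one is given below in Python; the array shapes, the forward model P_i = ρ_i · H_i implied by formula one, and the use of a pseudo-inverse in place of a direct matrix inverse (to tolerate an ill-conditioned H_i) are assumptions of this sketch rather than requirements of the method.

import numpy as np

def reconstruct_initial_image(P_i: np.ndarray, H_i: np.ndarray) -> np.ndarray:
    # Formula one: rho_i = P_i * H_i^(-1).
    # P_i : space-time data acquired by the array single-photon camera for the
    #       i-th laser emission, flattened here to a 2-D matrix.
    # H_i : light field transmission matrix for the i-th laser emission, with a
    #       matching number of columns.
    # np.linalg.pinv is used instead of np.linalg.inv because in practice H_i
    # may be non-square or ill-conditioned (an assumption of this sketch).
    return P_i @ np.linalg.pinv(H_i)

# Hypothetical usage with random placeholder data:
rng = np.random.default_rng(0)
H_1 = rng.random((64, 64))
rho_true = rng.random((32, 64))
P_1 = rho_true @ H_1                      # forward model assumed by this sketch
rho_1 = reconstruct_initial_image(P_1, H_1)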
Preferably, in S4, obtaining the light field transmission matrix H_i of the non-visual field imaging system corresponding to the i-th laser emission of the narrow pulse laser comprises the following steps:
S41, constructing the point spread function H_i(L_i, S, O) of the non-visual field imaging system corresponding to the i-th laser emission of the narrow pulse laser;
H_i(L_i, S, O) = K · P_PL(L_i) · ρ(L_i) · G(R_i1, R_i2) · G(R_i2, R_i3) · ρ(S) · G(R_i3, R_i4)   (formula two);
wherein:
K is the responsivity of the array single-photon camera;
ρ(L_i) is the scattering coefficient of the non-imaging area at the i-th illumination point L_i on the intermediate surface;
P_PL(L_i) is the light intensity of the i-th laser emission of the narrow pulse laser, corresponding to the i-th illumination point L_i;
G(R_i1, R_i2) is the geometric scattering factor between R_i1 and R_i2;
R_i1 is the distance vector between the narrow pulse laser and the illumination point L_i on the non-imaging area of the intermediate surface;
R_i2 is the distance vector of the laser starting from the i-th illumination point L_i and incident on the target object;
G(R_i2, R_i3) is the geometric scattering factor between R_i2 and R_i3;
R_i3 is the distance vector of the laser emitted from the i-th illumination point L_i, scattered by the target object, and incident on the imaging area of the intermediate surface;
G(R_i3, R_i4) is the geometric scattering factor between R_i3 and R_i4;
R_i4 is the distance vector of the laser emitted from the i-th illumination point L_i, scattered by the imaging area of the intermediate surface, and incident on the array single-photon camera;
ρ(S) is the scattering coefficient of the imaging area of the intermediate surface; O is the plane of the target object;
S is the imaging area of the intermediate surface;
S42, obtaining H_i from the point spread function H_i(L_i, S, O) of the non-visual field imaging system.
Preferably, in S41, G(R_i1, R_i2) is implemented as:
G(R_i1, R_i2) = cos∠(R_i2, n_w) / |R_i2|²   (formula three);
wherein n_w is the normal of the intermediate surface;
∠(R_i2, n_w) represents the included angle between the vector R_i2 and the normal n_w of the intermediate surface.
Preferably, in S41, G(R_i2, R_i3) is implemented as:
G(R_i2, R_i3) = cos∠(R_i2, n_o) · cos∠(R_i3, n_o) / |R_i3|²   (formula four);
wherein:
n_o is the normal of the surface of the target object;
∠(R_i2, n_o) is the included angle between the vector R_i2 and the normal n_o of the surface of the target object;
∠(R_i3, n_o) is the included angle between the vector R_i3 and the normal n_o of the surface of the target object.
Preferably, in S41, G(R_i3, R_i4) is implemented as:
G(R_i3, R_i4) = cos∠(R_i3, n_w) · cos∠(R_i4, n_w) / |R_i4|²   (formula five);
wherein n_w is the normal of the intermediate surface;
∠(R_i4, n_w) is the included angle between the vector R_i4 and the normal n_w of the intermediate surface;
∠(R_i3, n_w) is the included angle between the vector R_i3 and the normal n_w of the intermediate surface.
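For illustration, the sketch below evaluates the geometric scattering factors and the point spread function of formula two for a single illumination point, target point and detection point. The Lambertian cosine and inverse-square forms follow the expressions written above for formulas three to five, which are themselves a reconstruction; the coordinate conventions and helper names are assumptions of this sketch.

import numpy as np

def cos_angle(v: np.ndarray, normal: np.ndarray) -> float:
    # Cosine of the included angle between a vector and a surface normal.
    return float(np.dot(v, normal) / (np.linalg.norm(v) * np.linalg.norm(normal)))

def G_3(R_i2, n_w):
    # Formula three: scattering at the illumination point L_i on the intermediate
    # surface (R_i1 does not appear in the reconstructed expression above).
    return cos_angle(R_i2, n_w) / float(np.dot(R_i2, R_i2))

def G_4(R_i2, R_i3, n_o):
    # Formula four: scattering at the surface of the target object.
    return cos_angle(R_i2, n_o) * cos_angle(R_i3, n_o) / float(np.dot(R_i3, R_i3))

def G_5(R_i3, R_i4, n_w):
    # Formula five: scattering at the imaging area of the intermediate surface.
    return cos_angle(R_i3, n_w) * cos_angle(R_i4, n_w) / float(np.dot(R_i4, R_i4))

def psf_value(K, P_PL, rho_L, rho_S, R_i2, R_i3, R_i4, n_w, n_o):
    # Formula two: K * P_PL(L_i) * rho(L_i) * G3 * G4 * rho(S) * G5
    return (K * P_PL * rho_L * G_3(R_i2, n_w) * G_4(R_i2, R_i3, n_o)
            * rho_S * G_5(R_i3, R_i4, n_w))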
Preferably, in S6, the image fusion of the n initial reconstructed images ρ_1 to ρ_n to obtain the fused target reconstructed image I is implemented as:
I = Σ_{i=1…n} w_i(ρ_1, ρ_2, …, ρ_n) · ρ_i   (formula six);
wherein w_i(ρ_1, ρ_2, …, ρ_n) is the weight function associated with ρ_1, ρ_2, …, ρ_n.
Preferably, w_i(ρ_1, ρ_2, …, ρ_n) is implemented by a weighted average function, and
w_i(ρ_1, ρ_2, …, ρ_n) = 1/n   (formula seven).
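Illustratively, a small sketch of the fusion step of formulas six and seven follows; treating the weight function as the constant 1/n (a simple average), as reconstructed above, and stacking the initial reconstructed images into one array are assumptions of this sketch.

import numpy as np

def fuse_reconstructions(rho_list, weights=None):
    # Formula six: I = sum_i w_i(rho_1, ..., rho_n) * rho_i.
    # rho_list : list of n initial reconstructed images of identical shape.
    # weights  : optional array of n weights; defaults to the equal weights
    #            1/n of formula seven (weighted average).
    rho = np.stack(rho_list, axis=0)
    n = rho.shape[0]
    if weights is None:
        weights = np.full(n, 1.0 / n)
    return np.tensordot(weights, rho, axes=1)

# Hypothetical usage with n = 4 placeholder reconstructions:
rng = np.random.default_rng(2)
images = [rng.random((32, 64)) for _ in range(4)]
I = fuse_reconstructions(images)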
Preferably, n is in the range 3 ≤ n ≤ 9.
Preferably, the array single-photon camera is implemented by a DTOF (direct time-of-flight) camera.
The invention has the following beneficial effects: by improving the active illumination mode to introduce multi-point illumination, the invention improves the imaging quality of the active non-visual field imaging system, enhances its adaptability to the imaging area, and is no longer limited to imaging hidden targets at specific positions and angles; the applicable scenes are therefore broader, and an effective solution is provided for image reconstruction of hidden targets at non-specific positions and angles.
The present invention is an improvement over existing single-illumination-point active non-visual field imaging systems. The multi-point illumination mode provided by the invention uses a small number of illumination points, far fewer than the number of illumination points of a prior-art confocal scanning system, and no fast scanning device is needed. In other words, the multi-point illumination active non-visual field array imaging system of the invention omits the fast scanning step: it only needs to select several illumination points, and the laser emitted from each illumination point, after multiple scattering, reaches the array single-photon camera where the laser signal is acquired. The whole imaging method requires a small amount of data, the process is simple, and it is convenient to implement.
In specific applications of the active non-visual field array imaging method based on multi-point illumination, even if the position and angle of a target are not in the optimal area of the imaging system, an accurate target image can still be reconstructed by selecting a plurality of illumination points to perform illumination compensation on the target. By changing the positions of the illumination points, image reconstruction can be carried out for targets at many positions and angles, which greatly improves the imaging quality of the active non-visual field imaging system and its adaptability to the imaging area.
The method has practical application value: in practical applications, target images of a hidden target at various positions and different angles can be reconstructed by using different illumination point positions.
Drawings
FIG. 1 is a schematic illustration of a prior-art active non-visual field imaging system described in the background art;
FIG. 2 is a schematic diagram of the principle of formation of an illumination spot;
FIG. 3 is a schematic diagram of the multi-point illumination active non-visual field array imaging system according to the present invention, with the illumination point locations selected;
FIG. 4 is a schematic view of the propagation path of the laser emitted each time by the narrow pulse laser 1; in the figure, the laser emitted each time by the narrow pulse laser 1 is divided into 4 propagation stages, and S_j represents the j-th detection point in the imaging area of the intermediate surface 2, wherein j is an integer.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
This embodiment is described with specific reference to fig. 2 and fig. 3. The active non-visual field array imaging method based on multi-point illumination according to this embodiment is realized by an active non-visual field array imaging system based on multi-point illumination, which includes a narrow pulse laser 1, an intermediate surface 2, an array single-photon camera 3 and a target object 4, with the narrow pulse laser 1 and the array single-photon camera 3 working synchronously. The method comprises the following steps:
S1, placing the narrow pulse laser 1, the array single-photon camera 3 and the target object 4 on the same side of the intermediate surface 2, wherein the target object 4 is not in the field of view of the array single-photon camera 3, and the intermediate surface 2 is divided into an imaging area and a non-imaging area;
S2, the narrow pulse laser 1 emits laser n times, in a time-sharing manner, towards the non-imaging area of the intermediate surface 2; the laser emitted each time forms an illumination point on the non-imaging area of the intermediate surface 2, so that n illumination points are formed in total; at each illumination point the narrow pulse laser 1 emits m pulses, wherein m is an integer greater than or equal to 1; the positions of the n illumination points are all different; n is an integer greater than or equal to 2; the n illumination points correspond one-to-one to the n laser emissions of the narrow pulse laser 1;
each illumination point is used for illuminating the target object 4;
the propagation path of the laser emitted each time by the narrow pulse laser 1 is as follows: the narrow pulse laser 1 emits laser towards the non-imaging area of the intermediate surface 2; the laser is scattered a first time by the non-imaging area of the intermediate surface 2 and is incident on the target object 4, is scattered a second time by the target object 4 and is incident on the imaging area of the intermediate surface 2, and is scattered a third time by the imaging area of the intermediate surface 2 and is incident on the array single-photon camera 3;
S3, for each laser emission of the narrow pulse laser 1, the array single-photon camera 3 acquires the space-time information of the corresponding laser after the third scattering, thereby obtaining P_i; i is an integer, i = 1, 2, …, n;
wherein P_i is a matrix containing the time and space information of the laser after the third scattering corresponding to the i-th laser emission of the narrow pulse laser 1, as collected by the array single-photon camera 3;
S4, obtaining the light field transmission matrix H_i of the non-visual field imaging system corresponding to the i-th laser emission of the narrow pulse laser 1;
S5, carrying out image reconstruction using P_i and H_i to obtain the initial reconstructed image ρ_i corresponding to the i-th laser emission;
wherein H_i is the light field transmission matrix of the imaging system corresponding to the i-th laser emission of the narrow pulse laser 1;
S6, carrying out image fusion on the n initial reconstructed images ρ_1 to ρ_n, thereby obtaining the fused target reconstructed image I.
In this embodiment, multi-point illumination is introduced by improving the active illumination mode, so that the imaging quality of the active non-visual field imaging system is improved, the adaptability to the imaging area is enhanced, imaging is no longer limited to hidden targets at specific positions and angles, the applicable scenes are broader, and an effective solution is provided for image reconstruction of hidden targets at non-specific positions and angles.
In the multi-point illumination active non-visual field imaging system, multi-point illumination means that a plurality of illumination points are formed on the intermediate surface 2. Each illumination point corresponds to one group of laser data acquired by the array single-photon camera 3, and the target image is reconstructed by processing the several groups of laser data with a computational imaging algorithm. The amount of data required by the imaging process is small, the amount of computation is small, and the imaging process is simple, efficient and convenient to implement.
The process of target image reconstruction with multi-point illumination can be divided into two main steps: first, each of the n groups of laser data collected by the array single-photon camera 3 is reconstructed separately to obtain n initial reconstructed images, and then these initial reconstructed images are fused by an image fusion method to obtain the final image reconstruction result. Each group of data acquired by the array single-photon camera 3 is the laser signal after the third scattering, corresponding to one laser emission of the narrow pulse laser 1.
The multi-point illumination active non-visual field array imaging system is realized with a narrow pulse laser 1, an intermediate surface 2, an array single-photon camera 3 and a target object 4, so the system structure is simple. The whole transmission process, from emission of the laser by the narrow pulse laser 1 to collection by the array single-photon camera 3, is divided into 4 stages, with specific reference to fig. 4: in the first stage, the laser leaves the narrow pulse laser 1, is incident on the intermediate surface 2 and forms an illumination point there; in the second stage, the laser is scattered for the first time at the illumination point and is incident on the target object 4; in the third stage, the laser scattered from the illumination point is scattered for the second time by the target object 4 and is incident on the imaging area of the intermediate surface 2, where detection points corresponding to the pixels of the array single-photon camera 3 are formed; in the fourth stage, the laser that reached the intermediate surface 2 after the second scattering is scattered for the third time by the intermediate surface 2 and is incident on the array single-photon camera 3, and this third-scattering signal contains the single photons carrying the target information.
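By way of illustration only, the following sketch models the 4-stage path length and the resulting photon arrival time for one illumination point, one target point and one detection point; the point coordinates, the speed-of-light constant, and the assumption that the arrival time is simply the total path length divided by c are illustrative assumptions of this sketch and are not taken from the patent.

import numpy as np

C = 3.0e8  # approximate speed of light in m/s (assumption of this sketch)

def four_stage_arrival_time(laser_pos, illum_point, target_point, detect_point, camera_pos):
    # Total path length and photon arrival time for the 4 propagation stages:
    # laser -> illumination point L_i -> target object -> detection point S_j -> camera.
    stages = [
        np.linalg.norm(illum_point - laser_pos),     # stage 1: laser to L_i
        np.linalg.norm(target_point - illum_point),  # stage 2: L_i to target (first scattering)
        np.linalg.norm(detect_point - target_point), # stage 3: target to S_j (second scattering)
        np.linalg.norm(camera_pos - detect_point),   # stage 4: S_j to camera (third scattering)
    ]
    path_length = float(sum(stages))
    return path_length, path_length / C

# Hypothetical geometry (metres):
laser = np.array([0.0, 0.0, 0.0])
L_1 = np.array([0.0, 1.0, 2.0])      # illumination point on the non-imaging area
target = np.array([1.5, 1.0, 0.5])   # point on the hidden target object
S_1 = np.array([0.3, 1.2, 2.0])      # detection point on the imaging area
camera = np.array([0.1, 0.0, 0.0])
length, t = four_stage_arrival_time(laser, L_1, target, S_1, camera)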
In specific applications of the active non-visual field array imaging method based on multi-point illumination, even if the position and angle of a target are not in the optimal area of the imaging system, an accurate target image can still be reconstructed by selecting a plurality of illumination points to perform illumination compensation on the target. By changing the positions of the illumination points, image reconstruction can be carried out for targets at many positions and angles, which greatly improves the imaging quality of the active non-visual field imaging system and its adaptability to the imaging area.
The method has practical application value: in practical applications, target images of a hidden target at various positions and different angles can be reconstructed by using different illumination point positions.
Further, in S5, the image reconstruction using P_i and H_i to obtain the initial reconstructed image ρ_i corresponding to the i-th laser emission is implemented as:
ρ_i = P_i · H_i^(-1)   (formula one).
In this preferred embodiment, the initial reconstructed image ρ_i corresponding to the i-th laser emission is obtained from P_i and the inverse of H_i, and the implementation process is simple.
Further, with specific reference to fig. 4, in S4, obtaining the light field transmission matrix H_i of the non-visual field imaging system corresponding to the i-th laser emission of the narrow pulse laser 1 comprises the following steps:
S41, constructing the point spread function H_i(L_i, S, O) of the non-visual field imaging system corresponding to the i-th laser emission of the narrow pulse laser 1;
H_i(L_i, S, O) = K · P_PL(L_i) · ρ(L_i) · G(R_i1, R_i2) · G(R_i2, R_i3) · ρ(S) · G(R_i3, R_i4)   (formula two);
wherein:
K is the responsivity of the array single-photon camera 3;
ρ(L_i) is the scattering coefficient of the non-imaging area at the i-th illumination point L_i on the intermediate surface 2;
P_PL(L_i) is the light intensity of the i-th laser emission of the narrow pulse laser 1, corresponding to the i-th illumination point L_i;
G(R_i1, R_i2) is the geometric scattering factor between R_i1 and R_i2;
R_i1 is the distance vector between the narrow pulse laser 1 and the illumination point L_i on the non-imaging area of the intermediate surface 2;
R_i2 is the distance vector of the laser starting from the i-th illumination point L_i and incident on the target object 4;
G(R_i2, R_i3) is the geometric scattering factor between R_i2 and R_i3;
R_i3 is the distance vector of the laser emitted from the i-th illumination point L_i, scattered by the target object 4, and incident on the imaging area of the intermediate surface 2;
G(R_i3, R_i4) is the geometric scattering factor between R_i3 and R_i4;
R_i4 is the distance vector of the laser emitted from the i-th illumination point L_i, scattered by the imaging area of the intermediate surface 2, and incident on the array single-photon camera 3;
ρ(S) is the scattering coefficient of the imaging area of the intermediate surface 2; O is the plane of the target object 4;
S is the imaging area of the intermediate surface 2;
S42, obtaining H_i from the point spread function H_i(L_i, S, O) of the non-visual field imaging system.
In this preferred embodiment, the light field transmission matrix H_i of the non-visual field imaging system corresponding to the i-th laser emission of the narrow pulse laser 1 is given; the whole construction is based on the 4 propagation stages of the laser emitted by the narrow pulse laser 1.
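By way of illustration, the following sketch assembles a discrete H_i by evaluating the point spread function of formula two for every pair of target-plane sample point and detection point S_j on the imaging area. The discretization of the target plane O and the imaging area S into point grids, the flattening of the time dimension, and the psf_element callable (standing in for formula two) are assumptions of this sketch.

import numpy as np

def assemble_H_i(target_points, detect_points, psf_element):
    # Discrete light field transmission matrix H_i.
    # target_points : (N_O, 3) array of sample points on the target plane O
    # detect_points : (N_S, 3) array of detection points S_j on the imaging area
    # psf_element   : callable(target_point, detect_point) -> float, returning the
    #                 value of the point spread function of formula two for one
    #                 (target point, detection point) pair; the time dimension of
    #                 P_i is not modelled in this sketch.
    N_O, N_S = len(target_points), len(detect_points)
    H_i = np.zeros((N_O, N_S))
    for a, o_point in enumerate(target_points):
        for b, s_point in enumerate(detect_points):
            H_i[a, b] = psf_element(o_point, s_point)
    return H_i

# Hypothetical usage with a dummy point spread function:
grid_O = np.stack(np.meshgrid(np.linspace(1, 2, 8), np.linspace(0, 1, 8), [0.5]), -1).reshape(-1, 3)
grid_S = np.stack(np.meshgrid(np.linspace(0, 0.6, 8), np.linspace(1, 1.6, 8), [2.0]), -1).reshape(-1, 3)
H_1 = assemble_H_i(grid_O, grid_S, lambda o, s: 1.0 / (1.0 + np.linalg.norm(o - s) ** 2))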
Further, with specific reference to fig. 4, in S41, G(R_i1, R_i2) is implemented as:
G(R_i1, R_i2) = cos∠(R_i2, n_w) / |R_i2|²   (formula three);
wherein n_w is the normal of the intermediate surface;
∠(R_i2, n_w) represents the included angle between the vector R_i2 and the normal n_w of the intermediate surface.
This preferred embodiment gives a specific way of obtaining G(R_i1, R_i2); the implementation process is simple, uses the relevant information of the corresponding propagation stage, and is convenient to implement.
Further, with specific reference to fig. 4, in S41, G(R_i2, R_i3) is implemented as:
G(R_i2, R_i3) = cos∠(R_i2, n_o) · cos∠(R_i3, n_o) / |R_i3|²   (formula four);
wherein:
n_o is the normal of the surface of the target object 4;
∠(R_i2, n_o) is the included angle between the vector R_i2 and the normal n_o of the surface of the target object 4;
∠(R_i3, n_o) is the included angle between the vector R_i3 and the normal n_o of the surface of the target object 4.
This preferred embodiment gives a specific way of obtaining G(R_i2, R_i3); the implementation process is simple, uses the relevant information of the corresponding propagation stage, and is convenient to implement.
Further, with specific reference to fig. 4, in S41, G(R_i3, R_i4) is implemented as:
G(R_i3, R_i4) = cos∠(R_i3, n_w) · cos∠(R_i4, n_w) / |R_i4|²   (formula five);
wherein n_w is the normal of the intermediate surface;
∠(R_i4, n_w) is the included angle between the vector R_i4 and the normal n_w of the intermediate surface;
∠(R_i3, n_w) is the included angle between the vector R_i3 and the normal n_w of the intermediate surface.
This preferred embodiment gives a specific way of obtaining G(R_i3, R_i4); the implementation process is simple, uses the relevant information of the corresponding propagation stage, and is convenient to implement.
Further, in S6, the image fusion of the n initial reconstructed images ρ_1 to ρ_n to obtain the fused target reconstructed image I is implemented as:
I = Σ_{i=1…n} w_i(ρ_1, ρ_2, …, ρ_n) · ρ_i   (formula six);
wherein w_i(ρ_1, ρ_2, …, ρ_n) is the weight function associated with ρ_1, ρ_2, …, ρ_n.
This embodiment provides a specific implementation of the image fusion, the operation of which is simple and convenient to implement.
Further, w_i(ρ_1, ρ_2, …, ρ_n) is implemented by a weighted average function, and
w_i(ρ_1, ρ_2, …, ρ_n) = 1/n   (formula seven).
Furthermore, the value of n is in the range 3 ≤ n ≤ 9.
Furthermore, the array single-photon camera 3 is realized by a DTOF camera.
Further, the narrow pulse laser 1 is realized by a picosecond or femtosecond laser.
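Purely as an illustration of step S3 with such hardware, the sketch below bins per-pixel photon arrival timestamps from a DTOF-style array camera into the space-time matrix P_i; the timestamp format, bin width, pixel count and array shape are assumptions of this sketch and not specified by the patent.

import numpy as np

def build_P_i(timestamps_per_pixel, n_time_bins, bin_width_s):
    # Arrange per-pixel photon arrival times into a space-time matrix P_i.
    # timestamps_per_pixel : list of 1-D arrays, one array of arrival times (in
    #                        seconds) per pixel of the array single-photon camera,
    #                        accumulated over the m pulses of one illumination point.
    # Returns an (n_pixels, n_time_bins) histogram matrix.
    n_pixels = len(timestamps_per_pixel)
    P_i = np.zeros((n_pixels, n_time_bins))
    edges = np.arange(n_time_bins + 1) * bin_width_s
    for pixel, times in enumerate(timestamps_per_pixel):
        counts, _ = np.histogram(times, bins=edges)
        P_i[pixel, :] = counts
    return P_i

# Hypothetical data: 16 pixels, 100 ps bins over a 20 ns range
rng = np.random.default_rng(1)
stamps = [rng.uniform(0.0, 20e-9, size=50) for _ in range(16)]
P_1 = build_P_i(stamps, n_time_bins=200, bin_width_s=100e-12)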
Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims. It should be understood that various dependent claims and the features described herein may be combined in ways different from those described in the original claims. It is also to be understood that features described in connection with individual embodiments may be used in other described embodiments.

Claims (9)

1. An active non-visual field array imaging method based on multi-point illumination, realized by an active non-visual field array imaging system based on multi-point illumination, the system comprising a narrow pulse laser (1), an intermediate surface (2), an array single-photon camera (3) and a target object (4), the narrow pulse laser (1) and the array single-photon camera (3) working synchronously, the method being characterized by comprising the following steps:
S1, placing the narrow pulse laser (1), the array single-photon camera (3) and the target object (4) on the same side of the intermediate surface (2), wherein the target object (4) is not in the field of view of the array single-photon camera (3), and the intermediate surface (2) is divided into an imaging area and a non-imaging area;
S2, the narrow pulse laser (1) emits laser n times, in a time-sharing manner, towards the non-imaging area of the intermediate surface (2); the laser emitted each time forms an illumination point on the non-imaging area of the intermediate surface (2), so that n illumination points are formed in total; at each illumination point the narrow pulse laser (1) emits m pulses, wherein m is an integer greater than or equal to 1; the positions of the n illumination points are all different; n is an integer greater than or equal to 2; the n illumination points correspond one-to-one to the n laser emissions of the narrow pulse laser (1);
each illumination point is used for illuminating the target object (4);
the propagation path of the laser emitted each time by the narrow pulse laser (1) is as follows: the narrow pulse laser (1) emits laser towards the non-imaging area of the intermediate surface (2); the laser is scattered a first time by the non-imaging area of the intermediate surface (2) and is incident on the target object (4), is scattered a second time by the target object (4) and is incident on the imaging area of the intermediate surface (2), and is scattered a third time by the imaging area of the intermediate surface (2) and is incident on the array single-photon camera (3);
S3, for each laser emission of the narrow pulse laser (1), the array single-photon camera (3) acquires the space-time information of the corresponding laser after the third scattering, thereby obtaining P_i; i is an integer, i = 1, 2, …, n;
wherein P_i is a matrix containing the time and space information of the laser after the third scattering corresponding to the i-th laser emission of the narrow pulse laser (1), as collected by the array single-photon camera (3);
S4, obtaining the light field transmission matrix H_i of the non-visual field imaging system corresponding to the i-th laser emission of the narrow pulse laser (1), which specifically comprises the following steps:
S41, constructing the point spread function H_i(L_i, S, O) of the non-visual field imaging system corresponding to the i-th laser emission of the narrow pulse laser (1);
H_i(L_i, S, O) = K · P_PL(L_i) · ρ(L_i) · G(R_i1, R_i2) · G(R_i2, R_i3) · ρ(S) · G(R_i3, R_i4)   (formula two);
wherein:
K is the responsivity of the array single-photon camera (3);
ρ(L_i) is the scattering coefficient of the non-imaging area at the i-th illumination point L_i on the intermediate surface (2);
P_PL(L_i) is the light intensity of the i-th laser emission of the narrow pulse laser (1), corresponding to the i-th illumination point L_i;
G(R_i1, R_i2) is the geometric scattering factor between R_i1 and R_i2;
R_i1 is the distance vector between the narrow pulse laser (1) and the illumination point L_i on the non-imaging area of the intermediate surface (2);
R_i2 is the distance vector of the laser starting from the i-th illumination point L_i and incident on the target object (4);
G(R_i2, R_i3) is the geometric scattering factor between R_i2 and R_i3;
R_i3 is the distance vector of the laser emitted from the i-th illumination point L_i, scattered by the target object (4), and incident on the imaging area of the intermediate surface (2);
G(R_i3, R_i4) is the geometric scattering factor between R_i3 and R_i4;
R_i4 is the distance vector of the laser emitted from the i-th illumination point L_i, scattered by the imaging area of the intermediate surface (2), and incident on the array single-photon camera (3);
ρ(S) is the scattering coefficient of the imaging area of the intermediate surface (2); O is the plane of the target object (4);
S is the imaging area of the intermediate surface (2);
S42, obtaining H_i from the point spread function H_i(L_i, S, O) of the non-visual field imaging system;
S5, carrying out image reconstruction using P_i and H_i to obtain the initial reconstructed image ρ_i corresponding to the i-th laser emission;
wherein H_i is the light field transmission matrix of the imaging system corresponding to the i-th laser emission of the narrow pulse laser (1);
S6, carrying out image fusion on the n initial reconstructed images ρ_1 to ρ_n, thereby obtaining the fused target reconstructed image I.
2. The active non-visual field array imaging method based on multi-point illumination according to claim 1, characterized in that in S5, the image reconstruction using P_i and H_i to obtain the initial reconstructed image ρ_i corresponding to the i-th laser emission is implemented as:
ρ_i = P_i · H_i^(-1)   (formula one).
3. The active non-visual field array imaging method based on multi-point illumination according to claim 1, characterized in that in S41, G(R_i1, R_i2) is implemented as:
G(R_i1, R_i2) = cos∠(R_i2, n_w) / |R_i2|²   (formula three);
wherein n_w is the normal of the intermediate surface;
∠(R_i2, n_w) represents the included angle between the vector R_i2 and the normal n_w of the intermediate surface.
4. The active non-visual field array imaging method based on multi-point illumination according to claim 1, characterized in that in S41, G(R_i2, R_i3) is implemented as:
G(R_i2, R_i3) = cos∠(R_i2, n_o) · cos∠(R_i3, n_o) / |R_i3|²   (formula four);
wherein:
n_o is the normal of the surface of the target object (4);
∠(R_i2, n_o) is the included angle between the vector R_i2 and the normal n_o of the surface of the target object (4);
∠(R_i3, n_o) is the included angle between the vector R_i3 and the normal n_o of the surface of the target object (4).
5. The active non-visual field array imaging method based on multi-point illumination according to claim 1, characterized in that in S41, G(R_i3, R_i4) is implemented as:
G(R_i3, R_i4) = cos∠(R_i3, n_w) · cos∠(R_i4, n_w) / |R_i4|²   (formula five);
wherein n_w is the normal of the intermediate surface;
∠(R_i4, n_w) is the included angle between the vector R_i4 and the normal n_w of the intermediate surface;
∠(R_i3, n_w) is the included angle between the vector R_i3 and the normal n_w of the intermediate surface.
6. The active non-visual field array imaging method based on multi-point illumination according to claim 1, characterized in that in S6, the image fusion of the n initial reconstructed images ρ_1 to ρ_n to obtain the fused target reconstructed image I is implemented as:
I = Σ_{i=1…n} w_i(ρ_1, ρ_2, …, ρ_n) · ρ_i   (formula six);
wherein w_i(ρ_1, ρ_2, …, ρ_n) is the weight function associated with ρ_1, ρ_2, …, ρ_n.
7. The active non-visual field array imaging method based on multi-point illumination according to claim 6, characterized in that w_i(ρ_1, ρ_2, …, ρ_n) is implemented by a weighted average function, and
w_i(ρ_1, ρ_2, …, ρ_n) = 1/n   (formula seven).
8. The active non-visual field array imaging method based on multi-point illumination according to claim 1, characterized in that n is in the range 3 ≤ n ≤ 9.
9. The active non-visual field array imaging method based on multi-point illumination according to claim 1, characterized in that the array single-photon camera (3) is implemented with a DTOF camera.
CN202110814980.9A 2021-07-19 2021-07-19 Active non-vision field array imaging method based on multi-point illumination Active CN113556476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110814980.9A CN113556476B (en) 2021-07-19 2021-07-19 Active non-vision field array imaging method based on multi-point illumination

Publications (2)

Publication Number Publication Date
CN113556476A CN113556476A (en) 2021-10-26
CN113556476B true CN113556476B (en) 2023-04-07

Family

ID=78132149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110814980.9A Active CN113556476B (en) 2021-07-19 2021-07-19 Active non-vision field array imaging method based on multi-point illumination

Country Status (1)

Country Link
CN (1) CN113556476B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106772428A (en) * 2016-12-15 2017-05-31 哈尔滨工业大学 A kind of non-ken three-dimensional image forming apparatus of no-raster formula photon counting and method
CN111694014A (en) * 2020-06-16 2020-09-22 中国科学院西安光学精密机械研究所 Laser non-visual field three-dimensional imaging scene modeling method based on point cloud model
CN111880194A (en) * 2020-08-10 2020-11-03 中国科学技术大学 Non-visual field imaging device and method
CN112444821A (en) * 2020-11-11 2021-03-05 中国科学技术大学 Remote non-visual field imaging method, apparatus, device and medium
CN112946990A (en) * 2021-05-13 2021-06-11 清华大学 Non-vision field dynamic imaging system based on confocal mode

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Non-line-of-sight imaging applications based on laser range-gated imaging; Xu Kaida et al.; Infrared and Laser Engineering; 2012-08-25 (No. 08); 2073-2078 *
Analysis of non-line-of-sight imaging characteristics based on laser range gating; Xu Kaida et al.; Acta Armamentarii; 2014-12-15 (No. 12); 2003-2009 *
Research status and development trends of non-line-of-sight imaging systems; Li Guodong et al.; Navigation and Control; 2020-02-05 (No. 01); 27-33 *

Also Published As

Publication number Publication date
CN113556476A (en) 2021-10-26

Similar Documents

Publication Publication Date Title
Steinvall et al. Gated viewing for target detection and target recognition
Bruno et al. Experimentation of structured light and stereo vision for underwater 3D reconstruction
Cho et al. Three-dimensional optical sensing and visualization using integral imaging
Levoy et al. Synthetic aperture confocal imaging
CN100524015C (en) Method and apparatus for generating range subject distance image
Repasi et al. Advanced short-wavelength infrared range-gated imaging for ground applications in monostatic and bistatic configurations
US7206131B2 (en) Optic array for three-dimensional multi-perspective low observable signature control
US10359277B2 (en) Imaging system with synchronized dynamic control of directable beam light source and reconfigurably masked photo-sensor
Berger et al. Depth from stereo polarization in specular scenes for urban robotics
KR102135177B1 (en) Method and apparatus for implemeting active imaging system
Liu et al. Simulation of light-field camera imaging based on ray splitting Monte Carlo method
CN114659635B (en) Spectral depth imaging device and method based on image surface segmentation light field
CN109708763A (en) Based on microlens array transmitting-receiving bidirectional continuous scanning near infrared imaging system
Dolin et al. Theory of imaging through wavy sea surface
Chandran et al. Adaptive lighting for data-driven non-line-of-sight 3d localization and object identification
Ma et al. Super-resolution and super-robust single-pixel superposition compound eye
CN206546159U (en) Microscopic three-dimensional measurement apparatus and system
Faccio Non-line-of-sight imaging
US20200018592A1 (en) Energy optimized imaging system with synchronized dynamic control of directable beam light source and reconfigurably masked photo-sensor
Boynton The visual system: Environmental information
CN113556476B (en) Active non-vision field array imaging method based on multi-point illumination
CN112802142A (en) Non-vision field imaging method and system
Zhao et al. Polarization-based approach for multipath interference mitigation in time-of-flight imaging
Choudhury et al. Simultaneous enhancement of scanning area and imaging speed for a MEMS mirror based high resolution LiDAR
AU2020408599A1 (en) Light field reconstruction method and system using depth sampling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant