CN113556476A - Active non-vision field array imaging method based on multi-point illumination - Google Patents


Info

Publication number
CN113556476A
Authority
CN
China
Prior art keywords
laser
illumination
imaging
point
array
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110814980.9A
Other languages
Chinese (zh)
Other versions
CN113556476B (en)
Inventor
靳辰飞
田小芮
马俊锋
杨杰
唐勐
乔凯
史晓洁
张思琦
刘丽萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202110814980.9A priority Critical patent/CN113556476B/en
Publication of CN113556476A publication Critical patent/CN113556476A/en
Application granted granted Critical
Publication of CN113556476B publication Critical patent/CN113556476B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/74Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00Special procedures for taking photographs; Apparatus therefor
    • G03B15/02Illuminating scene
    • G03B15/06Special arrangements of screening, diffusing, or reflecting devices, e.g. in studio
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/254Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)

Abstract

An active non-field-of-view array imaging method based on multi-point illumination, relating to the field of optical imaging. The invention addresses the problem that existing single-point active non-field-of-view array imaging systems can image a hidden target only at a specific position and angle and adapt poorly to the imaging area. The method is realized by an active non-field-of-view array imaging system based on multi-point illumination: by improving the active illumination mode to introduce multi-point illumination, the imaging quality of the active non-field-of-view imaging system is improved, its adaptability to the imaging area is enhanced, and imaging is no longer limited to hidden targets at specific positions and angles. Target image reconstruction through multi-point illumination comprises two main steps: first, the n groups of laser data collected by the array single-photon camera are reconstructed separately to obtain n initial reconstructed images; then the initial reconstructed images are fused by an image fusion method to obtain the final image reconstruction result. The method is mainly used for imaging hidden targets.

Description

Active non-vision field array imaging method based on multi-point illumination
Technical Field
The present invention relates to the field of optical imaging.
Background
In complex application scenarios such as urban combat, disaster relief, security and counter-terrorism, and autonomous driving, objects such as walls, street corners, and obstacles often block light completely. Under such conditions the field of view of a traditional imaging system is almost entirely limited: targets around corners cannot be seen, and direct imaging of them is practically impossible.
Therefore, with the rapid development of autonomous driving in recent years, non-field-of-view imaging techniques aimed specifically at this situation have emerged, based on computational optical imaging methods. Non-field-of-view imaging reconstructs an image of a target by capturing light scattered or reflected from the target's surroundings and then applying computational imaging algorithms. A schematic diagram of a prior-art active non-field-of-view imaging system is shown in fig. 1; its main components are a narrow-pulse laser, an intermediate surface, a target, and an array single-photon camera. The defining feature of a single-point active non-field-of-view imaging system is that there is only one illumination point and it is fixed. The feasibility of this approach has been verified experimentally, but it has a significant drawback: the system places high demands on the target's spatial position, three-dimensional angle, material, and scene structure; it can image a hidden target only at a specific position and angle; and it adapts poorly to the imaging area. In practical applications, a non-field-of-view imaging system must adapt well to a variety of imaging scenes, so these problems need to be solved.
Disclosure of Invention
The invention aims to solve the problems that the existing single-point active non-field-of-view imaging system can image hidden targets only at specific positions and angles and adapts poorly to the imaging area; to this end, the invention provides an active non-field-of-view array imaging method based on multi-point illumination.
The active non-field-of-view array imaging method based on multi-point illumination is realized by an active non-field-of-view array imaging system based on multi-point illumination. The system comprises a narrow-pulse laser, an intermediate surface, an array single-photon camera, and a target object; the narrow-pulse laser and the array single-photon camera operate synchronously. The method comprises the following steps:
s1, placing the narrow-pulse laser, the array single-photon camera, and the target object on the same side of the intermediate surface, with the target object outside the field of view of the array single-photon camera, and dividing the intermediate surface into an imaging area and a non-imaging area;
s2, emitting laser n times in a time-sharing manner from the narrow-pulse laser toward the non-imaging area of the intermediate surface, each emission forming one illumination point on the non-imaging area, n illumination points being formed in total; at each illumination point the narrow-pulse laser emits m pulses, m being an integer greater than or equal to 1; the positions of the n illumination points are all different; n is an integer greater than or equal to 2; the n illumination points correspond one-to-one with the n laser emissions of the narrow-pulse laser;
the illumination points serve to illuminate the target object;
the propagation path of the laser emitted by the narrow-pulse laser each time is as follows: the narrow-pulse laser emits laser toward the non-imaging area of the intermediate surface; the laser is scattered a first time by the non-imaging area of the intermediate surface and is incident on the target object, scattered a second time by the target object and incident on the imaging area of the intermediate surface, and scattered a third time by the imaging area of the intermediate surface and incident on the array single-photon camera;
s3, the array single-photon camera acquires the spatio-temporal information of the third-scattered laser corresponding to each emission of the narrow-pulse laser, thereby obtaining P_i; i is an integer, i = 1, 2, …, n;
wherein P_i is a matrix containing the time and space information carried by the third-scattered laser, corresponding to the i-th emission of the narrow-pulse laser, as acquired by the array single-photon camera;
s4, obtaining the light field transmission matrix H_i of the non-field-of-view imaging system corresponding to the i-th laser emission of the narrow-pulse laser;
s5, using P_i and H_i to carry out image reconstruction, obtaining the initial reconstructed image ρ_i corresponding to the i-th laser emission;
wherein H_i is the light field transmission matrix of the imaging system corresponding to the i-th laser emission of the narrow-pulse laser;
s6, carrying out image fusion on the n initial reconstructed images ρ_1 to ρ_n, thereby obtaining the fused target reconstructed image I.
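The pipeline of steps S3 to S6 can be mimicked numerically as a hedged sketch (the array sizes, the toy transmission matrices, and the use of a pseudo-inverse for H_i⁻¹ are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def reconstruct_single(P_i, H_i):
    """S5: initial reconstruction rho_i = P_i * H_i^-1 (formula one).
    A pseudo-inverse stands in for H_i^-1 as a hedge against an
    ill-conditioned transmission matrix."""
    return P_i @ np.linalg.pinv(H_i)

def fuse(initial_images, weights=None):
    """S6: weighted fusion of the n initial reconstructions; equal
    weights w_i = 1/n reproduce the weighted-average choice."""
    imgs = np.stack(initial_images)
    if weights is None:
        weights = np.full(len(imgs), 1.0 / len(imgs))
    return np.tensordot(weights, imgs, axes=1)

# Toy stand-ins for n = 3 illumination points: a known target row vector,
# three slightly different transmission matrices, and the measurements
# P_i = rho * H_i that each illumination point would produce.
rng = np.random.default_rng(0)
target = rng.random((1, 4))
H = [np.eye(4) + 0.01 * rng.standard_normal((4, 4)) for _ in range(3)]
P = [target @ h for h in H]

rhos = [reconstruct_single(P[i], H[i]) for i in range(3)]
I = fuse(rhos)  # should closely match the toy target
```

With well-conditioned toy matrices each initial reconstruction already recovers the target, so the fused result matches it too; the value of fusion in practice is averaging over the n differently illuminated, noisier reconstructions.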
Preferably, the implementation of S5, using P_i and H_i to carry out image reconstruction to obtain the initial reconstructed image ρ_i corresponding to the i-th laser emission, is as follows:
ρ_i = P_i H_i⁻¹ (formula one).
Preferably, the implementation of S4, obtaining the light field transmission matrix H_i of the non-field-of-view imaging system corresponding to the i-th laser emission of the narrow-pulse laser, comprises the following steps:
S41, constructing the point spread function H_i(L_i, S, O) of the non-field-of-view imaging system corresponding to the i-th laser emission of the narrow-pulse laser:
H_i(L_i, S, O) = K · P_PL(L_i) · ρ(L_i) · G(R_i1, R_i2) · G(R_i2, R_i3) · ρ(S) · G(R_i3, R_i4) (formula two);
wherein:
K is the responsivity of the array single-photon camera;
ρ(L_i) is the scattering coefficient of the non-imaging area at the i-th illumination point L_i on the intermediate surface;
P_PL(L_i) is the light intensity of the i-th laser emission of the narrow-pulse laser corresponding to the i-th illumination point L_i;
G(R_i1, R_i2) is the geometric scattering factor between R_i1 and R_i2;
R_i1 is the distance vector between the narrow-pulse laser and the illumination point L_i on the non-imaging area of the intermediate surface;
R_i2 is the distance vector of the laser from the i-th illumination point L_i to its point of incidence on the target object;
G(R_i2, R_i3) is the geometric scattering factor between R_i2 and R_i3;
R_i3 is the distance vector of the laser from the i-th illumination point L_i, after scattering by the target object, to its point of incidence on the imaging area of the intermediate surface;
G(R_i3, R_i4) is the geometric scattering factor between R_i3 and R_i4;
R_i4 is the distance vector of the laser from the i-th illumination point L_i, after scattering by the imaging area of the intermediate surface, to the array single-photon camera;
ρ(S) is the scattering coefficient of the imaging area on the intermediate surface; O is the target object plane;
S is the imaging area of the intermediate surface;
s42, obtaining H_i from the point spread function H_i(L_i, S, O) of the non-field-of-view imaging system.
Preferably, in S41, G(R_i1, R_i2) is implemented as follows:
G(R_i1, R_i2) = cos∠(R_i2, n_w) / |R_i2|² (formula three);
wherein n_w is the normal of the intermediate surface;
∠(R_i2, n_w) denotes the included angle between the vector R_i2 and the intermediate-surface normal n_w.
Preferably, in S41, G(R_i2, R_i3) is implemented as follows:
G(R_i2, R_i3) = cos∠(R_i2, n_o) · cos∠(R_i3, n_o) / |R_i3|² (formula four);
wherein:
n_o is the normal of the surface of the target object;
∠(R_i2, n_o) is the included angle between the vector R_i2 and the target-surface normal n_o;
∠(R_i3, n_o) is the included angle between the vector R_i3 and the target-surface normal n_o.
Preferably, in S41, G(R_i3, R_i4) is implemented as follows:
G(R_i3, R_i4) = cos∠(R_i3, n_w) · cos∠(R_i4, n_w) / |R_i4|² (formula five);
wherein n_w is the normal of the intermediate surface;
∠(R_i4, n_w) is the included angle between the vector R_i4 and the intermediate-surface normal n_w;
∠(R_i3, n_w) is the included angle between the vector R_i3 and the intermediate-surface normal n_w.
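The geometric scattering factors can be sketched numerically. The cosine-foreshortening-over-squared-distance form below follows the standard Lambertian scattering model and is an assumption (the original formulas are rendered as images in this text, so the normals, vectors, and exact choice of denominator here are illustrative):

```python
import numpy as np

def cosang(v, n):
    """Cosine of the angle between vector v and unit normal n."""
    return abs(np.dot(v, n)) / (np.linalg.norm(v) * np.linalg.norm(n))

n_w = np.array([1.0, 0.0, 0.0])   # intermediate-surface (wall) normal
n_o = np.array([-1.0, 0.0, 0.0])  # target-surface normal

def G12(R2):
    """Wall -> target leg: foreshortening at the wall, 1/|R2|^2 falloff."""
    return cosang(R2, n_w) / np.dot(R2, R2)

def G23(R2, R3):
    """Target -> wall leg: foreshortening at both target interactions."""
    return cosang(R2, n_o) * cosang(R3, n_o) / np.dot(R3, R3)

def G34(R3, R4):
    """Wall -> camera leg: foreshortening at both wall interactions."""
    return cosang(R3, n_w) * cosang(R4, n_w) / np.dot(R4, R4)

# A leg leaving the wall at 45 degrees with length sqrt(2):
g = G12(np.array([1.0, 1.0, 0.0]))  # cos(45 deg) / 2
```

Each factor uses only the vectors and normals of its own propagation stage, mirroring how the definitions above are grouped.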
Preferably, the implementation of S6, carrying out image fusion on the n initial reconstructed images ρ_1 to ρ_n to obtain the fused target reconstructed image I, is as follows:
I = Σ_{i=1}^{n} w_i(ρ_1, ρ_2, …, ρ_n) · ρ_i (formula six);
wherein w_i(ρ_1, ρ_2, …, ρ_n) is the weight function associated with ρ_1, ρ_2, …, ρ_n.
Preferably, w_i(ρ_1, ρ_2, …, ρ_n) is implemented by a weighted-average function, and
w_i(ρ_1, ρ_2, …, ρ_n) = 1/n.
Preferably, the value of n is in the range 3 ≤ n ≤ 9.
Preferably, the array single photon camera is implemented by a DTOF camera.
The invention has the following beneficial effects: by improving the active illumination mode to introduce multi-point illumination, the invention improves the imaging quality of the active non-field-of-view imaging system, enhances its adaptability to the imaging area, and is no longer limited to imaging hidden targets at specific positions and angles. The applicable scenes are therefore wider, and an effective solution is provided for image reconstruction of hidden targets at non-specific positions and angles.
The present invention is an improvement over existing single-illumination-point active non-field-of-view imaging systems. The multi-point illumination mode provided by the invention uses a small number of illumination points, far fewer than the number of scan points of a prior-art confocal scanning system, and no fast-scanning device is needed. In other words, the multi-point-illumination active non-field-of-view array imaging system of the invention omits the fast-scanning step: only a few illumination points need to be selected, the laser emitted from each illumination point reaches the array single-photon camera after multiple scattering for signal acquisition, the whole imaging method requires little data, and the process is simple and convenient to realize.
In practical application of the active non-field-of-view array imaging method based on multi-point illumination, even if the position and angle of a target are not in the optimal area of the imaging system, an accurate target image can still be reconstructed by selecting several illumination points to provide illumination compensation for the target. By changing the positions of the illumination points, images of targets at many positions and angles can be reconstructed, greatly improving both the imaging quality of the active non-field-of-view imaging system and its adaptability to the imaging area.
The method has practical application value: for a hidden target in practical applications, target images at various positions and different angles can be reconstructed by using different illumination point positions.
Drawings
FIG. 1 is a schematic diagram of the prior-art active non-field-of-view imaging system described in the background art;
FIG. 2 is a schematic diagram of the principle of formation of an illumination spot;
FIG. 3 is a schematic diagram of a multi-spot illuminated active non-field-of-view array imaging system according to the present invention with illumination spot locations selected;
fig. 4 is a schematic diagram of the propagation path of the laser emitted each time by the narrow-pulse laser 1; in the figure, each emission of the narrow-pulse laser 1 is divided into 4 propagation stages, and S_j denotes the j-th detection point in the imaging area of the intermediate surface 2, where j is an integer.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
This embodiment is described with specific reference to fig. 2 and fig. 3. The active non-field-of-view array imaging method based on multi-point illumination according to this embodiment is implemented by an active non-field-of-view array imaging system based on multi-point illumination, which comprises a narrow-pulse laser 1, an intermediate surface 2, an array single-photon camera 3, and a target object 4; the narrow-pulse laser 1 and the array single-photon camera 3 operate synchronously. The method comprises the following steps:
s1, placing the narrow-pulse laser 1, the array single-photon camera 3, and the target object 4 on the same side of the intermediate surface 2, with the target object 4 outside the field of view of the array single-photon camera 3, and dividing the intermediate surface 2 into an imaging area and a non-imaging area;
s2, emitting laser n times in a time-sharing manner from the narrow-pulse laser 1 toward the non-imaging area of the intermediate surface 2, each emission forming one illumination point on the non-imaging area of the intermediate surface 2, n illumination points being formed in total; at each illumination point the narrow-pulse laser 1 emits m pulses, m being an integer greater than or equal to 1; the positions of the n illumination points are all different; n is an integer greater than or equal to 2; the n illumination points correspond one-to-one with the n laser emissions of the narrow-pulse laser 1;
the illumination points serve to illuminate the target object 4;
the propagation path of the laser emitted by the narrow-pulse laser 1 each time is as follows: the narrow-pulse laser 1 emits laser toward the non-imaging area of the intermediate surface 2; the laser is scattered a first time by the non-imaging area of the intermediate surface 2 and is incident on the target object 4, scattered a second time by the target object 4 and incident on the imaging area of the intermediate surface 2, and scattered a third time by the imaging area of the intermediate surface 2 and incident on the array single-photon camera 3;
s3, the array single-photon camera 3 acquires the spatio-temporal information of the third-scattered laser corresponding to each emission of the narrow-pulse laser 1, thereby obtaining P_i; i is an integer, i = 1, 2, …, n;
wherein P_i is a matrix containing the time and space information carried by the third-scattered laser, corresponding to the i-th emission of the narrow-pulse laser 1, as acquired by the array single-photon camera 3;
s4, obtaining the light field transmission matrix H_i of the non-field-of-view imaging system corresponding to the i-th laser emission of the narrow-pulse laser 1;
s5, using P_i and H_i to carry out image reconstruction, obtaining the initial reconstructed image ρ_i corresponding to the i-th laser emission;
wherein H_i is the light field transmission matrix of the imaging system corresponding to the i-th laser emission of the narrow-pulse laser 1;
s6, carrying out image fusion on the n initial reconstructed images ρ_1 to ρ_n, thereby obtaining the fused target reconstructed image I.
In this embodiment, multi-point illumination is introduced by improving the active illumination mode, which improves the imaging quality of the active non-field-of-view imaging system, enhances its adaptability to the imaging area, and removes the restriction to imaging hidden targets at specific positions and angles; the applicable scenes are wider, and an effective solution is provided for image reconstruction of hidden targets at non-specific positions and angles.
In the multi-point-illumination active non-field-of-view imaging system, multi-point illumination means that several illumination points are formed on the intermediate surface 2; each illumination point corresponds to one group of laser data acquired by the array single-photon camera 3, and the target image is reconstructed by processing the groups of laser data with a computational imaging algorithm. The imaging process requires little data and computation, and is simple, efficient, and convenient to realize.
The process of target image reconstruction through multi-point illumination can be divided into two main steps: first, the n groups of laser data collected by the array single-photon camera 3 are reconstructed separately to obtain n initial reconstructed images; then the initial reconstructed images are fused by an image fusion method to obtain the final image reconstruction result. Each group of data acquired by the array single-photon camera 3 is the third-scattered laser signal corresponding to one emission of the narrow-pulse laser 1.
The multi-point-illumination active non-field-of-view array imaging system is realized by the narrow-pulse laser 1, the intermediate surface 2, the array single-photon camera 3, and the target object 4, so the system structure is simple. The whole transmission process, from the laser leaving the narrow-pulse laser 1 until its collection by the array single-photon camera 3, is divided into 4 stages, referring specifically to fig. 4: in the first stage, the laser travels from the narrow-pulse laser 1 to the intermediate surface 2 and forms an illumination point there; in the second stage, the laser scattered from the illumination point (the first scattering) is incident on the target object 4; in the third stage, after the second scattering by the target object 4, the laser is incident on the imaging area of the intermediate surface 2, where detection points corresponding to the pixels of the array single-photon camera 3 are formed; in the fourth stage, after the third scattering by the intermediate surface 2, the laser is incident on the array single-photon camera 3, and this third-scattering signal carries the information of the target.
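Because every detected photon traverses the four stages above, its arrival time at the array single-photon camera encodes the total length of the four-segment path divided by the speed of light; this is what makes the acquired matrix P_i time-resolved. A small sketch under an assumed geometry (all positions in metres are illustrative, not from the patent):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Illustrative positions (metres): laser, illumination point on the wall,
# a point on the hidden target, a detection point on the wall, camera.
laser  = np.array([0.0, 0.0, 0.0])
L_i    = np.array([2.0, 0.0, 0.0])
target = np.array([2.5, 1.5, 0.0])
S_j    = np.array([2.0, 0.5, 0.0])
camera = np.array([0.0, 0.5, 0.0])

def path_time(points):
    """Total time of flight along consecutive straight-line segments."""
    dist = sum(np.linalg.norm(b - a) for a, b in zip(points[:-1], points[1:]))
    return dist / C

t = path_time([laser, L_i, target, S_j, camera])  # four-stage round trip
```

Inverting such arrival times over all detection points S_j is what the light field transmission matrix H_i encapsulates for each illumination point.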
In practical application of the active non-field-of-view array imaging method based on multi-point illumination, even if the position and angle of a target are not in the optimal area of the imaging system, an accurate target image can still be reconstructed by selecting several illumination points to provide illumination compensation for the target. By changing the positions of the illumination points, images of targets at many positions and angles can be reconstructed, greatly improving both the imaging quality of the active non-field-of-view imaging system and its adaptability to the imaging area.
The method has practical application value: for a hidden target in practical applications, target images at various positions and different angles can be reconstructed by using different illumination point positions.
Further, the implementation of S5, using P_i and H_i to carry out image reconstruction to obtain the initial reconstructed image ρ_i corresponding to the i-th laser emission, is as follows:
ρ_i = P_i H_i⁻¹ (formula one).
In this preferred embodiment, the initial reconstructed image ρ_i corresponding to the i-th laser emission can be obtained from P_i and H_i through the matrix inversion of formula one, and the implementation process is simple.
Further, referring to fig. 4, the implementation of S4, obtaining the light field transmission matrix H_i of the non-field-of-view imaging system corresponding to the i-th laser emission of the narrow-pulse laser 1, comprises the following steps:
S41, constructing the point spread function H_i(L_i, S, O) of the non-field-of-view imaging system corresponding to the i-th laser emission of the narrow-pulse laser 1:
H_i(L_i, S, O) = K · P_PL(L_i) · ρ(L_i) · G(R_i1, R_i2) · G(R_i2, R_i3) · ρ(S) · G(R_i3, R_i4) (formula two);
wherein:
K is the responsivity of the array single-photon camera 3;
ρ(L_i) is the scattering coefficient of the non-imaging area at the i-th illumination point L_i on the intermediate surface 2;
P_PL(L_i) is the light intensity of the i-th laser emission of the narrow-pulse laser 1 corresponding to the i-th illumination point L_i;
G(R_i1, R_i2) is the geometric scattering factor between R_i1 and R_i2;
R_i1 is the distance vector between the narrow-pulse laser 1 and the illumination point L_i on the non-imaging area of the intermediate surface 2;
R_i2 is the distance vector of the laser from the i-th illumination point L_i to its point of incidence on the target object 4;
G(R_i2, R_i3) is the geometric scattering factor between R_i2 and R_i3;
R_i3 is the distance vector of the laser from the i-th illumination point L_i, after scattering by the target object 4, to its point of incidence on the imaging area of the intermediate surface 2;
G(R_i3, R_i4) is the geometric scattering factor between R_i3 and R_i4;
R_i4 is the distance vector of the laser from the i-th illumination point L_i, after scattering by the imaging area of the intermediate surface 2, to the array single-photon camera 3;
ρ(S) is the scattering coefficient of the imaging area on the intermediate surface 2; O is the plane of the target object 4;
S is the imaging area of the intermediate surface 2;
S42, obtaining H_i from the point spread function H_i(L_i, S, O) of the non-field-of-view imaging system.
In this preferred embodiment, a specific way of constructing the light field transmission matrix H_i of the non-field-of-view imaging system corresponding to the i-th laser emission of the narrow-pulse laser 1 is given; the whole construction is based on the 4 propagation stages of the laser emitted by the narrow-pulse laser 1.
Further, with specific reference to fig. 4, in S41, G(R_i1, R_i2) is implemented as follows:
G(R_i1, R_i2) = cos∠(R_i2, n_w) / |R_i2|² (formula three);
wherein n_w is the normal of the intermediate surface;
∠(R_i2, n_w) denotes the included angle between the vector R_i2 and the intermediate-surface normal n_w.
In this preferred embodiment, a specific implementation for obtaining G(R_i1, R_i2) is given; the implementation process is simple and uses the relevant information of the corresponding propagation stage, so it is convenient to realize.
Further, with specific reference to fig. 4, in S41, G(R_i2, R_i3) is implemented as follows:
G(R_i2, R_i3) = cos∠(R_i2, n_o) · cos∠(R_i3, n_o) / |R_i3|² (formula four);
wherein:
n_o is the normal of the surface of the target object 4;
∠(R_i2, n_o) is the included angle between the vector R_i2 and the normal n_o of the surface of the target object 4;
∠(R_i3, n_o) is the included angle between the vector R_i3 and the normal n_o of the surface of the target object 4.
In this preferred embodiment, a specific implementation for obtaining G(R_i2, R_i3) is given; the implementation process is simple and uses the relevant information of the corresponding propagation stage, so it is convenient to realize.
Further, with specific reference to fig. 4, in S41, G(R_i3, R_i4) is implemented as follows:
G(R_i3, R_i4) = cos∠(R_i3, n_w) · cos∠(R_i4, n_w) / |R_i4|² (formula five);
wherein n_w is the normal of the intermediate surface;
∠(R_i4, n_w) is the included angle between the vector R_i4 and the intermediate-surface normal n_w;
∠(R_i3, n_w) is the included angle between the vector R_i3 and the intermediate-surface normal n_w.
In this preferred embodiment, a specific implementation for obtaining G(R_i3, R_i4) is given; the implementation process is simple and uses the relevant information of the corresponding propagation stage, so it is convenient to realize.
Further, the implementation of S6, carrying out image fusion on the n initial reconstructed images ρ_1 to ρ_n to obtain the fused target reconstructed image I, is as follows:
I = Σ_{i=1}^{n} w_i(ρ_1, ρ_2, …, ρ_n) · ρ_i (formula six);
wherein w_i(ρ_1, ρ_2, …, ρ_n) is the weight function associated with ρ_1, ρ_2, …, ρ_n.
This embodiment provides a specific implementation of image fusion; the operation is simple and convenient to realize.
Further, w_i(ρ_1, ρ_2, …, ρ_n) is implemented by a weighted-average function, and
w_i(ρ_1, ρ_2, …, ρ_n) = 1/n.
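As a minimal numerical check of the equal-weight choice w_i = 1/n (the toy arrays below are assumed for illustration, not taken from the patent), the fusion of formula six reduces to an arithmetic mean; when the errors of the initial reconstructions partially cancel, the fused image lands closer to the true target:

```python
import numpy as np

def fuse_equal(initial_images):
    """Formula six with w_i = 1/n: a plain arithmetic mean of the n
    initial reconstructed images."""
    return np.mean(np.stack(initial_images), axis=0)

truth = np.array([[0.0, 1.0], [1.0, 0.0]])
d = np.array([[0.1, -0.1], [0.05, -0.05]])
# Two initial reconstructions with opposite-signed errors:
rho_1, rho_2 = truth + d, truth - d
I = fuse_equal([rho_1, rho_2])  # the opposing errors cancel here
```

In practice the per-illumination-point errors are not exactly opposite, but averaging over n differently illuminated reconstructions still suppresses them.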
Furthermore, the value of n is in the range 3 ≤ n ≤ 9.
Further, the array single photon camera 3 is realized by a DTOF camera.
Further, the narrow pulse laser 1 is realized by a picosecond or femtosecond laser.
Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims. It should be understood that features described in different dependent claims and herein may be combined in ways different from those described in the original claims. It is also to be understood that features described in connection with individual embodiments may be used in other described embodiments.

Claims (10)

1. The active non-visual field array imaging method based on the multipoint illumination is realized by an active non-visual field array imaging system based on the multipoint illumination, the active non-visual field array imaging system based on the multipoint illumination comprises a narrow pulse laser (1), an intermediate surface (2), an array single photon camera (3) and a target object (4), and the narrow pulse laser (1) and the array single photon camera (3) work synchronously, and the method is characterized by comprising the following steps of:
S1, placing the narrow pulse laser (1), the array single photon camera (3) and the target object (4) on the same side of the intermediate surface (2), wherein the target object (4) is not in the field of view of the array single photon camera (3), and dividing the intermediate surface (2) into an imaging area and a non-imaging area;
S2, emitting laser n times from the narrow pulse laser (1) onto the non-imaging area of the intermediate surface (2), each emission forming one illumination point on the non-imaging area, forming n illumination points in total, the narrow pulse laser (1) emitting m pulses at each illumination point, wherein m is an integer greater than or equal to 1; the positions of the n illumination points are different from one another; n is an integer greater than or equal to 2; the n illumination points correspond one-to-one to the n laser emissions of the narrow pulse laser (1);
the illumination points are used for illuminating the target object (4);
the propagation path of the laser emitted by the narrow pulse laser (1) each time is as follows: the narrow pulse laser (1) emits laser onto the non-imaging area of the intermediate surface (2); after a first scattering by the non-imaging area of the intermediate surface (2), the laser is incident on the target object (4); after a second scattering by the target object (4), it is incident on the imaging area of the intermediate surface (2); and after a third scattering by the imaging area of the intermediate surface (2), it is incident on the array single photon camera (3);
S3, the array single photon camera (3) collecting the space-time information of the laser after the third scattering corresponding to each laser emission of the narrow pulse laser (1), thereby obtaining P_i; i is an integer, i = 1, 2, ..., n;
wherein P_i is a matrix containing the time and space information of the laser after the third scattering corresponding to the laser emitted by the narrow pulse laser (1) for the ith time, as collected by the array single photon camera (3);
S4, obtaining the non-visual field imaging system light field transmission matrix H_i corresponding to the laser emitted by the narrow pulse laser (1) for the ith time;
S5, using P_i and H_i to carry out image reconstruction to obtain an initial reconstructed image ρ_i corresponding to the ith emitted laser;
wherein H_i is the light field transmission matrix of the imaging system corresponding to the laser emitted by the narrow pulse laser (1) for the ith time;
S6, performing image fusion on the n initial reconstructed images ρ_1 to ρ_n, thereby obtaining a fused target reconstruction image I.
2. The active non-view array imaging method based on multi-point illumination according to claim 1, characterized in that in S5, using P_i and H_i to carry out image reconstruction to obtain the initial reconstructed image ρ_i corresponding to the ith emitted laser is implemented as:
ρ_i = P_i·H_i⁻¹ (formula one).
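Formula one can be illustrated with a toy sketch. This is not the patent's implementation; the function name is assumed, and a pseudo-inverse is used in place of a strict inverse so the sketch also tolerates an ill-conditioned transmission matrix:

```python
import numpy as np

def reconstruct(P, H):
    """Initial reconstruction per formula one: rho_i = P_i * H_i^(-1).
    np.linalg.pinv reduces to the exact inverse when H is invertible."""
    return P @ np.linalg.pinv(H)

# toy forward model: measurements P generated from a known rho
H = np.array([[2.0, 0.0],
              [1.0, 1.0]])                 # invertible "transmission matrix"
rho_true = np.array([[1.0, 2.0],
                     [3.0, 4.0]])
P = rho_true @ H                           # simulated measurement matrix
rho = reconstruct(P, H)                    # recovers rho_true
```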
3. The active non-visual field array imaging method based on multi-point illumination according to claim 1, characterized in that in S4, obtaining the non-visual field imaging system light field transmission matrix H_i corresponding to the laser emitted by the narrow pulse laser (1) for the ith time comprises the following steps:
S41, constructing the point spread function H_i(L_i, S, O) of the non-visual field imaging system corresponding to the ith laser emitted by the narrow pulse laser (1):
H_i(L_i, S, O) = K·P_PL(L_i)·ρ(L_i)·G(R_i1, R_i2)·G(R_i2, R_i3)·ρ(S)·G(R_i3, R_i4) (formula two);
wherein:
K is the responsivity of the array single photon camera (3);
ρ(L_i) is the scattering coefficient of the non-imaging area at the ith illumination point L_i on the intermediate surface (2);
P_PL(L_i) is the light intensity, at the ith illumination point L_i, of the laser emitted by the narrow pulse laser (1) for the ith time;
G(R_i1, R_i2) is the geometric scattering factor between R_i1 and R_i2;
R_i1 is the distance vector between the narrow pulse laser (1) and the illumination point L_i on the non-imaging area of the intermediate surface (2);
R_i2 is the distance vector along which the laser travels from the ith illumination point L_i to the target object (4);
G(R_i2, R_i3) is the geometric scattering factor between R_i2 and R_i3;
R_i3 is the distance vector along which the laser from the ith illumination point L_i, after being scattered by the target object (4), is incident on the imaging area of the intermediate surface (2);
G(R_i3, R_i4) is the geometric scattering factor between R_i3 and R_i4;
R_i4 is the distance vector along which the laser from the ith illumination point L_i, after being scattered by the imaging area of the intermediate surface (2), is incident on the array single photon camera (3);
ρ(S) is the scattering coefficient of the imaging area on the intermediate surface (2);
O is the plane of the target object (4);
S is the imaging area of the intermediate surface (2);
S42, obtaining H_i from the point spread function H_i(L_i, S, O) of the non-visual field imaging system.
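For a fixed geometry, formula two is a product of scalar factors. The following sketch transcribes that product directly; the function name and the toy values are illustrative assumptions, not from the patent:

```python
def point_spread_value(K, P_PL, rho_L, g12, g23, rho_S, g34):
    """Formula two, H_i(L_i, S, O) = K * P_PL(L_i) * rho(L_i)
    * G(R_i1,R_i2) * G(R_i2,R_i3) * rho(S) * G(R_i3,R_i4),
    evaluated for one illumination point and one geometry."""
    return K * P_PL * rho_L * g12 * g23 * rho_S * g34

# toy numbers: camera responsivity 0.5, unit intensity, diffuse surfaces
h = point_spread_value(K=0.5, P_PL=1.0, rho_L=0.8,
                       g12=0.1, g23=0.2, rho_S=0.8, g34=0.3)
```

Evaluating this product over all illumination points, imaging-area positions, and object-plane positions fills in the transmission matrix H_i of S42.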
4. The active non-view array imaging method based on multi-point illumination according to claim 3, characterized in that in S41, G(R_i1, R_i2) is implemented as:
G(R_i1, R_i2) = cos∠(R_i2, n_w) / |R_i2|²
wherein n_w is the normal of the intermediate surface;
∠(R_i2, n_w) represents the angle between the vector R_i2 and the intermediate surface normal n_w.
5. The active non-view array imaging method based on multi-point illumination according to claim 3, characterized in that in S41, G(R_i2, R_i3) is implemented as:
G(R_i2, R_i3) = cos∠(R_i2, n_o)·cos∠(R_i3, n_o) / |R_i3|²
wherein:
n_o is the normal of the surface of the target object (4);
∠(R_i2, n_o) is the angle between the vector R_i2 and the normal n_o of the surface of the target object (4);
∠(R_i3, n_o) is the angle between the vector R_i3 and the normal n_o of the surface of the target object (4).
6. The active non-view array imaging method based on multi-point illumination according to claim 3, characterized in that in S41, G(R_i3, R_i4) is implemented as:
G(R_i3, R_i4) = cos∠(R_i3, n_w)·cos∠(R_i4, n_w) / |R_i4|²
wherein n_w is the normal of the intermediate surface;
∠(R_i4, n_w) is the angle between the vector R_i4 and the intermediate surface normal n_w;
∠(R_i3, n_w) is the angle between the vector R_i3 and the intermediate surface normal n_w.
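A common reading of the geometric scattering factors G(·,·) in claims 4 to 6 is the Lambertian cosine / inverse-square form over the outgoing segment; the sketch below assumes that form (an assumption of this illustration; function name and geometry are hypothetical):

```python
import numpy as np

def geometric_factor(r_out, normal):
    """cos(angle(r_out, normal)) / |r_out|^2 for one Lambertian bounce;
    r_out is the outgoing distance vector, normal the unit surface normal."""
    r = np.linalg.norm(r_out)
    cos_theta = float(np.dot(r_out, normal)) / r
    return cos_theta / r**2

# light leaving a wall with normal +z, travelling 2 m along the normal
g = geometric_factor(np.array([0.0, 0.0, 2.0]), np.array([0.0, 0.0, 1.0]))
```

For the two-cosine factors of claims 5 and 6, the product of the incoming and outgoing cosines would replace the single cosine above.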
7. The active non-view array imaging method based on multi-point illumination according to claim 1, characterized in that in S6, performing image fusion on the n initial reconstructed images ρ_1 to ρ_n to obtain the fused target reconstruction image I is implemented as:
I = Σ_{i=1}^{n} w_i(ρ_1, ρ_2, ..., ρ_n)·ρ_i
wherein w_i(ρ_1, ρ_2, ..., ρ_n) is the weight function associated with ρ_1, ρ_2, ..., ρ_n.
8. The active non-view array imaging method based on multi-point illumination according to claim 7, characterized in that w_i(ρ_1, ρ_2, ..., ρ_n) is implemented by a weighted average function, and
w_i(ρ_1, ρ_2, ..., ρ_n) = 1/n.
9. The active non-vision field array imaging method based on multi-point illumination according to claim 1, characterized in that n is in the range 3 ≤ n ≤ 9.
10. The active non-view-field array imaging method based on multi-point illumination according to claim 1, characterized in that the array single photon camera (3) is implemented with a DTOF camera.
CN202110814980.9A 2021-07-19 2021-07-19 Active non-vision field array imaging method based on multi-point illumination Active CN113556476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110814980.9A CN113556476B (en) 2021-07-19 2021-07-19 Active non-vision field array imaging method based on multi-point illumination


Publications (2)

Publication Number Publication Date
CN113556476A true CN113556476A (en) 2021-10-26
CN113556476B CN113556476B (en) 2023-04-07

Family

ID=78132149



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106772428A (en) * 2016-12-15 2017-05-31 哈尔滨工业大学 A kind of non-ken three-dimensional image forming apparatus of no-raster formula photon counting and method
CN111694014A (en) * 2020-06-16 2020-09-22 中国科学院西安光学精密机械研究所 Laser non-visual field three-dimensional imaging scene modeling method based on point cloud model
CN111880194A (en) * 2020-08-10 2020-11-03 中国科学技术大学 Non-visual field imaging device and method
CN112444821A (en) * 2020-11-11 2021-03-05 中国科学技术大学 Remote non-visual field imaging method, apparatus, device and medium
CN112946990A (en) * 2021-05-13 2021-06-11 清华大学 Non-vision field dynamic imaging system based on confocal mode


Non-Patent Citations (3)

Title
Li Guodong et al., "Research status and development trend of non-line-of-sight imaging systems", Navigation and Control *
Xu Kaida et al., "Application of non-line-of-sight imaging based on laser range-gated imaging", Infrared and Laser Engineering *
Xu Kaida et al., "Analysis of non-line-of-sight imaging characteristics based on laser range gating", Acta Armamentarii *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant