CN116129055A - Composite bionic ghost imaging method and system - Google Patents

Composite bionic ghost imaging method and system Download PDF

Info

Publication number
CN116129055A
CN116129055A CN202310126612.4A
Authority
CN
China
Prior art keywords
target
imaging
detector
value
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310126612.4A
Other languages
Chinese (zh)
Inventor
曹杰
张镐宇
郝群
崔焕
姜玉秀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202310126612.4A priority Critical patent/CN116129055A/en
Publication of CN116129055A publication Critical patent/CN116129055A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B26/00Optical devices or arrangements for the control of light using movable or deformable optical elements
    • G02B26/08Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
    • G02B26/0816Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements
    • G02B26/0833Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements the reflecting element being a micromechanical device, e.g. a MEMS mirror, DMD
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V8/00Prospecting or detecting by optical means
    • G01V8/10Detecting, e.g. by using light barriers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geophysics (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a composite bionic ghost imaging method and system. The method comprises: detecting the two-dimensional motion of a target against a complex background based on a feedback-type axially vibrating retina technique, and acquiring the azimuth information and angular velocity of the target; performing imaging-free target identification on the signal from a specific direction according to the azimuth information and angular velocity; and determining the sub-field of view in which the detected target is located according to the identification result, and performing variable-resolution ghost imaging on that field of view. The invention uses multiple detectors to realize a ghost imaging technique with a large field of view and efficient perception, and integrates detection, identification and tracking.

Description

Composite bionic ghost imaging method and system
Technical Field
The invention belongs to the technical field of photoelectric detection and perception, and particularly relates to a composite bionic ghost imaging method and system.
Background
Photoelectric sensing is indispensable in many application fields, such as security monitoring, autonomous driving, intelligent manufacturing, identification friend-or-foe, and intelligent reconnaissance, spanning both civil and defense use. Seen from the optical system toward the target, photoelectric sensing comprises three key stages: detection, identification and tracking. In most fields these three stages operate as a whole, but because each stage serves a different function, its requirements on the sensing system also differ. Detection favors a large field of view for searching the target; once the target is found, identification relies on high-resolution imaging, so higher resolution is required at the identification stage; after the target is identified, attention is focused on it and it must be followed quickly, so the tracking stage demands good real-time performance. A system that integrates detection, identification and tracking therefore offers clear advantages and suits a wider range of scenes. Current photoelectric imaging systems meet these requirements only partially, mainly for two reasons: on the one hand, the functional requirements of each stage differ while an imaging sensor has a single function and can hardly satisfy them all; on the other hand, the overlap of functions across stages produces highly redundant data, making real-time processing even harder. How to develop a more effective photoelectric sensor that meets the needs of integrated detection, identification and tracking has therefore become an urgent problem.
In recent years, ghost imaging has achieved spatially resolved imaging by pairing a single-pixel detector with light-intensity fluctuations. The approach has a simple structure and strong anti-interference capability and has been applied in many scenes, but because ghost imaging requires many correlated measurements, its imaging efficiency still needs improvement. Researchers are therefore studying how to raise ghost-imaging efficiency, and parallel ghost imaging with multiple detectors has become one of the important ways to improve reconstruction quality. Taking a four-quadrant detector as an example, using its 4 detectors to reconstruct the target image improves reconstruction efficiency by a factor of 4 compared with a single detector. The four-quadrant detector also has excellent tracking performance, making this ghost-imaging mode a promising route to both high-resolution imaging and efficient tracking. For efficient imaging over a large field of view, however, ghost imaging still faces technical bottlenecks. For large-field imaging, the biomimetic compound eye of insects offers a good insight; a dragonfly, for example, can perceive a 360-degree scene. For efficient perception, the variable-resolution characteristic of the human eye provides dynamic, adaptive allocation of resources, always keeping resolving power on the target of interest. With the rapid development of computing power and the semiconductor industry, computational imaging, represented by ghost imaging, offers more flexible configuration and provides a new solution for more intelligent integrated detection, identification and tracking.
In view of the above, the invention combines the insect compound eye with the variable-resolution imaging perception of the human eye to form composite bionic ghost imaging, providing different working modes at different stages and thereby achieving better-matched perception capability.
Disclosure of Invention
The present invention has been made to solve the above problems in the prior art. A composite bionic ghost imaging method and system are therefore needed that use multiple detectors to realize ghost imaging with a large field of view and efficient perception, integrating detection, identification and tracking.
According to a first aspect of the present invention, there is provided a composite biomimetic ghost imaging method, the method comprising:
detecting the two-dimensional motion of a target against a complex background using a feedback-type axially vibrating retina technique, and acquiring the azimuth information and angular velocity of the target;
performing imaging-free target identification on the signal from a specific direction according to the azimuth information and angular velocity of the target;
determining the sub-field of view in which the detected target is located according to the target identification result, and performing variable-resolution ghost imaging on that field of view.
Further, detecting the two-dimensional motion of the target against a complex background using the feedback-type axially vibrating retina technique and acquiring the azimuth information and angular velocity of the target comprises the following steps:
when detecting the target azimuth, first collecting data in the corresponding field of view and recording the output voltage values of the detector array for the target in each azimuth; receiving the target information with the photoelectric detection system; obtaining a voltage-value sequence for targets in different azimuths from the different voltage outputs produced by detectors at different positions; and calculating the azimuth information of the target from the voltage-value sequence.
Further, calculating the azimuth information of the target from the voltage-value sequence comprises:
taking the voltage-value sequence as the input I_in of a neural network; according to the ideal target position I_sta, training the neural network to find the weights and excitation values that map the input I_in to the output I_sta, and configuring the learned weights and excitation values into the network;
based on the trained neural network, calculating the azimuth information of a target at an unknown position from its corresponding voltage-value sequence and the network weights and excitation values.
Further, acquiring the azimuth information and angular velocity of the target based on two-dimensional motion detection against a complex background using the feedback-type axially vibrating retina technique further comprises:
roughly calculating the angular velocity of the target from the time interval between the rising edges produced as the target flies past two adjacent single-pixel detectors, obtaining an initial estimate;
starting from the initial angular velocity, controlling the vibration speed of the retina by stepwise feedback until it matches the target angular velocity.
Further, controlling the vibration speed of the retina by stepwise feedback based on the initially measured angular velocity until it matches the target angular velocity comprises:
assuming that the signal cross-correlation asymmetry of the single-pixel detector at a certain preliminary retina vibration frequency is a_0, taking the minimum cross-correlation asymmetry as a_min = |a_0|; at the same time, taking the minimum adjustable step of the retina vibration frequency as Δω_min, with positive sign, for adjusting the retina vibration frequency, and initializing the cross-correlation trend flag to 0;
finely changing the retina vibration frequency by Δω_min, calculating the cross-correlation asymmetry a_1 at this point, and comparing |a_1| with the stored minimum a_min:
if |a_1| < a_min, setting the trend flag to 1 to indicate that the cross-correlation asymmetry is decreasing and the frequency should keep changing in the Δω_min direction, updating the minimum a_min to the new |a_1|, and repeating the loop from the fine-tuning step until the frequency adjustment passes the matching frequency and |a_1| becomes larger than the latest a_min; finally, calculating the angular velocity of the target.
Further, performing imaging-free target identification on the signal from a specific direction according to the azimuth information and angular velocity of the target comprises:
extracting the signal features of the detector in that direction from the acquired detector voltage signal, comparing them with the features of an ideal signal, and, when the similarity between the acquired signal and the ideal signal exceeds a set threshold, identifying the target in that direction as the preset required target.
Further, determining the sub-field of view in which the detected target is located according to the target identification result and performing variable-resolution ghost imaging on that field of view comprises:
acquiring the foveal region of interest of the panoramic image, and setting the variable-resolution annular speckle parameters according to a log-polar transformation;
building a human-eye-like variable-resolution annular speckle model to generate a human-eye-like variable-resolution annular speckle sequence;
performing ghost-imaging reconstruction from the human-eye-like variable-resolution annular speckle sequence together with the acquired sub-detector signals;
realizing three-dimensional imaging of the target from the reconstructed results of a plurality of adjacent projection units.
Further, acquiring the foveal region of interest of the panoramic image and setting the variable-resolution annular speckle parameters according to a log-polar transformation comprises:
dividing the projection pattern into a foveal region and an edge region, imitating the variable-resolution arrangement of receptors on the human retina, the foveal region using uniform high-resolution Cartesian sampling and the edge region using log-polar variable-resolution sampling, the log-polar model compressing the edge-region image to different degrees;
determining the center of the human-eye-like speckle and the radius of the central high-resolution area from the extent of the imaging area in which the target lies.
Further, building the human-eye-like variable-resolution annular speckle model and generating the human-eye-like variable-resolution annular speckle sequence comprises the following steps:
letting r denote the distance of a pixel from the center and r_0 the outer radius of the foveal region, so that the region inside r_0 is the central area and the region outside r_0 is the edge area;
dividing the edge area into P rings by polar radius and polar angle, with Q pixels in each ring;
in the human-eye-like log-polar model, with p and q denoting the p-th ring and the q-th pixel respectively, computing the variable-resolution structure of the edge area by the following equation set:
[equation set relating r_p, r_1, θ_q and ε, reproduced as an image in the original publication]
where r_p is the radius of the ring containing the p-th ring of pixels, r_1 is the radius of the ring containing the 1st ring of pixels, θ_q is the angle corresponding to the q-th pixel, and ε is the inter-ring growth coefficient;
generating the human-eye-like spatially variable-resolution projection pattern from the human-eye-like log-polar model and the uniform projection-pattern generation scheme.
Further, ghost-imaging reconstruction is performed from the human-eye-like variable-resolution annular speckle sequence together with the acquired sub-detector signals through the following formula:
PI=Q
where P is the human-eye-like speckle sequence, I is the image reconstructed by the corresponding ghost imaging, and Q is the total light-intensity value received by the sub-detector.
Further, realizing three-dimensional imaging of the target from the reconstructed results of the plurality of adjacent projection units comprises:
reconstructing images from the detector signals corresponding to adjacent projection units with a computational ghost-imaging algorithm to obtain a reconstructed multi-view image sequence;
calculating the distance values of different targets from the reconstructed multi-view image sequence with a stereo-matching method, thereby realizing three-dimensional imaging of the scene;
the stereo-matching method comprising:
applying a homography transformation between the image I_k of a sub-detector and the image I_(k+1) of the adjacent sub-detector, and searching for the best-matched points along the corresponding epipolar lines;
taking the pixels in the view of the k-th sub-detector as the reference, computing the similarity of pixel values within a search window in the view of the (k+1)-th sub-detector, judging a pixel to be the corresponding matching point if a set threshold is met, and obtaining three-dimensional information from the disparity value d between matched pixels using the relation between disparity and depth.
According to a second aspect of the present invention, there is provided a composite biomimetic ghost imaging system, the system comprising:
the single-pixel detector combination is used for acquiring optical signals of imaging planes at different angles;
a digital micromirror device for modulating the spatial light information;
a light source for projecting white light to the digital micromirror device;
the data acquisition device is used for converting the light-intensity values received by the single-pixel detectors into electrical signals;
the optical fibers and lenses are used for projecting the speckle of the projection unit and collecting the optical signals for the detectors;
the upper computer is configured to:
detect the two-dimensional motion of a target against a complex background using the feedback-type axially vibrating retina technique, and acquire the azimuth information and angular velocity of the target;
perform imaging-free target identification on the signal from a specific direction according to the azimuth information and angular velocity of the target;
determine the sub-field of view in which the detected target is located according to the target identification result, and perform variable-resolution ghost imaging on that field of view.
The invention has at least the following beneficial effects:
(1) Flexible and versatile working modes. The multiple detectors can perform wide-area detection in the computational ghost-imaging mode and can also image a detected target at high resolution so that it can be identified.
(2) Strong large-field-of-view detection capability. A compound-eye structure forms a periodic arrangement laid out on a curved surface, which facilitates large-field-of-view detection.
(3) Strong redundancy compression. A variable-resolution speckle layout similar to that of the human retina places high-resolution speckle on the target and low-resolution speckle on regions that are not of interest, which facilitates compression of redundant data.
Drawings
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. The same reference numerals with letter suffixes or different letter suffixes may represent different instances of similar components. The accompanying drawings illustrate various embodiments by way of example in general and not by way of limitation, and together with the description and claims serve to explain the inventive embodiments. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. Such embodiments are illustrative and not intended to be exhaustive or exclusive of the present apparatus or method.
FIG. 1 is a flow chart of a composite bionic ghost imaging method according to an embodiment of the present invention;
FIG. 2 is a flow chart of target detection in a composite bionic ghost imaging method according to an embodiment of the present invention;
FIG. 3 is a flow chart of determining a sub-field of view in which a detection target is located according to a target recognition result and performing variable resolution ghost imaging on the field of view in a composite bionic ghost imaging method according to an embodiment of the present invention;
fig. 4 is a two-dimensional schematic diagram of a composite bionic ghost imaging system according to an embodiment of the invention, wherein an optical fiber-100, a light source-101, a digital micromirror device-102, a four-quadrant detector-103, a data acquisition card-104, and an upper computer-105.
Detailed Description
The present invention will be described in detail below with reference to the drawings and detailed description, so that those skilled in the art can better understand its technical scheme. Embodiments of the present invention are described in further detail below with reference to the drawings and specific examples, which are not limiting. Where steps have no necessary relationship to one another, the order in which they are described by way of example should not be construed as limiting; those skilled in the art will understand that the order of such steps may be changed without disrupting each other's logic or preventing the overall process from being realized.
Referring to fig. 1, fig. 1 shows a flowchart of a composite bionic ghost imaging method according to an embodiment of the invention. The method comprises the following steps:
step S100, based on the two-dimensional motion detection of the target under the complex background by using the feedback type axial vibration retina technology, the azimuth information and the running angular velocity of the target are obtained.
In the composite bionic ghost imaging system, 4 optical-fiber arrays connected to the projection unit and the detector are arranged in an array structure (planar, curved, or otherwise, corresponding to 4 sub-fields of view), and two-dimensional motion detection of a target against a complex background is realized using the feedback-type axially vibrating retina technique, including measurement of the target azimuth and angular velocity. When detecting the target azimuth, experimental data are first collected and the light-intensity values received by the optical fibers for the target in the 4 azimuths are recorded; the photoelectric detection system receives the target information; for targets in different azimuths, the optical signals collected by the fibers at different positions produce different voltage output values; and the measured voltage-value sequence is fed into the algorithm, which calculates the position information of the target.
In some embodiments, for targets at different positions, the optical signals collected by the optical fibers at different positions produce different voltage output values, which are used as the input I_in of a neural network. Since the ideal target position I_sta is known in advance in the experiment, the neural network is trained to find the weights and excitation values that map the input I_in to the output I_sta; the learned values are then configured into the system as the intrinsic parameters of the network. After training, the position of a target at an unknown location can be calculated from the detector voltage output and the network weights and excitation values.
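As a concrete illustration of this mapping, the sketch below trains a small fully connected network on synthetic pairs of voltage sequences and azimuths. It is a minimal Python example, not the patent's network: the layer size, learning rate, 4-element voltage encoding and training data are all assumptions.

import numpy as np

# Minimal sketch (not the patent's network): a one-hidden-layer MLP mapping a
# 4-element detector voltage sequence I_in to a 2-D azimuth estimate I_sta.
# The layer size, learning rate and synthetic training pairs are illustrative.
rng = np.random.default_rng(0)

def train_azimuth_mlp(V, A, hidden=16, lr=0.05, epochs=3000):
    """V: (N, 4) voltage sequences; A: (N, 2) known azimuth/elevation pairs."""
    W1 = rng.normal(0, 0.1, (V.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, A.shape[1])); b2 = np.zeros(A.shape[1])
    for _ in range(epochs):
        H = np.tanh(V @ W1 + b1)            # hidden-layer activations
        P = H @ W2 + b2                     # predicted azimuth
        G = 2.0 * (P - A) / len(V)          # gradient of the mean squared error
        W2 -= lr * H.T @ G; b2 -= lr * G.sum(0)
        GH = (G @ W2.T) * (1.0 - H**2)      # back-propagated hidden gradient
        W1 -= lr * V.T @ GH; b1 -= lr * GH.sum(0)
    return W1, b1, W2, b2

def predict_azimuth(v, params):
    W1, b1, W2, b2 = params
    return np.tanh(v @ W1 + b1) @ W2 + b2

# Synthetic example: the azimuth is encoded in the relative detector voltages.
A = rng.uniform(-1, 1, (200, 2))
V = np.c_[A[:, 0] + A[:, 1], A[:, 0] - A[:, 1], -A[:, 0] + A[:, 1], -A[:, 0] - A[:, 1]]
V += 0.01 * rng.normal(size=V.shape)
params = train_azimuth_mlp(V, A)
print(predict_azimuth(V[:3], params))       # should be close to A[:3]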
In addition, when the target flies into a certain sub-field of view, the output signal of the corresponding ommatidium is superimposed with a rising edge on top of the slowly varying background light; the width of the rising edge is related to the angular velocity of the target and to the response of the photodetector. If the target flies successively past two ommatidia A and B, the angular velocity of the target can be calculated from the time difference between the rising edges of the two sub-detectors and the corresponding angular separation.
In some embodiments, as shown in fig. 2, target detection in the composite bionic ghost imaging method disclosed in this embodiment includes the following steps:
the first part roughly calculates the angular velocity of the target according to the time interval between rising edges generated when the target flies through quadrants of two adjacent detectors, namely respectively records the time when the output signals of the two sub-detectors exceed a certain threshold value, subtracts the time interval to obtain the time interval, and divides the time interval by the angular interval between the sub-detectors to obtain the initial velocity.
In the second part, starting from the initial angular velocity, the vibration speed of the retina is controlled by stepwise feedback until it matches the target angular velocity.
Assume that the signal cross-correlation asymmetry of the single-pixel detector at a certain preliminary retina vibration frequency is a_0, and take the minimum cross-correlation asymmetry as a_min = |a_0|. At the same time, take the minimum adjustable step of the retina vibration frequency as Δω_min, with positive sign, for adjusting the retina vibration frequency, and initialize the cross-correlation trend flag to 0. Then, the retina vibration frequency is finely changed by Δω_min, the cross-correlation asymmetry a_1 at this point is calculated, and its absolute value is compared with the stored minimum a_min. If |a_1| < a_min, the frequency change has brought the vibration frequency closer to the target angular velocity, so the trend flag is set to 1, indicating that the cross-correlation asymmetry is decreasing and that the frequency should keep changing in the Δω_min direction to complete the matching. The minimum a_min is then updated to the new |a_1|, and the loop repeats from the fine-tuning step until the frequency adjustment passes the matching frequency and |a_1| becomes larger than the latest a_min. Finally, the angular velocity of the target motion is calculated.
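The feedback loop can be summarised by the following sketch, in which cross_correlation_asymmetry stands in for whatever asymmetry measure the system derives from the single-pixel signal; the starting frequency, step size and the toy measure used in the example call are assumptions.

# Sketch of the stepwise feedback loop described above.
def match_vibration_frequency(cross_correlation_asymmetry, omega_start, d_omega_min, max_iters=1000):
    omega = omega_start
    a_min = abs(cross_correlation_asymmetry(omega))   # a_min = |a_0|
    flag = 0                                          # cross-correlation trend flag
    for _ in range(max_iters):
        omega_trial = omega + d_omega_min             # fine adjustment by d_omega_min
        a_1 = abs(cross_correlation_asymmetry(omega_trial))
        if a_1 < a_min:                               # asymmetry decreased: keep going
            flag = 1
            a_min = a_1
            omega = omega_trial
        else:                                         # adjustment passed the matching frequency
            break
    return omega                                      # matched vibration frequency

# Usage with a toy asymmetry measure that vanishes at the matching frequency 7.3:
omega_hat = match_vibration_frequency(lambda w: w - 7.3, omega_start=5.0, d_omega_min=0.05)
print(omega_hat)                                      # approximately 7.3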
Step S200, performing imaging-free target identification on the signal from a specific direction according to the azimuth information and angular velocity of the target.
In some embodiments, the signal from a specific direction is subjected to target identification under imaging-free conditions based on the result of target detection: the signal features of the detector in that direction are extracted from the acquired detector voltage signal and compared with the features of the signal under ideal conditions, and once the similarity between the acquired signal and the ideal signal exceeds a set threshold, the target in that direction is identified as the preset required target.
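A minimal sketch of this comparison is shown below, using normalised cross-correlation as the similarity measure; the template signal, the noise level and the 0.9 threshold are illustrative assumptions.

import numpy as np

# Sketch of the imaging-free identification step: compare the acquired detector
# voltage signal with an ideal (template) signal and accept the target when the
# similarity exceeds a threshold.
def is_required_target(acquired, ideal, threshold=0.9):
    a = (acquired - acquired.mean()) / (acquired.std() + 1e-12)
    b = (ideal - ideal.mean()) / (ideal.std() + 1e-12)
    similarity = float(np.mean(a * b))        # normalised correlation in [-1, 1]
    return similarity > threshold, similarity

ideal = np.sin(np.linspace(0, 4 * np.pi, 200))
acquired = ideal + 0.1 * np.random.default_rng(1).normal(size=200)
print(is_required_target(acquired, ideal))    # (True, ~0.98)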
Finally, in step S300, the sub-field of view in which the detected target is located is determined according to the target identification result, and variable-resolution ghost imaging is performed on that field of view.
In some embodiments, as shown in fig. 3, determining the sub-field of view in which the detected target is located according to the target identification result and performing variable-resolution ghost imaging on that field of view specifically includes:
step S301, acquiring a central concave region of interest of the panoramic image, and setting variable resolution annular speckle parameters according to logarithmic polar coordinate transformation.
Illustratively, in this step, the foveal region of interest of the panoramic image is selected according to the application requirements, and the variable-resolution annular speckle parameters are set according to a log-polar transformation. The projection pattern is divided into a foveal region and an edge region in a manner imitating the variable-resolution arrangement of receptors on the human retina. The foveal region uses conventional uniform high-resolution Cartesian sampling, avoiding the oversampling that log-polar coordinates exhibit at the center; the edge region uses log-polar variable-resolution sampling, the log-polar model compressing the edge image to different degrees. Together they realize the human-eye-like characteristic of high resolution at the center and low resolution at the edge. A suitable center position for the human-eye-like speckle and a radius for the central high-resolution area are chosen according to the extent of the imaging area in which the target lies; both are dynamic variables tied to the detection and identification results of steps S100 and S200.
Step S302, a human eye-like variable resolution annular speckle model is constructed, and a human eye-like variable resolution annular speckle sequence is generated.
For example, in the human-eye-like variable-resolution annular speckle, the central high-resolution sampling area is generated with a conventional Cartesian scheme, which avoids the oversampling of log-polar coordinates in the central area, while the edge area is generated with the human-eye-like log-polar model. Let r denote the distance of a pixel from the center position and r_0 the outer radius of the foveal region; the region inside r_0 is the central area and the region outside r_0 is the edge area.
According to the polar radius and polar angle, the edge region can be divided into P rings, with Q pixels in each ring.
In the human-eye-like log-polar model, with p and q denoting the p-th ring and the q-th pixel respectively, the variable-resolution structure of the edge region can be calculated by the following equation set:
[equation set relating r_p, r_1, θ_q and ε, reproduced as an image in the original publication]
where r_p is the radius of the ring containing the p-th ring of pixels, r_1 is the radius of the ring containing the 1st ring of pixels, θ_q is the angle corresponding to the q-th pixel, and ε is the inter-ring growth coefficient.
According to the human-eye-like log-polar model and the uniform projection-pattern generation scheme, the human-eye-like spatially variable-resolution projection pattern can be generated on the DMD.
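The sketch below generates one frame of such a pattern: a Cartesian fovea of radius r0 surrounded by P log-polar rings of Q cells, each cell carrying a random binary value. Because the published equations are reproduced only as an image, the ring law r_p = r1·ε^(p-1) and θ_q = 2πq/Q used here are an assumed standard log-polar form, and every numeric parameter is illustrative.

import numpy as np

# Sketch of one frame of a human-eye-like variable-resolution speckle pattern:
# a Cartesian fovea of radius r0 plus P log-polar rings of Q cells, each cell
# holding one random binary value. The ring law r_p = r1 * eps**(p - 1) and
# theta_q = 2*pi*q/Q are an assumed standard log-polar form.
def log_polar_speckle(size=256, r0=24, P=24, Q=64, eps=1.06, cell=4, seed=0):
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:size, 0:size]
    cx = cy = size / 2
    r = np.hypot(x - cx, y - cy)
    theta = np.mod(np.arctan2(y - cy, x - cx), 2 * np.pi)

    pattern = np.zeros((size, size))
    # Fovea: uniform Cartesian cells of side `cell` pixels with random 0/1 values.
    fovea_vals = rng.integers(0, 2, (size // cell, size // cell))
    fovea = r <= r0
    pattern[fovea] = fovea_vals[(y // cell)[fovea], (x // cell)[fovea]]

    # Edge region: ring index from the assumed law r_p = r1 * eps**(p - 1).
    r1 = r0 * eps
    ring = np.floor(np.log(np.maximum(r, r1) / r1) / np.log(eps)).astype(int)
    sector = np.floor(theta / (2 * np.pi / Q)).astype(int)
    edge_vals = rng.integers(0, 2, (P, Q))
    edge = (r > r0) & (ring < P)                  # rings beyond P are left empty
    pattern[edge] = edge_vals[ring[edge], sector[edge]]
    return pattern

speckle = log_polar_speckle()                     # one frame of the speckle sequence
print(speckle.shape, speckle.mean())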
Step S303, performing ghost-imaging reconstruction from the human-eye-like variable-resolution annular speckle sequence and the acquired sub-detector signals.
Illustratively, a compressed-sensing algorithm is chosen to reconstruct the acquired signals, so that the ghost-imaging result is recovered better and faster. Since a single-pixel detector is used in computational ghost imaging, image reconstruction can equally be achieved with a single-pixel imaging algorithm, which can be expressed as:
PI=Q
where P is the human-eye-like speckle sequence, I is the image reconstructed by the corresponding ghost imaging, and Q is the total light-intensity value received by the sub-detector. To acquire the target image more efficiently, the speckle is optimized with a parallel scheme: the whole projection area is divided into corresponding parts onto which the same set of speckle patterns is projected simultaneously, so that the target can be reconstructed more rapidly.
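As a small worked example of PI=Q, the sketch below stacks binary speckle frames as rows of the measurement matrix P, simulates the bucket values Q for a toy scene, and recovers the image I by least squares; the image size, number of projections and toy target are assumptions, and a sparsity-regularised solver would take the place of np.linalg.lstsq in a genuinely compressed-sensing setting.

import numpy as np

# Worked example of P I = Q: each projected speckle frame becomes a row of the
# measurement matrix P, the sub-detector (bucket) values form Q, and the image
# I is recovered by least squares.
rng = np.random.default_rng(0)
n = 32                                            # reconstructed image is n x n pixels
m = 1500                                          # number of speckle projections

scene = np.zeros((n, n)); scene[10:22, 12:20] = 1.0        # toy target
P = rng.integers(0, 2, (m, n * n)).astype(float)           # binary speckle rows
Q = P @ scene.ravel()                                       # simulated bucket values

I_hat, *_ = np.linalg.lstsq(P, Q, rcond=None)               # solve P I = Q
I_hat = I_hat.reshape(n, n)
print(float(np.abs(I_hat - scene).mean()))                  # small reconstruction error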
Step S304, based on the reconstructed results of the plurality of adjacent projection units, three-dimensional imaging of the target is achieved.
Based on the reconstructed results of the plurality of adjacent projection units, three-dimensional imaging of the target can be realized as follows:
the detector signals corresponding to adjacent projection units are reconstructed into images with a computational ghost-imaging algorithm to obtain the corresponding results; based on the reconstructed multi-view image sequence, the distance values of different targets are calculated with a stereo-matching method, realizing three-dimensional imaging of the scene.
The stereo matching method comprises the following steps:
applying a homography transformation between the image I_k of a sub-detector and the image I_(k+1) of the adjacent sub-detector, and searching for the best-matched points along the corresponding epipolar lines;
taking the pixels in the view of the k-th sub-detector as the reference, computing the similarity of pixel values within a search window in the view of the (k+1)-th sub-detector, judging a pixel to be the corresponding matching point if a set threshold is met, and obtaining three-dimensional information from the disparity value d between matched pixels using the relation between disparity and depth.
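A minimal block-matching sketch of this search is given below. It assumes the two views have already been rectified (the homography and epipolar alignment have been applied), and the window size, focal length and baseline used to convert disparity to depth are illustrative values.

import numpy as np

# Minimal block-matching sketch: for a reference pixel in view k, search along
# the same row of the rectified view k+1 for the window with the highest
# normalised similarity, then convert the disparity d to depth.
def block_match(left, right, x, y, win=5, max_disp=32):
    h = win // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].ravel()
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best_d, best_score = 0, -np.inf
    for d in range(0, min(max_disp, x - h)):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].ravel()
        cand = (cand - cand.mean()) / (cand.std() + 1e-12)
        score = float(np.mean(ref * cand))         # normalised cross-correlation
        if score > best_score:
            best_d, best_score = d, score
    return best_d, best_score

def disparity_to_depth(d, f=800.0, baseline=0.1):
    return f * baseline / max(d, 1e-6)             # depth from the disparity value d

# Usage with a synthetic pair: the right view is the left view shifted by 7 pixels.
rng = np.random.default_rng(0)
left = rng.random((64, 64))
right = np.roll(left, -7, axis=1)
d, s = block_match(left, right, x=40, y=32)
print(d, s, disparity_to_depth(d))                 # d == 7; a threshold on s would accept the match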
As shown in fig. 4, a composite bionic ghost imaging system according to an embodiment of the present invention includes an optical fiber 100, a light source 101, a digital micromirror device 102, a four-quadrant detector 103, a data acquisition card 104, and an upper computer 105.
The optical fiber 100 is used to transmit optical signals, both projecting the speckle generated by the digital micromirror device and relaying the received light to the four-quadrant detector 103.
Each detector target surface in the four-quadrant detector 103 receives optical signals from a different direction, and the fields of view overlap only slightly, so that a larger overall field of view is obtained.
The digital micromirror device 102 is used to modulate spatial light information.
The light source 101 is used to project white light onto the digital micromirror device 102.
The data acquisition device 104 is configured to convert the light-intensity values received by the single-pixel detectors into electrical signals.
The upper computer 105 is configured to output a control instruction, and specifically configured to:
detect the two-dimensional motion of a target against a complex background using the feedback-type axially vibrating retina technique, and acquire the azimuth information and angular velocity of the target;
perform imaging-free target identification on the signal from a specific direction according to the azimuth information and angular velocity of the target;
determine the sub-field of view in which the detected target is located according to the target identification result, and perform variable-resolution ghost imaging on that field of view.
In some embodiments, the host computer 105 is further configured to:
when detecting the target azimuth, first collect data in the field of view and record the output voltage values of the detector array for the target in each azimuth; receive the target information with the photoelectric detection system; obtain a voltage-value sequence for targets in different azimuths from the different voltage outputs produced by detectors at different positions; and calculate the azimuth information of the target from the voltage-value sequence.
In some embodiments, the host computer 105 is further configured to:
take the voltage-value sequence as the input I_in of a neural network; according to the ideal target position I_sta, train the neural network to find the weights and excitation values that map the input I_in to the output I_sta, and configure the learned weights and excitation values into the network;
based on the trained neural network, calculate the azimuth information of a target at an unknown position from its corresponding voltage-value sequence and the network weights and excitation values.
In some embodiments, the host computer 105 is further configured to:
roughly calculate the angular velocity of the target from the time interval between the rising edges produced as the target flies past two adjacent single-pixel detectors, obtaining an initial estimate;
starting from the initial angular velocity, control the vibration speed of the retina by stepwise feedback until it matches the target angular velocity.
In some embodiments, the host computer 105 is further configured to:
assuming that the signal cross-correlation asymmetry of the single-pixel detector at a certain preliminary retina vibration frequency is a_0, take the minimum cross-correlation asymmetry as a_min = |a_0|; at the same time, take the minimum adjustable step of the retina vibration frequency as Δω_min, with positive sign, for adjusting the retina vibration frequency, and initialize the cross-correlation trend flag to 0;
finely change the retina vibration frequency by Δω_min, calculate the cross-correlation asymmetry a_1 at this point, and compare |a_1| with the stored minimum a_min:
if |a_1| < a_min, set the trend flag to 1 to indicate that the cross-correlation asymmetry is decreasing and the frequency should keep changing in the Δω_min direction, update the minimum a_min to the new |a_1|, and repeat the loop from the fine-tuning step until the frequency adjustment passes the matching frequency and |a_1| becomes larger than the latest a_min; finally, calculate the angular velocity of the target.
In some embodiments, the host computer 105 is further configured to:
extract the signal features of the detector in the specific direction from the acquired detector voltage signal, compare them with the features of an ideal signal, and, when the similarity between the acquired signal and the ideal signal exceeds a set threshold, identify the target in that direction as the preset required target.
In some embodiments, the host computer 105 is further configured to:
acquire the foveal region of interest of the panoramic image, and set the variable-resolution annular speckle parameters according to a log-polar transformation;
build a human-eye-like variable-resolution annular speckle model to generate a human-eye-like variable-resolution annular speckle sequence;
perform ghost-imaging reconstruction from the human-eye-like variable-resolution annular speckle sequence together with the acquired sub-detector signals;
realize three-dimensional imaging of the target from the reconstructed results of a plurality of adjacent projection units.
In some embodiments, the host computer 105 is further configured to:
divide the projection pattern into a foveal region and an edge region, imitating the variable-resolution arrangement of receptors on the human retina, the foveal region using uniform high-resolution Cartesian sampling and the edge region using log-polar variable-resolution sampling, the log-polar model compressing the edge-region image to different degrees;
determine the center of the human-eye-like speckle and the radius of the central high-resolution area from the extent of the imaging area in which the target lies.
In some embodiments, the host computer 105 is further configured to:
let r denote the distance of a pixel from the center and r_0 the outer radius of the foveal region, so that the region inside r_0 is the central area and the region outside r_0 is the edge area;
divide the edge area into P rings by polar radius and polar angle, with Q pixels in each ring;
in the human-eye-like log-polar model, with p and q denoting the p-th ring and the q-th pixel respectively, calculate the variable-resolution structure of the edge area by the following equation set:
[equation set relating r_p, r_1, θ_q and ε, reproduced as an image in the original publication]
where r_p is the radius of the ring containing the p-th ring of pixels, r_1 is the radius of the ring containing the 1st ring of pixels, θ_q is the angle corresponding to the q-th pixel, and ε is the inter-ring growth coefficient;
generate the human-eye-like spatially variable-resolution projection pattern from the human-eye-like log-polar model and the uniform projection-pattern generation scheme.
In some embodiments, the upper computer 105 is further configured to perform ghost-imaging reconstruction from the human-eye-like variable-resolution annular speckle sequence together with the acquired sub-detector signals through the following formula:
PI=Q
where P is the human-eye-like speckle sequence, I is the image reconstructed by the corresponding ghost imaging, and Q is the total light-intensity value received by the sub-detector.
In some embodiments, the host computer 105 is further configured to:
reconstruct images from the detector signals corresponding to adjacent projection units with a computational ghost-imaging algorithm to obtain a reconstructed multi-view image sequence;
calculate the distance values of different targets from the reconstructed multi-view image sequence with a stereo-matching method, thereby realizing three-dimensional imaging of the scene;
the stereo-matching method comprising:
applying a homography transformation between the image I_k of a sub-detector and the image I_(k+1) of the adjacent sub-detector, and searching for the best-matched points along the corresponding epipolar lines;
taking the pixels in the view of the k-th sub-detector as the reference, computing the similarity of pixel values within a search window in the view of the (k+1)-th sub-detector, judging a pixel to be the corresponding matching point if a set threshold is met, and obtaining three-dimensional information from the disparity value d between matched pixels using the relation between disparity and depth.
It should be noted that the composite bionic ghost imaging system in this embodiment and the previously described composite bionic ghost imaging method belong to the same technical idea and can produce the same technical effects; details are not repeated here.
Furthermore, although exemplary embodiments have been described herein, the scope thereof includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of the various embodiments across), adaptations or alterations as pertains to the present invention. Elements in the claims are to be construed broadly based on the language employed in the claims and are not limited to examples described in the present specification or during the practice of the present application, which examples are to be construed as non-exclusive. It is intended, therefore, that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. For example, other embodiments may be used by those of ordinary skill in the art upon reading the above description. In addition, in the above detailed description, various features may be grouped together to streamline the invention. This is not to be interpreted as an intention that the features of the claimed invention are essential to any of the claims. Rather, inventive subject matter may lie in less than all features of a particular inventive embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with one another in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (10)

1. A composite biomimetic ghost imaging method, the method comprising:
detecting the two-dimensional motion of a target against a complex background using a feedback-type axially vibrating retina technique, and acquiring the azimuth information and angular velocity of the target;
performing imaging-free target identification on the signal from a specific direction according to the azimuth information and angular velocity of the target;
determining the sub-field of view in which the detected target is located according to the target identification result, and performing variable-resolution ghost imaging on that field of view.
2. The method according to claim 1, wherein obtaining the azimuth information and angular velocity of the target based on two-dimensional motion detection of the target against a complex background using the feedback-type axially vibrating retina technique comprises:
when detecting the target azimuth, first collecting data in the field of view and recording the output voltage values of the detector array for the target in each azimuth; receiving the target information with the photoelectric detection system; obtaining a voltage-value sequence for targets in different azimuths from the different voltage outputs produced by detectors at different positions; and calculating the azimuth information of the target from the voltage-value sequence.
3. The method according to claim 2, wherein calculating the azimuth information of the target from the voltage-value sequence comprises:
taking the voltage-value sequence as the input I_in of a neural network; according to the ideal target position I_sta, training the neural network to find the weights and excitation values that map the input I_in to the output I_sta, and configuring the learned weights and excitation values into the network;
based on the trained neural network, calculating the azimuth information of a target at an unknown position from its corresponding voltage-value sequence and the network weights and excitation values.
4. The method according to claim 2 or 3, wherein acquiring the azimuth information and angular velocity of the target based on two-dimensional motion detection of the target against a complex background using the feedback-type axially vibrating retina technique further comprises:
roughly calculating the angular velocity of the target from the time interval between the rising edges produced as the target flies past two adjacent single-pixel detectors, obtaining an initial estimate;
starting from the initial angular velocity, controlling the vibration speed of the retina by stepwise feedback until it matches the target angular velocity.
5. The method of claim 4, wherein controlling the vibration speed of the retina by stepwise feedback based on the initially measured angular velocity until it matches the target angular velocity comprises:
assuming that the signal cross-correlation asymmetry of the single-pixel detector at a certain preliminary retina vibration frequency is a_0, taking the minimum cross-correlation asymmetry as a_min = |a_0|; at the same time, taking the minimum adjustable step of the retina vibration frequency as Δω_min, with positive sign, for adjusting the retina vibration frequency, and initializing the cross-correlation trend flag to 0;
finely changing the retina vibration frequency by Δω_min, calculating the cross-correlation asymmetry a_1 at this point, and comparing |a_1| with the stored minimum a_min:
if |a_1| < a_min, setting the trend flag to 1 to indicate that the cross-correlation asymmetry is decreasing and the frequency should keep changing in the Δω_min direction, updating the minimum a_min to the new |a_1|, and repeating the loop from the fine-tuning step until the frequency adjustment passes the matching frequency and |a_1| becomes larger than the latest a_min; finally, calculating the angular velocity of the target.
6. The method according to claim 1, wherein performing imaging-free target identification on the signal from a specific direction according to the azimuth information and angular velocity of the target comprises:
extracting the signal features of the detector in the specific direction from the acquired detector voltage signal, comparing them with the features of an ideal signal, and, when the similarity between the acquired signal and the ideal signal exceeds a set threshold, identifying the target in that direction as the preset required target.
7. The method of claim 1, wherein determining the sub-field of view in which the detected target is located according to the target identification result and performing variable-resolution ghost imaging on that field of view comprises:
acquiring the foveal region of interest of the panoramic image, and setting the variable-resolution annular speckle parameters according to a log-polar transformation;
building a human-eye-like variable-resolution annular speckle model to generate a human-eye-like variable-resolution annular speckle sequence;
performing ghost-imaging reconstruction from the human-eye-like variable-resolution annular speckle sequence together with the acquired sub-detector signals;
realizing three-dimensional imaging of the target from the reconstructed results of a plurality of adjacent projection units.
8. The method of claim 7, wherein acquiring the foveal region of interest of the panoramic image and setting the variable-resolution annular speckle parameters according to a log-polar transformation comprises:
dividing the projection pattern into a foveal region and an edge region, imitating the variable-resolution arrangement of receptors on the human retina, the foveal region using uniform high-resolution Cartesian sampling and the edge region using log-polar variable-resolution sampling, the log-polar model compressing the edge-region image to different degrees;
determining the center of the human-eye-like speckle and the radius of the central high-resolution area from the extent of the imaging area in which the target lies.
9. The method of claim 7, wherein realizing three-dimensional imaging of the target based on the reconstructed results of the plurality of adjacent projection units comprises:
reconstructing images from the detector signals corresponding to adjacent projection units with a computational ghost-imaging algorithm to obtain a reconstructed multi-view image sequence;
calculating the distance values of different targets from the reconstructed multi-view image sequence with a stereo-matching method, thereby realizing three-dimensional imaging of the scene;
the stereo-matching method comprising:
applying a homography transformation between the image I_k of a sub-detector and the image I_(k+1) of the adjacent sub-detector, and searching for the best-matched points along the corresponding epipolar lines;
taking the pixels in the view of the k-th sub-detector as the reference, computing the similarity of pixel values within a search window in the view of the (k+1)-th sub-detector, judging a pixel to be the corresponding matching point if a set threshold is met, and obtaining three-dimensional information from the disparity value d between matched pixels using the relation between disparity and depth.
10. A composite biomimetic ghost imaging system, the system comprising:
the single-pixel detector combination is used for acquiring optical signals of imaging planes at different angles;
a digital micromirror device for modulating the spatial light information;
a light source for projecting white light to the digital micromirror device;
the data acquisition device is used for converting the light-intensity values received by the single-pixel detectors into electrical signals;
the optical fibers and lenses are used for projecting the speckle of the projection unit and collecting the optical signals for the detectors;
the upper computer is configured to:
detect the two-dimensional motion of a target against a complex background using the feedback-type axially vibrating retina technique, and acquire the azimuth information and angular velocity of the target;
perform imaging-free target identification on the signal from a specific direction according to the azimuth information and angular velocity of the target;
determine the sub-field of view in which the detected target is located according to the target identification result, and perform variable-resolution ghost imaging on that field of view.
CN202310126612.4A 2023-02-06 2023-02-06 Composite bionic ghost imaging method and system Pending CN116129055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310126612.4A CN116129055A (en) 2023-02-06 2023-02-06 Composite bionic ghost imaging method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310126612.4A CN116129055A (en) 2023-02-06 2023-02-06 Composite bionic ghost imaging method and system

Publications (1)

Publication Number Publication Date
CN116129055A true CN116129055A (en) 2023-05-16

Family

ID=86306136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310126612.4A Pending CN116129055A (en) 2023-02-06 2023-02-06 Composite bionic ghost imaging method and system

Country Status (1)

Country Link
CN (1) CN116129055A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117201691A (en) * 2023-11-02 2023-12-08 湘江实验室 Panoramic scanning associated imaging method based on deep learning
CN117201691B (en) * 2023-11-02 2024-01-09 湘江实验室 Panoramic scanning associated imaging method based on deep learning

Similar Documents

Publication Publication Date Title
Bai et al. Radar-based human gait recognition using dual-channel deep convolutional neural network
US20230288703A1 (en) Methods and apparatuses for corner detection using neural network and corner detector
CN110675418B (en) Target track optimization method based on DS evidence theory
US9317762B2 (en) Face recognition using depth based tracking
CN106056050B (en) Multi-view gait recognition method based on self-adaptive three-dimensional human motion statistical model
CN109934117B (en) Pedestrian re-identification detection method based on generation of countermeasure network
Inoue et al. Transfer learning from synthetic to real images using variational autoencoders for precise position detection
CN111797716A (en) Single target tracking method based on Siamese network
CN110689562A (en) Trajectory loop detection optimization method based on generation of countermeasure network
US9906767B2 (en) Apparatus and method for digital holographic table top display
CN104813339A (en) Methods, devices and systems for detecting objects in a video
Roheda et al. Cross-modality distillation: A case for conditional generative adversarial networks
CN102622732A (en) Front-scan sonar image splicing method
CN103903237A (en) Dual-frequency identification sonar image sequence splicing method
CN109635661B (en) Far-field wireless charging receiving target detection method based on convolutional neural network
Liu et al. Video image target monitoring based on RNN-LSTM
CN116129055A (en) Composite bionic ghost imaging method and system
CN110334701A (en) Collecting method based on deep learning and multi-vision visual under the twin environment of number
CN110517309A (en) A kind of monocular depth information acquisition method based on convolutional neural networks
EP3227704A1 (en) Method for tracking a target acoustic source
CN109671031A (en) A kind of multispectral image inversion method based on residual error study convolutional neural networks
Li et al. Weak moving object detection in optical remote sensing video with motion-drive fusion network
Xu et al. Dynamic camera configuration learning for high-confidence active object detection
JP3272584B2 (en) Region extraction device and direction detection device using the same
CN105374043B (en) Visual odometry filtering background method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination