CN113192154B - Underwater ghost imaging system based on edge calculation and deep learning image reconstruction method - Google Patents
- Publication number
- CN113192154B (application number CN202110594841.XA)
- Authority
- CN
- China
- Prior art keywords
- light
- image
- light intensity
- target object
- modulation device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T11/00 — 2D [Two Dimensional] image generation
- G06T5/70 — Denoising; Smoothing
- G06T7/13 — Edge detection
- G06T2207/20081 — Training; Learning
- Y02A90/30 — Assessment of water resources
Abstract
The application discloses an underwater ghost imaging system based on edge calculation and a deep learning image reconstruction method. The system comprises: a light source, for emitting light to an effective modulation region of the light modulation device; a light modulation device, for modulating the light with speckle and reflecting the modulated light, so that the reflected light passes through the projection lens along the optical axis of the projection lens and is then directed at a target object in a water body; a converging lens, for converging the light reflected by the target object; and a light intensity detector, for collecting the light intensity information of the converged light. The first end of the edge computing module is connected with the light modulation device and is used for sending speckle to the light modulation device; the second end of the edge computing module is connected with the light intensity detector and is used for controlling the light intensity detector to collect the light intensity information, and for obtaining an image of the target object through a deep learning image reconstruction algorithm according to the light intensity information. The problems of large calculation load, low imaging resolution and poor imaging quality in the prior art are thereby solved.
Description
Technical Field
The application relates to the technical field of underwater imaging, in particular to an underwater ghost imaging system based on edge calculation and a deep learning image reconstruction method.
Background
Underwater imaging technology is an important means of understanding, developing and utilizing the ocean. Due to the absorption and scattering of light by the water body, it is very difficult to obtain high-resolution, high-definition images of underwater objects. Compared with traditional CCD and CMOS imaging technology, ghost imaging can image in weak-light environments and through scattering media, and therefore has important application value in underwater imaging. However, current underwater ghost imaging technology suffers from a large calculation load, low imaging resolution and poor imaging quality, which restricts its development.
In recent years, edge computing and deep learning have been widely used in the fields of video and image processing. Edge computing can provide network, computing, storage, application and other capabilities at the data source; carrying out processing at the local edge, in cooperation with cloud computing technology, greatly improves the efficiency of data calculation and processing. Deep learning can quickly reconstruct a high-resolution image of a scene, and can detect, identify and classify objects in the image. Together, these techniques can be used to address the problems of large calculation load, low imaging resolution and poor imaging quality in underwater ghost imaging.
Disclosure of Invention
The embodiment of the application provides an underwater ghost imaging system based on edge calculation and a deep learning image reconstruction method, which are used for solving the technical problems of large calculated amount, low imaging resolution and poor imaging quality in the existing underwater ghost imaging.
In view of this, a first aspect of the present application provides an underwater ghost imaging system based on edge calculation, the system comprising:
the device comprises a light source, a light modulation device, a projection lens, a converging lens, a light intensity detector, an edge computing module and a cloud computing center;
the first end of the edge computing module is connected with the light modulation device and is used for sending speckle to the light modulation device, the speckle being Fourier sinusoidal speckle;
the light source: for emitting light towards an effective modulation region of the light modulation device;
the light modulation device: the device is used for modulating light rays through speckle and reflecting the modulated light rays, so that the reflected light rays pass through the projection lens along the optical axis of the projection lens and then are emitted to a target object in a water body;
the converging lens: for converging light reflected by the object;
the light intensity detector: the light intensity information acquisition module is used for acquiring the light intensity information of the converged light rays;
the second end of the edge computing module is connected with the light intensity detector and is used for controlling the light intensity detector to acquire light intensity information and acquiring an image of a target object through a deep learning image reconstruction algorithm according to the light intensity information;
the cloud computing center is in communication connection with the edge computing module;
the cloud computing center: and acquiring the image of the target object through a deep learning image reconstruction algorithm according to the light intensity information when the pixel size of the image of the target object is larger than a preset size.
Optionally, the method further comprises: the cloud storage platform is respectively in communication connection with the cloud computing center and the edge computing module;
the cloud storage platform is used for: and storing the image of the target object.
Optionally, the method further comprises: the display terminal is in communication connection with the cloud storage platform;
the display terminal is used for: and receiving the image of the target object sent by the cloud storage platform and displaying the image of the target object.
Optionally, the converging lens is mounted directly in front of the light intensity detector such that a focal point of the converging lens is at a center of an effective detection area of the light intensity detector.
A second aspect of the present application provides a deep learning image reconstruction method applied to the underwater ghost imaging system based on edge calculation according to the first aspect, including:
loading a speckle sequence onto a light modulation device to modulate light, and projecting the modulated light onto a target object through a projection lens, the speckle being Fourier sinusoidal speckle;
detecting the light intensity of the light converged to the light intensity detector by the converging lens to obtain a light intensity sequence;
training a neural network model with a dataset $\{(I_k, T_k)\}_{k=1}^{K}$, and adding $K$ noise level maps during training, wherein the pixel values of the $k$-th noise level map are all $\sigma_k$, $k = 1, 2, \ldots, K$, and $\sigma_k$ is randomly generated within a preset range; optimizing the loss function with the ADAM optimizer until training reaches a preset number of iterations, obtaining the trained neural network model parameters;
inputting the light intensity sequence into a trained neural network model, and outputting an image of a target object;
the above indicates that a hidden function $f$ establishes a mapping between the one-dimensional light field intensity sequence $I$ and the output image of the target object $\hat{T}$, i.e. $\hat{T} = f(\theta; I)$, where $\theta$ denotes the neural network model parameters, $T_k$ denotes the $k$-th original target object image used for training, and $I_k$ is the one-dimensional light field intensity sequence corresponding to the target object image $T_k$.
Taking $K$ images with a pixel size of $m \times n$ as original target object images, and for the $k$-th original target image $T_k$, obtaining a one-dimensional light field intensity sequence $I_k = [I_{k,1}, I_{k,2}, \ldots, I_{k,M}]$ by calculation, where $I_{k,i}$ denotes the $i$-th light field intensity value, $i = 1, 2, \ldots, M$, and $M$ is the total number of measurements.
Optionally, the neural network model parameters are expressed as:

$$\theta^{*} = \arg\min_{\theta} \frac{1}{K} \sum_{k=1}^{K} \left\| f(\theta; I_k) - T_k \right\|_2^2 + \lambda \left\| \theta \right\|_2^2$$

where $\left\| \cdot \right\|_2$ denotes the 2-norm, $\lambda$ is a regularization parameter, $T_k$ denotes the $k$-th original target image used for training, $I_k$ is the one-dimensional light field intensity sequence corresponding to the target object image $T_k$, $K$ is the amount of data in the original target image dataset, $L$ is the loss function, and $\theta^{*}$ are the optimized neural network model parameters.
Optionally, the loss function is expressed as:

$$L(\theta) = \frac{1}{K} \sum_{k=1}^{K} \left\| f(\theta; I_k) - T_k \right\|_2^2$$

where $T_k$ denotes the $k$-th original target image used for training, $I_k$ is the one-dimensional light field intensity sequence corresponding to the target object image $T_k$, and $K$ is the total amount of data in the original target image dataset.
From the above technical solutions, the embodiments of the present application have the following advantages:
in an embodiment of the present application, an underwater ghost imaging system based on edge calculation is provided, comprising: a light source, a light modulation device, a projection lens, a converging lens, a light intensity detector and an edge calculation module. The first end of the edge computing module is connected with the light modulation device and is used for sending speckle to the light modulation device. The light source is used for emitting light to an effective modulation region of the light modulation device. The light modulation device is used for modulating the light with speckle and reflecting the modulated light, so that the reflected light passes through the projection lens along the optical axis of the projection lens and is then directed at a target object in a water body. The converging lens is used for converging the light reflected by the target object. The light intensity detector is used for collecting the light intensity information of the converged light. The second end of the edge computing module is connected with the light intensity detector and is used for controlling the light intensity detector to collect the light intensity information, and for obtaining an image of the target object through a deep learning image reconstruction algorithm according to the light intensity information.
The underwater ghost imaging system based on edge calculation comprises a light source, a light modulation device, a projection lens, a converging lens, a light intensity detector and an edge calculation module; it is compact in structure, convenient to install and simple to operate, and the image of the target object is obtained from the light intensity information collected by the system through a deep learning image reconstruction algorithm. By adopting edge computing technology, in cooperation with the base station and the cloud computing platform, the computing capability of the system is greatly improved and the system is more practical. In addition, owing to the orthogonality of the Fourier sinusoidal speckles, an effective light intensity sequence from which the target image can be reconstructed is obtained even below the Nyquist sampling rate; combined with the deep learning image reconstruction algorithm provided by the application, the target image can be quickly reconstructed, noise interference in the image is removed, image details are enhanced, and image quality is improved. This is beneficial to the research and development of underwater imaging technology and deep learning technology. Therefore, the technical problems of large calculation load, low imaging resolution and poor imaging quality in existing underwater ghost imaging are solved.
Drawings
FIG. 1 is a block diagram of an edge-computation-based underwater ghosting imaging system provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a deep learning image reconstruction method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a neural network model training process in the present application;
FIG. 4 is a schematic diagram showing the components of modules A, B, C, D, E, F, G, H, and I in FIG. 3;
fig. 5 is a schematic diagram of the composition of the dense block of fig. 4.
Reference numerals: 101. light source; 102. light modulation device; 103. projection lens; 104. converging lens; 105. light intensity detector; 106. edge calculation module; 107. base station.
Detailed Description
In order to make the present application solution better understood by those skilled in the art, the following description will clearly and completely describe the technical solution in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Referring to fig. 1, an underwater ghost imaging system based on edge calculation according to an embodiment of the present application includes: a light source 101, a light modulation device 102, a projection lens 103, a converging lens 104, a light intensity detector 105, an edge calculation module 106;
a first end of the edge calculation module 106 is connected to the optical modulation device 102 for sending speckle to the optical modulation device 102;
light source 101: for emitting light to an effective modulation region of the light modulation device 102;
light modulation device 102: the device is used for modulating light rays through speckle and reflecting the modulated light rays, so that the reflected light rays pass through the projection lens along the optical axis of the projection lens 103 and then are emitted to a target object in a water body;
converging lens 104: for converging light reflected by the object;
the light intensity detector 105: the light intensity information acquisition module is used for acquiring the light intensity information of the converged light rays;
the second end of the edge computing module 106 is connected with the light intensity detector 105, and is used for controlling the light intensity detector 105 to collect light intensity information, and obtaining an image of the target object through a deep learning image reconstruction algorithm according to the light intensity information.
The edge calculation module sends the Fourier sinusoidal speckle to the light modulation device 102, which modulates the light emitted by the light source 101. The modulated light is projected onto the target object in the water body through the projection lens 103, reflected by the target object, and converged by the converging lens 104 onto the light intensity detector 105, where it is collected to obtain a light field intensity sequence. The edge calculation module contains an image model obtained by training a network model with light field intensity sequence samples, so the image of the target object can be rapidly output simply by inputting the acquired light field intensity sequence into the edge calculation module, which processes it with the image model.
In a specific mounting manner, the light modulation device 102 is first fixed, and the projection lens 103 is mounted on the right side of the light modulation device 102 so that the optical axis of the projection lens 103 passes through the center of the effective modulation area of the light modulation device 102. Then, the light source 101 is installed at the lower right of the light modulation device 102, so that the light emitted by the light source 101 covers the effective modulation area of the light modulation device and, after the central ray of the light source 101 is reflected by the light modulation device 102, the reflected light passes through the projection lens 103 along its optical axis. The projected light is then directed at the target object, and the converging lens 104 is mounted directly in front of the light intensity detector 105 such that the focal point of the converging lens 104 is at the center of the effective detection area of the light intensity detector 105.
The arrangement of the positional relationship of each module in the system of the above-described installation mode is merely an example of the present embodiment, and those skilled in the art may also perform the arrangement according to actual situations, and is not limited herein.
The underwater ghost imaging system based on edge calculation comprises a light source, a light modulation device, a projection lens, a converging lens, a light intensity detector and an edge calculation module; it is compact in structure, convenient to install and simple to operate, and the image of the target object is obtained from the light intensity information collected by the system through a deep learning image reconstruction algorithm. By adopting edge computing technology, in cooperation with the base station and the cloud computing platform, the computing capability of the system is greatly improved and the system is more practical. In addition, owing to the orthogonality of the Fourier sinusoidal speckles, an effective light intensity sequence from which the target image can be reconstructed is obtained even below the Nyquist sampling rate; combined with the deep learning image reconstruction algorithm provided by the application, the target image can be quickly reconstructed, noise interference in the image is removed, image details are enhanced, and image quality is improved. The invention is beneficial to the research and development of underwater imaging technology and deep learning technology. Therefore, the technical problems of large calculation load, low imaging resolution and poor imaging quality in underwater ghost imaging are solved.
In a specific embodiment, the edge-based underwater ghost imaging system of the present application further comprises: a cloud computing center; the cloud computing center is in communication connection with the edge computing module;
the cloud computing center: and acquiring the image of the target object through a deep learning image reconstruction algorithm according to the light intensity information when the pixel size of the image of the target object is larger than a preset size.
It should be noted that, according to the applicant's experimental analysis, when the pixel size of the processed image is greater than 256×256, the speed at which the edge computing module generates the image has no obvious advantage over the prior art. Therefore, to ensure the reliability of the underwater ghost imaging system based on edge calculation, and in view of the edge computing module's lack of an obvious advantage in large-size image processing, the cloud computing center is configured to generate the image of the target object using the prior art in such cases. Those skilled in the art can set the preset size according to the actual situation.
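The dispatch rule described above can be sketched as a small routing function. This is a minimal illustration, not the patent's implementation; the function name and the tuple-based preset are assumptions, and only the 256×256 threshold comes from the text.

```python
def choose_reconstruction_site(height, width, preset=(256, 256)):
    """Return 'edge' when the target image is small enough for the edge
    computing module, and 'cloud' once its pixel size exceeds the preset
    size (256x256 in the applicant's experiments)."""
    return "cloud" if (height > preset[0] or width > preset[1]) else "edge"
```

A 128×128 reconstruction would thus stay on the edge computing module, while a 512×512 one would be handed off to the cloud computing center.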
In a specific embodiment, the edge-based underwater ghost imaging system of the present application further comprises: the cloud storage platform is respectively in communication connection with the cloud computing center and the edge computing module;
the cloud storage platform is used for: and storing the image of the target object.
It should be noted that, in order to ensure the speed and reliability of generating the target image, this embodiment further provides a cloud storage platform for storing the image of the target object.
In a specific embodiment, the edge-based underwater ghost imaging system of the present application further comprises: the display terminal is in communication connection with the cloud storage platform;
the display terminal is used for: and receiving the image of the target object sent by the cloud storage platform and displaying the image of the target object.
In order to enable the user to view the image of the target object intuitively and in real time, and to improve the practicability of the system, this embodiment also provides a display terminal to display the image of the target object; the display terminal can take various forms, such as a mobile phone, a tablet computer or a PC.
In a specific embodiment, the underwater ghost imaging system based on edge calculation is provided, wherein the converging lens is arranged right in front of the light intensity detector, so that the focus of the converging lens is positioned at the center of the effective detection area of the light intensity detector.
The above is the embodiment of the underwater ghost imaging system based on edge calculation provided in the embodiments of the present application; the embodiment of the deep learning image reconstruction method provided in the embodiments of the present application is described below.
Referring to fig. 2, 3, 4, and 5, a deep learning image reconstruction method provided in an embodiment of the present application includes:
and 101, loading the speckle sequence onto a light modulation device to modulate light, and shooting the modulated light to a target object through a projection lens.
In this embodiment, the Fourier sinusoidal speckle patterns $P_i(x, y)$ are loaded onto the light modulation device, and the illumination beam is modulated by the Fourier sinusoidal speckle. The modulated light beam is projected onto the target object by the projection lens, the reflected light beam is converged to the light intensity detector by the converging lens, and the light field intensity value $I_i$ is recorded by the light intensity detector, represented as follows:

$$I_i = \sum_{x, y} P_i(x, y) \, T(x, y)$$

In the above, $T(x, y)$ is the target function, $i = 1, 2, \ldots, M$, $M$ is the total number of Fourier sinusoidal speckles, and $(x, y)$ are the pixel coordinates. The pixel size of the Fourier sinusoidal speckle is $m \times n$.
Corresponding to the speckle sequence $\{P_i\}_{i=1}^{M}$, a one-dimensional light field intensity sequence can be obtained: $I = [I_1, I_2, \ldots, I_M]$.
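The bucket measurement just described can be simulated in a few lines of NumPy. This is a sketch under stated assumptions: the speckle amplitude/offset of 0.5, the single shared phase, and all function names are illustrative, not taken from the patent.

```python
import numpy as np

def fourier_speckle(m, n, fx, fy, phase=0.0):
    """One m x n Fourier sinusoidal speckle pattern P_i(x, y) with
    spatial frequency (fx, fy); offset and amplitude 0.5 are assumptions."""
    y, x = np.mgrid[0:m, 0:n]
    return 0.5 + 0.5 * np.cos(2 * np.pi * (fx * x / n + fy * y / m) + phase)

def measure_intensity_sequence(target, freqs):
    """Light field intensity sequence I = [I_1, ..., I_M], where each
    I_i = sum_{x,y} P_i(x, y) * T(x, y) for a target image T."""
    m, n = target.shape
    return np.array([np.sum(fourier_speckle(m, n, fx, fy) * target)
                     for fx, fy in freqs])
```

For a uniform 4×4 target, the zero-frequency pattern is all ones, so the first bucket value equals the pixel count of the target.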
Step 102: detect the light intensity of the light converged onto the light intensity detector by the converging lens, obtaining the light intensity sequence.
It should be noted that, the light is the light reflected by the target object in step 101, and the light is converged on the light intensity detector through the converging lens, and the light intensity detector collects the light intensity sequence of the light.
In this embodiment, after training is performed for a preset number of iterations, the trained network model parameters $\theta^{*}$ are obtained. The number of training iterations can be set by those skilled in the art according to the actual situation, and is not limited here.
Taking $K$ images with a pixel size of $m \times n$ as original target object images, and for the $k$-th original target image $T_k$, obtaining a one-dimensional light field intensity sequence $I_k = [I_{k,1}, I_{k,2}, \ldots, I_{k,M}]$ by calculation, where $I_{k,i}$ denotes the $i$-th light field intensity value, $i = 1, 2, \ldots, M$, and $M$ is the total number of measurements.
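The construction of the training pairs $(I_k, T_k)$ can be sketched as follows. Here random measurement patterns stand in for the Fourier sinusoidal speckles, and random images stand in for the original target images; both substitutions, and all names, are illustrative assumptions.

```python
import numpy as np

def make_dataset(K, m, n, M, seed=0):
    """Build K (intensity sequence, target image) training pairs:
    I_k[i] = sum_{x,y} P_i(x, y) * T_k(x, y) for M patterns P_i."""
    rng = np.random.default_rng(seed)
    patterns = rng.random((M, m, n))    # M measurement patterns (stand-ins)
    targets = rng.random((K, m, n))     # K original target images T_k
    # einsum contracts the pixel axes, leaving one M-length sequence per image
    sequences = np.einsum('imn,kmn->ki', patterns, targets)
    return sequences, targets
```

Each row of `sequences` is the one-dimensional light field intensity sequence for the corresponding target image.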
Note that the neural network model is expressed as:
$$\hat{T} = f(\theta; I)$$

The above indicates that a hidden function $f$ establishes a mapping between the one-dimensional light field intensity sequence $I$ and the output image $\hat{T}$, where $\theta$ denotes the neural network model parameters.
Wherein, the neural network model parameters are expressed as:
$$\theta^{*} = \arg\min_{\theta} \frac{1}{K} \sum_{k=1}^{K} \left\| f(\theta; I_k) - T_k \right\|_2^2 + \lambda \left\| \theta \right\|_2^2$$

where $\left\| \cdot \right\|_2$ denotes the 2-norm, $\lambda$ is a regularization parameter, $T_k$ denotes the $k$-th original target image used for training, $I_k$ is the one-dimensional light field intensity sequence corresponding to the target object image $T_k$, $K$ is the amount of data in the original target image dataset, $L$ is the loss function, and $\theta^{*}$ are the optimized neural network model parameters.
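The regularized objective being minimized above can be evaluated numerically as a data-fit term plus a 2-norm penalty on the parameters. A minimal sketch, assuming the network outputs are already computed; the function name and the default $\lambda$ are illustrative.

```python
import numpy as np

def regularized_objective(predictions, targets, theta, lam=1e-4):
    """(1/K) * sum_k ||f(theta; I_k) - T_k||_2^2 + lam * ||theta||_2^2,
    with predictions = [f(theta; I_k)] and targets = [T_k]."""
    K = len(targets)
    data_term = sum(np.sum((p - t) ** 2)
                    for p, t in zip(predictions, targets)) / K
    return float(data_term + lam * np.sum(theta ** 2))
```

With `lam=0` this reduces to the plain loss function $L(\theta)$ given below.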
Note that the loss function is expressed as:
$$L(\theta) = \frac{1}{K} \sum_{k=1}^{K} \left\| f(\theta; I_k) - T_k \right\|_2^2$$

where $T_k$ denotes the $k$-th original target image used for training, $I_k$ is the one-dimensional light field intensity sequence corresponding to the target object image $T_k$, and $K$ is the total amount of data in the original target image dataset.
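The loss function above is a mean of per-sample squared 2-norm errors, and can be written directly in NumPy. The function name is an assumption for illustration.

```python
import numpy as np

def mse_loss(predictions, targets):
    """L(theta) = (1/K) * sum_k ||f(theta; I_k) - T_k||_2^2, where
    predictions holds the network outputs and targets the images T_k."""
    K = len(targets)
    return float(sum(np.sum((p - t) ** 2)
                     for p, t in zip(predictions, targets)) / K)
```

A perfect reconstruction gives zero loss; an all-zero prediction against an all-ones 2×2 target gives a loss of 4.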
Step 104: input the light intensity sequence into the trained neural network model, and output the image of the target object $\hat{T}$.
For a given underwater target object, the one-dimensional light field intensity sequence $I$ is measured; then, using the trained neural network model, a high-resolution, high-quality target object image $\hat{T}$ can be reconstructed, i.e. $\hat{T} = f(\theta^{*}; I)$.
In an optional embodiment, the deep learning image reconstruction method of the present application further includes: during the training of the neural network model, adding $K$ noise level maps, where the pixel values of the $k$-th noise level map are all $\sigma_k$, $k = 1, 2, \ldots, K$, and $\sigma_k$ is randomly generated within a preset range.
It should be noted that, adding the noise level diagram can obtain the denoising neural network model, which is used for removing noise in the image and improving the resolution and quality of the image.
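A noise level map as described here is simply a constant image whose value is a randomly drawn $\sigma_k$, in the style of noise-conditioned denoising networks. The sketch below is illustrative; the default range is an assumption, since the patent's specific range appears only as an unextracted formula image.

```python
import numpy as np

def noise_level_map(m, n, sigma_range=(0.0, 50.0), seed=None):
    """One m x n noise level map whose pixels all equal sigma_k, drawn
    uniformly from sigma_range (the range itself is an assumption)."""
    rng = np.random.default_rng(seed)
    sigma_k = rng.uniform(*sigma_range)
    return np.full((m, n), sigma_k)
```

One such map would be generated per training image and fed to the network alongside it, conditioning the model on the noise strength it should remove.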
The deep learning image reconstruction method is applied to the underwater ghost imaging system based on edge calculation in the embodiment, and because the Fourier sine speckles have orthogonality, an effective light intensity sequence capable of reconstructing a target image can be obtained under the condition of being lower than Nyquist sampling, the deep learning image reconstruction algorithm can rapidly reconstruct the target image, remove noise interference in the image, enhance image details and improve image quality. The method is beneficial to research and development of underwater imaging technology and deep learning technology. Therefore, the technical problems of large calculated amount, low imaging resolution and poor imaging quality in underwater ghost imaging are solved.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing apparatus embodiment for the specific working process of the above-described method, which is not described in detail herein.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be capable of operation in sequences other than those illustrated or described herein, for example. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is merely a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium and including several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The above embodiments are merely intended to illustrate the technical solution of the present application, not to limit it. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.
Claims (8)
1. An edge-computing-based underwater ghost imaging system, comprising: a light source, a light modulation device, a projection lens, a converging lens, a light intensity detector, an edge computing module, and a cloud computing center;
a first end of the edge computing module is connected with the light modulation device and is used for sending speckle patterns to the light modulation device, the speckle patterns being Fourier sinusoidal speckle patterns;
the light source is used for emitting light toward an effective modulation region of the light modulation device;
the light modulation device is used for modulating the light with the speckle patterns and reflecting the modulated light, so that the reflected light passes through the projection lens along the optical axis of the projection lens and is then directed at a target object in a body of water;
the converging lens is used for converging light reflected by the target object;
the light intensity detector is used for acquiring light intensity information of the converged light;
a second end of the edge computing module is connected with the light intensity detector and is used for controlling the light intensity detector to acquire the light intensity information and for obtaining an image of the target object from the light intensity information through a deep learning image reconstruction algorithm;
the cloud computing center is in communication connection with the edge computing module;
the cloud computing center is used for obtaining the image of the target object from the light intensity information through the deep learning image reconstruction algorithm when the pixel size of the image of the target object is larger than a preset size.
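Claim 1 names Fourier sinusoidal speckle as the illumination patterns sent to the light modulation device. A minimal NumPy sketch of generating such patterns is shown below; the 64 x 64 resolution and the four-step phase shifts are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def fourier_sinusoidal_patterns(rows, cols, fx, fy,
                                phases=(0.0, np.pi / 2, np.pi, 3 * np.pi / 2)):
    """Phase-shifted sinusoidal patterns for one spatial frequency (fx, fy),
    scaled to [0, 1] so they can drive an amplitude-only light modulator."""
    y, x = np.mgrid[0:rows, 0:cols]
    arg = 2 * np.pi * (fx * x / cols + fy * y / rows)
    return np.stack([0.5 + 0.5 * np.cos(arg + p) for p in phases])

patterns = fourier_sinusoidal_patterns(64, 64, fx=3, fy=5)
```

In practice one such four-pattern group would be generated per sampled spatial frequency, and the groups concatenated into the speckle sequence loaded onto the modulator.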
2. The edge-computing-based underwater ghost imaging system of claim 1, further comprising a cloud storage platform, the cloud storage platform being in communication connection with the cloud computing center and the edge computing module respectively;
the cloud storage platform is used for storing the image of the target object.
3. The edge-computing-based underwater ghost imaging system of claim 2, further comprising a display terminal in communication connection with the cloud storage platform;
the display terminal is used for receiving the image of the target object sent by the cloud storage platform and displaying the image of the target object.
4. The edge-computing-based underwater ghost imaging system of claim 1, wherein the converging lens is mounted directly in front of the light intensity detector such that the focal point of the converging lens is located at the center of the effective detection area of the light intensity detector.
5. A deep learning image reconstruction method applied to the edge-computing-based underwater ghost imaging system of any one of claims 1 to 4, comprising:
loading a speckle sequence onto the light modulation device to modulate light, and projecting the modulated light onto a target object through the projection lens, the speckle patterns being Fourier sinusoidal speckle patterns;
detecting the intensity of the light converged onto the light intensity detector by the converging lens, to obtain a light intensity sequence;
training the neural network model with a data set $D$ and adding $K$ noise level maps, where the pixel values of the $i$-th noise level map are all equal to a noise level $\sigma_i$ randomly generated within the range $[0, \sigma_{\max}]$, and optimizing the loss function with the ADAM optimizer until training reaches a preset number of iterations, to obtain trained neural network model parameters;
inputting the light intensity sequence into a trained neural network model, and outputting an image of a target object;
wherein the neural network model is expressed as:

$$\hat{O} = f(S; \theta)$$

the above expression uses the hidden function $f$ to establish a mapping between a one-dimensional light field intensity sequence $S$ and the output image $\hat{O}$ of the target object; $\theta$ represents the neural network model parameters, $O^{(i)}$ denotes the $i$-th original target image used for training, $S^{(i)}$ is the one-dimensional light field intensity sequence corresponding to the target image $O^{(i)}$, and $S$ is the measured one-dimensional light field intensity sequence.
6. The deep learning image reconstruction method of claim 5, wherein the data set $D$ is produced as follows:

taking $K$ images of pixel size $m \times n$ as original target images, and for the $i$-th original target image $O^{(i)}$, obtaining the corresponding one-dimensional light field intensity sequence $S^{(i)}$ by calculation.
7. The deep learning image reconstruction method of claim 5, wherein the neural network model parameters are expressed as:

$$\theta^{*} = \arg\min_{\theta} L(\theta)$$

where $\|\cdot\|_2$ represents the 2-norm, $\alpha$ is a regularization parameter, $O^{(i)}$ denotes the $i$-th original target image used for training, $S^{(i)}$ is the one-dimensional light field intensity sequence corresponding to the target image $O^{(i)}$, $K$ is the number of original target images in the data set, $L$ is the loss function, and $\theta^{*}$ are the optimized neural network model parameters.
8. The deep learning image reconstruction method of claim 5, wherein the loss function is expressed as:

$$L(\theta) = \frac{1}{K}\sum_{i=1}^{K}\left\|f\left(S^{(i)}; \theta\right) - O^{(i)}\right\|_{2}^{2} + \alpha\,\|\theta\|_{2}^{2}$$
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110594841.XA CN113192154B (en) | 2021-05-28 | 2021-05-28 | Underwater ghost imaging system based on edge calculation and deep learning image reconstruction method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110594841.XA CN113192154B (en) | 2021-05-28 | 2021-05-28 | Underwater ghost imaging system based on edge calculation and deep learning image reconstruction method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113192154A CN113192154A (en) | 2021-07-30 |
CN113192154B true CN113192154B (en) | 2023-05-23 |
Family
ID=76986314
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110594841.XA Active CN113192154B (en) | 2021-05-28 | 2021-05-28 | Underwater ghost imaging system based on edge calculation and deep learning image reconstruction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113192154B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111986118A (en) * | 2020-08-31 | 2020-11-24 | 广东工业大学 | Underwater calculation ghost imaging image denoising method and system with minimized weighted nuclear norm |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105807289B (en) * | 2016-05-04 | 2017-12-15 | 西安交通大学 | Supercomputing relevance imaging system and imaging method based on preset modulated light source |
US11315221B2 (en) * | 2019-04-01 | 2022-04-26 | Canon Medical Systems Corporation | Apparatus and method for image reconstruction using feature-aware deep learning |
CN111833265A (en) * | 2020-06-15 | 2020-10-27 | 北京邮电大学 | Ghost imaging image recovery scheme based on group sparse cyclic modulation |
- 2021-05-28 CN CN202110594841.XA patent/CN113192154B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111986118A (en) * | 2020-08-31 | 2020-11-24 | 广东工业大学 | Underwater calculation ghost imaging image denoising method and system with minimized weighted nuclear norm |
Also Published As
Publication number | Publication date |
---|---|
CN113192154A (en) | 2021-07-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10608002B2 (en) | Method and system for object reconstruction | |
JP6107537B2 (en) | Imaging system and image processing method thereof, image processing apparatus and image processing method thereof, and program | |
Reynolds et al. | Capturing time-of-flight data with confidence | |
CN108957514B (en) | A kind of nuclear radiation detection method | |
CN108895985B (en) | Object positioning method based on single-pixel detector | |
WO2013172676A1 (en) | Method and apparatus for providing panorama image data | |
US20130335535A1 (en) | Digital 3d camera using periodic illumination | |
CN108271410A (en) | Imaging system and the method using the imaging system | |
CN105580052A (en) | Estimation of food volume and carbs | |
CN112633181B (en) | Data processing method, system, device, equipment and medium | |
CN111047650B (en) | Parameter calibration method for time-of-flight camera | |
CN117897720A (en) | Denoising depth data for low signal pixels | |
JP7046745B2 (en) | Machine learning device, diagnostic imaging support device, machine learning method and diagnostic imaging support method | |
CN113192154B (en) | Underwater ghost imaging system based on edge calculation and deep learning image reconstruction method | |
JP6898150B2 (en) | Pore detection method and pore detection device | |
CN116823694B (en) | Infrared and visible light image fusion method and system based on multi-focus information integration | |
CN109804229B (en) | Electromagnetic wave phase amplitude generation device, electromagnetic wave phase amplitude generation method, and non-transitory recording medium storing electromagnetic wave phase amplitude generation program | |
CN107421439B (en) | A kind of no imageable target conspicuousness detection and coordinate tracking system and method | |
JP3793053B2 (en) | Radiation image processing apparatus, image processing system, radiation image processing method, storage medium, and program | |
US11734834B2 (en) | Systems and methods for detecting movement of at least one non-line-of-sight object | |
CN108981782B (en) | Method for realizing calculation correlation imaging by using mobile phone | |
CN114859377B (en) | Method and equipment for capturing single-pixel imaging of moving target in real time | |
WO2022250905A1 (en) | Specular reflection reduction in endoscope visualization | |
CN114627520A (en) | Living body detection model training method, system, equipment and storage medium | |
CN111445507A (en) | Data processing method for non-visual field imaging |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||