CN110334804B - All-optical depth diffraction neural network system and method based on spatial partially coherent light - Google Patents


Info

Publication number: CN110334804B (application number CN201910538817.7A)
Authority: CN (China)
Prior art keywords: optical, neural network, signal, coherent, spatial
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Other versions: CN110334804A
Other languages: Chinese (zh)
Inventors: Xie Hao (谢浩), Lin Xing (林星), Zhou Tiankuang (周天贶), Yan Tao (严涛), Wu Jiamin (吴嘉敏), Dai Qionghai (戴琼海)
Original and current assignee: Tsinghua University (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Events: application filed by Tsinghua University; priority to CN201910538817.7A; publication of CN110334804A; application granted; publication of CN110334804B; expired (fee related)


Classifications

    • G02F1/0136 — Devices for the control of light from an independent source: control of polarisation (state-of-polarisation control, polarisation scrambling, TE-TM mode conversion or separation)
    • G02F1/3515 — Non-linear optics: all-optical modulation, gating, switching (control of a light beam by another light beam)
    • G02F1/3551 — Non-linear optics characterised by the materials used: crystals
    • G06N20/00 — Machine learning
    • G06N3/067 — Physical realisation of neural networks using optical means
    • G06N3/0675 — Physical realisation of neural networks using electro-optical, acousto-optical or opto-electronic means


Abstract

The invention discloses an all-optical deep diffractive neural network system and method based on spatially partially coherent light. The system comprises: a conversion module for converting an input spatially partially coherent optical signal into a coherent optical signal; an all-optical deep diffractive neural network module for transforming, extracting features from, and compressing the coherent optical signal; and an information acquisition module for receiving the output signal of the all-optical deep diffractive neural network module and generating a processing result for the spatially partially coherent optical signal from that output. The system broadens the application domain of the all-optical deep diffractive neural network, enabling it to carry out more complex machine learning tasks, in particular recognition, processing, and computation on natural-scene images.

Description

All-optical depth diffraction neural network system and method based on spatial partially coherent light
Technical Field
The invention relates to the technical field of optoelectronic computing and machine learning, and in particular to an all-optical deep diffractive neural network system and method based on spatially partially coherent light.
Background
Deep learning uses multi-layer artificial neural networks implemented on computers to learn from data and can perform high-level tasks with performance comparable to, or even surpassing, that of humans. Recent examples of deep learning making significant progress in machine learning include medical image analysis, speech recognition, and image classification. Conventional deep learning networks are implemented in electronic circuits, and their speed is limited by electronic devices such as CPUs and GPUs; they therefore suffer from low running speed, low computational efficiency, and high energy consumption. Recently, an all-optical diffractive deep neural network built from passive optical elements has been proposed for all-optical machine learning. This architecture can execute neural-network-based functions at the speed of light and has clear advantages in parallel computing capability, power, and efficiency.
However, the all-optical diffractive deep neural network requires coherent light for computation, while most light in natural scenes is spatially partially coherent; the current all-optical diffractive deep neural network therefore cannot directly process partially coherent signals from natural scenes.
Disclosure of Invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, one object of the present invention is to propose an all-optical deep diffractive neural network system based on spatially partially coherent light, which enables the all-optical deep diffractive neural network to directly process spatially partially coherent optical signals.
Another object of the present invention is to provide an all-optical deep diffractive neural network method based on spatially partially coherent light.
To achieve the above objects, an embodiment of one aspect of the present invention provides an all-optical deep diffractive neural network system based on spatially partially coherent light, comprising: a conversion module for converting an input spatially partially coherent optical signal into a coherent optical signal; an all-optical deep diffractive neural network module for transforming, extracting features from, and compressing the coherent optical signal; and an information acquisition module for receiving the output signal of the all-optical deep diffractive neural network module and generating a processing result for the spatially partially coherent optical signal from that output.
The all-optical deep diffractive neural network system based on spatially partially coherent light according to the embodiment of the present invention executes neural-network-based functions on natural scenes at the speed of light using optical elements, creating an effective and fast way to realize machine learning tasks. It can implement large-scale neural networks for partially coherent light economically and efficiently, in a scalable and low-power manner, and has the potential to support a variety of complex applications, so that the all-optical deep diffractive neural network can carry out more complex machine learning tasks, in particular recognition, processing, and computation on natural-scene images.
In addition, the all-optical deep diffractive neural network system based on spatially partially coherent light according to the above embodiment of the present invention may further have the following additional technical features:
further, in one embodiment of the present invention, the conversion module includes: a lens for coupling an input spatially partially coherent optical signal to the optical conversion component; optical conversion means for converting the spatially partially coherent optical signal into the coherent optical signal; a coherent light source for providing energy to the coherent light signal.
Further, in an embodiment of the present invention, the conversion module is further configured to encode the optical conversion component with the coherent recording light and the spatially partially coherent optical signal, and convert the encoding of the optical conversion component into a coherent optical spatial distribution with the readout light.
Alternatively, in one embodiment of the present invention, the light conversion member may be a ferroelectric thin film member or a photorefractive member.
Further, in one embodiment of the present invention, the all-optical depth diffractive neural network module includes: an optical intensity modulation layer for intensity modulating the spatially propagated coherent optical signal with varying absorbance of an optical propagation medium; an optical phase modulation layer for phase modulating the spatially propagated coherent optical signal by changing a refractive index of an optical propagation medium; and the nonlinear modulation layer is used for carrying out nonlinear modulation on the phase and the intensity of the coherent optical signal which propagates in space by utilizing a nonlinear effect.
Further, in an embodiment of the present invention, wherein the optical intensity modulation layer and the optical phase modulation layer are manufactured by a 3D printing or photolithography technique, and parameters of the optical intensity modulation layer and the optical phase modulation layer are optimized by a deep learning method.
Alternatively, in one embodiment of the present invention, the nonlinear modulation layer may be made of SBN crystal.
Further, in one embodiment of the present invention, the nonlinear effect includes an electro-optic effect and a photorefractive effect of the crystal.
Further, in an embodiment of the present invention, the information collecting module includes: a lens for coupling the output signal of the all-optical depth diffraction neural network module to a detector; a detector for converting the output signal into an electrical signal.
To achieve the above objects, an embodiment of another aspect of the present invention provides an all-optical deep diffractive neural network method based on spatially partially coherent light, including: converting an input spatially partially coherent optical signal into a coherent optical signal; transforming, extracting features from, and compressing the coherent optical signal to obtain an output signal; and receiving the output signal and generating a processing result for the spatially partially coherent optical signal from the output signal.
The all-optical deep diffractive neural network method based on spatially partially coherent light according to the embodiment of the present invention executes neural-network-based functions on natural scenes at the speed of light using optical elements, creating an effective and fast way to realize machine learning tasks. It can implement large-scale neural networks for partially coherent light economically and efficiently, in a scalable and low-power manner, and has the potential to support a variety of complex applications, so that the all-optical deep diffractive neural network can carry out more complex machine learning tasks, in particular recognition, processing, and computation on natural-scene images.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic structural diagram of an all-optical deep diffractive neural network system based on spatially partially coherent light according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an all-optical deep diffractive neural network system based on spatially partially coherent light according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of the conversion module of an all-optical deep diffractive neural network based on spatially partially coherent light according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an all-optical deep diffractive neural network system based on spatially partially coherent light according to an embodiment of the present invention;
fig. 5 is a flowchart of a design method of an all-optical deep diffractive neural network based on spatially partially coherent light according to an embodiment of the present invention;
fig. 6 is a flowchart of a method of the all-optical deep diffractive neural network based on spatially partially coherent light according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes the all-optical deep diffractive neural network system and method based on spatially partially coherent light according to embodiments of the present invention with reference to the drawings; the system is described first.
Fig. 1 is a schematic structural diagram of an all-optical deep diffractive neural network system based on spatially partially coherent light according to an embodiment of the present invention.
As shown in fig. 1, the all-optical deep diffractive neural network system 100 based on spatially partially coherent light includes: a conversion module 10, an all-optical deep diffractive neural network module 20, and an information acquisition module 30.
The conversion module 10 is configured to convert an input spatially partially coherent optical signal into a coherent optical signal. The all-optical deep diffractive neural network module 20 transforms, extracts features from, and compresses the coherent optical signal. The information acquisition module 30 receives the output signal of the all-optical deep diffractive neural network module and generates a processing result for the spatially partially coherent optical signal from that output. The system 100 of the embodiment of the invention broadens the application domain of the all-optical deep diffractive neural network, enabling it to carry out more complex machine learning tasks, in particular recognition, processing, and computation on natural-scene images.
In other words, the conversion module 10 converts the input spatially partially coherent optical signal into a coherent optical signal; the all-optical deep diffractive neural network module 20 performs transformation, feature extraction, and compression on the coherent optical signal; and the information acquisition module 30 receives the output signal of module 20 and generates the information processing result. The system 100 aims to broaden the application domain of the all-optical deep diffractive neural network so that it can directly process the spatially partially coherent light scenes that are far more common in nature, and in particular can serve as a fast capture and compressive sensing device for rapidly changing scenes.
The all-optical deep diffractive neural network system 100 based on spatially partially coherent light is explained in detail below.
Further, in one embodiment of the present invention, as shown in fig. 2, the conversion module 10 includes: a lens 11, an optical conversion component 12, and a coherent light source 13.
The lens 11 couples the input spatially partially coherent optical signal to the optical conversion component 12. The optical conversion component 12 converts the spatially partially coherent optical signal into a coherent optical signal. The coherent light source 13 supplies the energy of the coherent optical signal.
Specifically, the lens 11 couples the input spatially partially coherent light onto the optical conversion component 12 and may be omitted under certain conditions. The optical conversion component 12 converts the spatially partially coherent light into coherent light carrying the input information; the energy of this coherent light comes from the coherent light source 13.
Further, in an embodiment of the present invention, the conversion module 10 is further configured to encode the optical conversion component 12 with coherent recording light and the spatially partially coherent optical signal, and to convert the encoding of the optical conversion component 12 into a coherent-light spatial distribution with readout light.
Specifically, the conversion module 10 consists of the optical conversion component, which is encoded by the coherent recording light together with the spatially partially coherent light, and the readout light, which converts the encoding of the optical conversion component into a coherent-light spatial distribution.
Optionally, in one embodiment of the invention, the optical conversion component may be a ferroelectric thin-film component or a photorefractive component, such as a KNbO3 crystal.
KNbO3 is a photorefractive crystal; the embodiment of the invention uses the photorefractive effect of the KNbO3 crystal to convert the spatially partially coherent light into coherent light carrying the optical information. The coherent light source 13 may use obliquely incident 364 nm recording light and 532 nm readout light. The structure is shown in fig. 3. Filters can be inserted between the natural scene and the crystal, and behind the crystal, to select the optical bands.
Further, in one embodiment of the present invention, as shown in fig. 2, the all-optical deep diffractive neural network module 20 includes: an optical intensity modulation layer 21, an optical phase modulation layer 22, and a nonlinear modulation layer 23.
The optical intensity modulation layer 21 intensity-modulates the spatially propagating coherent optical signal by varying the absorbance of the optical propagation medium. The optical phase modulation layer 22 phase-modulates the spatially propagating coherent optical signal by varying the refractive index of the optical propagation medium. The nonlinear modulation layer 23 nonlinearly modulates the phase and intensity of the spatially propagating coherent optical signal using nonlinear effects.
Specifically, the all-optical deep diffractive neural network module 20 consists of cascaded optical intensity modulation layers 21, optical phase modulation layers 22, and nonlinear modulation layers 23, where each layer type may appear zero or more times. The optical intensity modulation layer 21 modulates the intensity of the spatially propagating optical signal by changing the absorbance of the optical propagation medium; the optical phase modulation layer 22 modulates the phase by changing the refractive index of the optical propagation medium; and the nonlinear modulation layer 23 modulates the phase and intensity nonlinearly through nonlinear effects such as the electro-optic effect and the photorefractive effect of a crystal.
Optionally, in an embodiment of the present invention, the optical intensity modulation layer 21 and the optical phase modulation layer 22 are manufactured by 3D printing or photolithography, and their parameters are optimized by a deep learning method. The nonlinear modulation layer 23 may be an SBN crystal.
Specifically, the optical intensity modulation layer 21 and the optical phase modulation layer 22 may be manufactured by techniques such as 3D printing or photolithography, with their parameters optimized by a deep learning method. The nonlinear modulation layer 23 may be a photorefractive crystal such as an SBN crystal, which modulates the optical information nonlinearly via its electro-optic and photorefractive effects.
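The three layer types above can be sketched numerically as elementwise operations on a sampled complex light field. The following is a minimal illustration, not a physical model of the SBN crystal: the intensity-dependent phase term is only a generic stand-in for the crystal's nonlinear response, and all names and values are illustrative.

```python
import numpy as np

def modulate(field, absorbance, phase_shift, gamma=0.0):
    """One modulation step on a sampled complex light field.

    absorbance  : per-pixel fraction of amplitude absorbed, in [0, 1]
                  (intensity modulation layer, varying absorbance)
    phase_shift : per-pixel phase delay in radians
                  (phase modulation layer, varying refractive index)
    gamma       : strength of an intensity-dependent phase term,
                  a generic stand-in for the crystal's nonlinear
                  (electro-optic / photorefractive) response
    """
    field = field * (1.0 - absorbance)            # intensity modulation layer
    field = field * np.exp(1j * phase_shift)      # phase modulation layer
    if gamma:
        field = field * np.exp(1j * gamma * np.abs(field) ** 2)  # nonlinear layer
    return field

# Example: a uniform field attenuated by half and delayed a quarter wave
out = modulate(np.ones((4, 4), dtype=complex), 0.5, np.pi / 2)
```

A full module would simply chain such calls, interleaved with free-space propagation between the layers.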
Further, in an embodiment of the present invention, as shown in fig. 2, the information acquisition module 30 includes: a lens 31 and a detector 32.
The lens 31 couples the output signal of the all-optical deep diffractive neural network module 20 to the detector 32. The detector 32 converts the output signal into an electrical signal.
It is understood that the information acquisition module 30 includes the lens 31 and the detector 32. The lens 31 couples the signal output by the network to the detector 32 and may be omitted under certain conditions. The detector 32 converts the optical signal into an electrical signal.
The operation of the all-optical deep diffractive neural network system 100 based on spatially partially coherent light is further described below with reference to fig. 4.
As shown in fig. 4, the spatially partially coherent optical signal is projected through the lens 11 onto the optical conversion component 12, where, with the aid of the coherent light source 13, the spatially partially coherent light is converted into coherent light carrying the information. The coherent light enters the all-optical deep diffractive neural network module 20, where it diffracts through the optical intensity modulation layer 21, the optical phase modulation layer 22, and the nonlinear modulation layer 23; by modulating the spatial light-field distribution, the module performs image processing such as feature extraction and information compression. The output optical signal is coupled through the lens 31 and received by the detector 32.
In summary, the system 100 of the embodiment of the present invention includes the conversion module 10, the all-optical deep diffractive neural network module 20, and the information acquisition module 30. The conversion module 10 converts the input signal into optical information. The all-optical deep diffractive neural network module 20 is formed by alternately cascading several modules that linearly or nonlinearly modulate the intensity and phase of light, and performs signal transformation, feature extraction, and compression on the coherent optical signal carrying the spatially partially coherent light information. The information acquisition module 30 consists of a lens and a detector: it receives the output signal of the all-optical deep diffractive neural network module, couples it to the detector with the lens for acquisition, and generates the information processing result.
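The signal path just described can be condensed into a toy end-to-end sketch. Every stage here is a deliberately simplified stand-in: the conversion step is idealized as amplitude encoding, and a single far-field transform replaces the cascaded diffractive layers.

```python
import numpy as np

def run_system(intensity, phase_mask):
    """Toy pipeline: incoherent intensity -> idealized coherent
    conversion -> one diffractive phase layer -> far-field detector.
    Each stage is a simplified stand-in for the physical component."""
    field = np.sqrt(intensity)                 # conversion module (idealized)
    field = field * np.exp(1j * phase_mask)    # diffractive phase layer
    out_field = np.fft.fft2(field)             # propagation to the detector (far field)
    return np.abs(out_field) ** 2              # detector records intensity only

# Example: a uniform scene with a flat phase mask concentrates all
# detected energy in the zero-frequency pixel.
scene = np.ones((8, 8))
detector = run_system(scene, np.zeros((8, 8)))
```

The detector output is a purely real intensity image, matching the fact that the physical detector converts only intensity into an electrical signal.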
The process of building the all-optical deep diffractive neural network system 100 based on spatially partially coherent light is described in detail below with reference to specific embodiments. The parameters of the system 100 are obtained by building a numerical simulation model and optimizing it with a deep learning method. Fig. 5 shows the flow of the method for realizing the machine learning function according to an embodiment of the present invention; the specific steps are as follows:
a) Build a numerical simulation model of the all-optical deep diffractive neural network optical system based on spatially partially coherent light.
In the conversion module, the system of the embodiment of the invention feeds the spatially partially coherent light, as the system's signal light, onto the surface of the optical conversion component. After the optical conversion component, the output coherent signal light satisfies E_out(x, y) = f(I_in(x, y), x, y), where I_in(x, y) is the intensity of the incoherent light incident on the optical conversion component and E_out(x, y) is the coherent light field output at that point. f(I, x, y) is the mapping from the input light intensity at a chosen position to the output light field of the optical conversion component at that position; it is determined by the properties of the photorefractive material and the parameters of the recording and readout light. The output coherent signal light E_out(x, y) becomes the input of the all-optical deep diffractive neural network module.
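A minimal numerical placeholder for f can assume an idealized conversion in which the readout beam simply acquires the amplitude pattern sqrt(I_in). This identity-like form is only an illustrative assumption; the true f depends on the photorefractive material and the recording/readout beams, as stated above.

```python
import numpy as np

def incoherent_to_coherent(intensity, readout_phase=0.0):
    """Idealized stand-in for E_out(x, y) = f(I_in(x, y), x, y):
    the output coherent field carries the incoherent intensity
    pattern as its amplitude, sqrt(I_in), with the readout beam's
    phase.  The real mapping is material- and beam-dependent."""
    intensity = np.asarray(intensity, dtype=float)
    return np.sqrt(intensity) * np.exp(1j * readout_phase)

# Example: the converted field reproduces the input intensity exactly
I_in = np.array([[1.0, 4.0], [0.0, 9.0]])
E_out = incoherent_to_coherent(I_in)
```

Under this assumption the detected intensity of E_out equals I_in, which makes it a convenient baseline when simulating the rest of the network.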
In the all-optical deep diffractive neural network module, each diffractive layer can be expressed as a complex transmittance function t(x, y); the output light field of a diffractive layer is the product of its input light field and the transmittance function. The output light field of a nonlinear layer is a nonlinear transformation of its input light field, whose specific form is determined by the nonlinear material itself (for example, an SBN photorefractive crystal). The propagation of light between the different layers is described by the Fresnel propagation formula. After propagation, the output of the last diffractive layer is collected by the information acquisition module.
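The layer cascade and the inter-layer Fresnel propagation can be sketched with the standard Fourier-domain form of the Fresnel transfer function. This is a common textbook discretization rather than the patent's own code; grid size, wavelength, and spacing below are illustrative, and the constant global phase factor exp(ikz) is omitted.

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Propagate a square sampled complex field a distance z using
    the Fresnel (paraxial) transfer function in the Fourier domain.
    The global phase factor exp(ikz) is dropped for simplicity."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    h = np.exp(-1j * np.pi * wavelength * z * (fxx ** 2 + fyy ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * h)

def network_forward(field, layers, wavelength, dx, z):
    """Cascade as in the text: propagate to each diffractive layer,
    multiply by its complex transmittance t(x, y), then propagate
    the last layer's output to the acquisition plane."""
    for t in layers:
        field = fresnel_propagate(field, wavelength, dx, z) * t
    return fresnel_propagate(field, wavelength, dx, z)
```

Because the transfer function has unit magnitude, free-space propagation in this model conserves total energy, which is a useful sanity check on the discretization.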
In the information acquisition module, the detector receives the intensity or phase information of the output light field and stores or further processes it.
b) Optimize the structure and parameters of the all-optical deep diffractive neural network system based on spatially partially coherent light with a deep learning method.
A deep learning network is built from the simulation model, with the images to be processed as inputs and the correct results of the target task as ground truth, and suitable training, validation, and test sets are constructed. The parameters of the modulation layers, i.e. the transmittance and phase distribution of each diffractive layer, are adjusted iteratively using loss functions such as mean squared error or cross entropy and algorithms such as error backpropagation, and hyperparameters such as the distances between the diffractive layers are tuned to obtain the best optimization result.
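This optimization loop can be sketched on a toy task. The sketch makes several simplifying assumptions not in the original: a single phase layer, a far-field FFT as the propagation model, and a finite-difference gradient in place of true error backpropagation; the grid size and learning rate are arbitrary.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
x_in = np.ones((n, n), dtype=complex)               # toy input field
target = np.zeros((n, n)); target[0, 0] = n * n     # desired detector image

def detector(phi):
    # far-field intensity after one phase modulation layer
    return np.abs(np.fft.fft2(x_in * np.exp(1j * phi))) ** 2 / (n * n)

def loss(phi):
    return np.mean((detector(phi) - target) ** 2)   # mean squared error

phi = rng.uniform(0.0, 2.0 * np.pi, (n, n))         # phase layer parameters
lr, eps = 0.01, 1e-5
history = [loss(phi)]
for _ in range(150):
    grad = np.zeros_like(phi)
    for i in range(n):            # finite-difference gradient; a real
        for j in range(n):        # implementation would backpropagate
            d = np.zeros_like(phi); d[i, j] = eps
            grad[i, j] = (loss(phi + d) - loss(phi - d)) / (2.0 * eps)
    phi -= lr * grad
    history.append(loss(phi))
```

The loss decreases over the iterations as the phase layer learns to steer energy toward the target detector pixel; once optimized, the phase distribution would be the quantity handed to fabrication in step c).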
c) Realize the system physically with the parameters obtained from the simulation-based optimization.
The function of the all-optical deep diffractive neural network based on spatially partially coherent light can be realized by physically manufacturing the phase modulation layers with techniques such as 3D printing or photolithography and assembling the hardware system correctly according to the simulation model.
The system of the embodiment of the invention differs from conventional optoelectronic hybrid neural networks. By jointly adopting all-optical conversion from spatially partially coherent light to coherent light and an all-optical deep diffractive neural network, it completely avoids the signal delay introduced by electronic devices, so that acquisition, computation, feature extraction, and compression all run at the speed of light, greatly improving the efficiency of signal acquisition and processing.
Further, an embodiment of the present invention further provides a method for designing an all-optical depth diffraction neural network based on spatial partially coherent light, including:
establishing a numerical simulation model of the optical elements, acquiring a training set and a test set, training the numerical simulation model by deep learning with the error backpropagation algorithm on the training and test sets, optimizing the structure of the all-optical depth diffraction neural network based on spatial partially coherent light during training, and adjusting its parameters;
performing coherent-light encoding of incoherent light with the spatial partially coherent light-to-coherent light conversion module, building the actual all-optical depth diffraction neural network system based on spatial partially coherent light, and executing the target task with the built system.
In summary, the all-optical depth diffraction neural network system based on spatial partially coherent light according to the embodiment of the present invention effectively overcomes the limitations of existing all-optical artificial neural networks on machine learning tasks: the optical conversion component converts partially coherent light at high speed, the all-optical depth diffraction neural network extracts and processes optical signals at high speed, and the information acquisition module finally acquires the compressed signal, enabling more complex information processing tasks.
According to the all-optical depth diffraction neural network system based on spatial partially coherent light provided by the embodiment of the invention, neural network functions are executed on natural scenes by optical elements at the speed of light, creating an effective and fast way to carry out machine learning tasks. A large-scale neural network for partially coherent light can thus be realized economically and efficiently, in a scalable and low-power manner, with the potential to support a variety of complex applications; the network can therefore better complete complex machine learning tasks, in particular the recognition, processing, and computation of natural scene images.
The method of the all-optical depth diffraction neural network based on spatial partially coherent light according to an embodiment of the invention is described next with reference to the drawings.
Fig. 6 is a flowchart of an all-optical depth diffraction neural network method based on spatial partially coherent light according to an embodiment of the present invention.
As shown in fig. 6, the method for the all-optical depth diffraction neural network based on the spatial partially coherent light includes the following steps:
in step S601, the input spatial partially coherent optical signal is converted into a coherent optical signal.
In step S602, the coherent optical signal is transformed, extracted, and compressed to obtain an output signal.
In step S603, the output signal is received, and a processing result of the spatial partially coherent optical signal is generated from the output signal.
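Steps S601 to S603 can be tied together in a structural sketch, under the same simplifying assumptions as before: the conversion, diffractive network, and detection stages below are illustrative stand-in functions, not models of the physical devices, and the amplitude encoding, unitary-FFT propagation, and detector-region layout are all assumptions:

```python
import numpy as np

def convert_to_coherent(incoherent_image):
    """S601: encode the partially coherent input onto a coherent carrier
    (amplitude encoding assumed here for illustration)."""
    return incoherent_image.astype(complex)

def diffractive_network(field, phase_layers):
    """S602: alternate phase modulation and free-space propagation
    (a unitary FFT stands in for Fresnel propagation)."""
    for phase in phase_layers:
        field = np.fft.fft2(field * np.exp(1j * phase), norm="ortho")
    return field

def detect(field, num_classes=10):
    """S603: integrate output intensity over one detector region per class
    and report the brightest region as the result."""
    intensity = np.abs(field) ** 2
    regions = np.array_split(intensity.ravel(), num_classes)
    scores = np.array([r.sum() for r in regions])
    return int(np.argmax(scores))

n = 16
rng = np.random.default_rng(2)
layers = [rng.uniform(0, 2 * np.pi, (n, n)) for _ in range(3)]
image = rng.random((n, n))
label = detect(diffractive_network(convert_to_coherent(image), layers))
print(0 <= label < 10)  # True
```

The three functions correspond one-to-one with the three method steps, which is the only point of the sketch; the physical system performs each stage optically rather than numerically.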
It should be noted that the foregoing explanation of the embodiment of the all-optical depth diffraction neural network system based on the spatial partially coherent light is also applicable to the all-optical depth diffraction neural network method based on the spatial partially coherent light of the embodiment, and details are not described here.
According to the all-optical depth diffraction neural network method based on the spatial partially coherent light, which is provided by the embodiment of the invention, the function based on the neural network is executed on a natural scene by using the optical element at the light speed, so that a mode for effectively and quickly realizing a machine learning task is created; the method can economically and efficiently realize a large-scale neural network for the partially coherent light in an extensible and low-power-consumption mode, and has the potential of realizing various complex applications, so that the all-optical deep diffraction neural network can better complete more complex machine learning tasks, and particularly can complete natural scene image recognition processing and calculation tasks.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, a first feature being "on" or "under" a second feature may mean that the two features are in direct contact, or in indirect contact through an intermediate. Also, a first feature "on," "over," or "above" a second feature may be directly or obliquely above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature "under," "below," or "beneath" a second feature may be directly or obliquely under the second feature, or may simply mean that the first feature is at a lower level than the second feature.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (7)

1. An all-optical depth diffraction neural network system based on spatial partially coherent light, comprising:
a conversion module for converting an input spatial partially coherent optical signal into a coherent optical signal, the conversion module comprising: a lens for coupling the input spatial partially coherent optical signal to an optical conversion component; the optical conversion component, for converting the spatial partially coherent optical signal into the coherent optical signal; and a coherent light source for providing energy to the coherent optical signal; the conversion module being further configured to encode the optical conversion component using coherent recording light and the spatial partially coherent optical signal, and to convert the encoding of the optical conversion component into a coherent-light spatial distribution using readout light;
an all-optical depth diffractive neural network module, configured to transform, extract, and compress the coherent optical signal, wherein the all-optical depth diffractive neural network module comprises: an optical intensity modulation layer for intensity-modulating the spatially propagated coherent optical signal by varying the absorbance of an optical propagation medium; an optical phase modulation layer for phase-modulating the spatially propagated coherent optical signal by varying the refractive index of an optical propagation medium; and a nonlinear modulation layer for nonlinearly modulating the phase and intensity of the spatially propagated coherent optical signal using nonlinear effects; and
an information acquisition module for receiving the output signal of the all-optical depth diffraction neural network module and generating a processing result of the spatial partially coherent optical signal from the output signal.
2. The system of claim 1, wherein the light conversion component is a ferroelectric thin film component or a photorefractive component.
3. The system of claim 1, wherein,
the optical intensity modulation layer and the optical phase modulation layer are manufactured by 3D printing or photolithography, and their parameters are optimized by a deep learning method.
4. The system of claim 1, wherein the nonlinear modulation layer is an SBN crystal.
5. The system of claim 3, wherein the nonlinear effect comprises an electro-optic effect and a photorefractive effect of the crystal.
6. The system of claim 1, wherein the information collection module comprises:
a lens for coupling the output signal of the all-optical depth diffraction neural network module to a detector;
a detector for converting the output signal into an electrical signal.
7. An all-optical depth diffraction neural network method based on spatial partially coherent light is characterized by comprising the following steps:
converting an input spatial partially coherent optical signal into a coherent optical signal, which further comprises: coupling the input spatial partially coherent optical signal to an optical conversion component; converting the spatial partially coherent optical signal into the coherent optical signal; providing energy to the coherent optical signal; and encoding the optical conversion component with coherent recording light and the spatial partially coherent optical signal, and converting the encoding of the optical conversion component into a coherent-light spatial distribution with readout light;
transforming, extracting, and compressing the coherent optical signal to obtain an output signal, wherein the all-optical depth diffraction neural network module comprises: an optical intensity modulation layer for intensity-modulating the spatially propagated coherent optical signal by varying the absorbance of an optical propagation medium; an optical phase modulation layer for phase-modulating the spatially propagated coherent optical signal by varying the refractive index of an optical propagation medium; and a nonlinear modulation layer for nonlinearly modulating the phase and intensity of the spatially propagated coherent optical signal using nonlinear effects; and
receiving the output signal, and generating a processing result of the spatial partially coherent optical signal from the output signal.
CN201910538817.7A 2019-06-20 2019-06-20 All-optical depth diffraction neural network system and method based on spatial partially coherent light Expired - Fee Related CN110334804B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910538817.7A CN110334804B (en) 2019-06-20 2019-06-20 All-optical depth diffraction neural network system and method based on spatial partially coherent light

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910538817.7A CN110334804B (en) 2019-06-20 2019-06-20 All-optical depth diffraction neural network system and method based on spatial partially coherent light

Publications (2)

Publication Number Publication Date
CN110334804A CN110334804A (en) 2019-10-15
CN110334804B true CN110334804B (en) 2021-09-07

Family

ID=68142325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910538817.7A Expired - Fee Related CN110334804B (en) 2019-06-20 2019-06-20 All-optical depth diffraction neural network system and method based on spatial partially coherent light

Country Status (1)

Country Link
CN (1) CN110334804B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461300B (en) * 2020-03-30 2022-10-14 北京航空航天大学 Optical residual depth network construction method
CN111582468B (en) * 2020-04-02 2022-08-09 清华大学 Photoelectric hybrid intelligent data generation and calculation system and method
CN111582435B (en) * 2020-04-02 2023-04-18 清华大学 Diffraction depth neural network system based on residual error network
CN111683304B (en) * 2020-05-13 2021-12-14 中国科学院西安光学精密机械研究所 All-optical diffraction neural network and system realized on optical waveguide and/or optical chip
CN112101514B (en) * 2020-07-27 2022-06-07 北京航空航天大学 Diffraction neural network adopting pyramid structure diffraction layer for light supplement and implementation method
CN112418403B (en) * 2020-11-25 2022-06-28 清华大学 Optical diffraction computing processor based on optical diffraction principle and programmable device
CN113641210B (en) * 2021-10-12 2022-03-18 清华大学 Optoelectronic integrated circuit for message compression in message hash algorithm
CN115021826B (en) * 2022-04-29 2024-04-16 清华大学 Intelligent coding and decoding computing system and method for optical computing communication
CN116310719B (en) * 2023-02-10 2024-04-19 中国人民解放军军事科学院国防科技创新研究院 Time-frequency domain-based optical diffraction complex model training method and image processing method
CN117521746B (en) * 2024-01-04 2024-03-26 武汉大学 Quantized optical diffraction neural network system and training method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104977154A (en) * 2015-06-26 2015-10-14 清华大学 Defect classification method of spatial light modulator with sub pixel structures
JP2018205864A (en) * 2017-05-31 2018-12-27 日本電信電話株式会社 Optical neural network learning device
CN109211122A (en) * 2018-10-30 2019-01-15 清华大学 Ultraprecise displacement measurement system and method based on optical neural network
CN109477938A (en) * 2016-06-02 2019-03-15 麻省理工学院 Device and method for optical neural network
CN109871871A (en) * 2019-01-16 2019-06-11 南方科技大学 Image-recognizing method, device and electronic equipment based on optical neural network structure

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108365953B (en) * 2018-02-06 2020-06-23 中南大学 Adaptive differential phase shift quantum key distribution system based on deep neural network and implementation method thereof
CN109784486B (en) * 2018-12-26 2021-04-23 中国科学院计算技术研究所 Optical neural network processor and training method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104977154A (en) * 2015-06-26 2015-10-14 清华大学 Defect classification method of spatial light modulator with sub pixel structures
CN109477938A (en) * 2016-06-02 2019-03-15 麻省理工学院 Device and method for optical neural network
JP2018205864A (en) * 2017-05-31 2018-12-27 日本電信電話株式会社 Optical neural network learning device
CN109211122A (en) * 2018-10-30 2019-01-15 清华大学 Ultraprecise displacement measurement system and method based on optical neural network
CN109871871A (en) * 2019-01-16 2019-06-11 南方科技大学 Image-recognizing method, device and electronic equipment based on optical neural network structure

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
All-optical machine learning using diffractive deep neural networks; Xing Lin et al.; 《OPTICAL COMPUTING》; 2018-09-07; Vol. 361 (No. 6406); description pages 1004-1005, Fig. 1 *

Also Published As

Publication number Publication date
CN110334804A (en) 2019-10-15

Similar Documents

Publication Publication Date Title
CN110334804B (en) All-optical depth diffraction neural network system and method based on spatial partially coherent light
CN111582435B (en) Diffraction depth neural network system based on residual error network
Li et al. Class-specific differential detection in diffractive optical neural networks improves inference accuracy
CN110929864B (en) Optical diffraction neural network on-line training method and system
CN110309916B (en) Multi-stage space-frequency domain modulation nonlinear all-optical deep learning system and method
Xu et al. A multichannel optical computing architecture for advanced machine vision
CN104407506A (en) Compressive sensing theory-based digital holographic imaging device and imaging method
CN111582468B (en) Photoelectric hybrid intelligent data generation and calculation system and method
CN113033796A (en) Image identification method of all-optical nonlinear diffraction neural network
CN113780258A (en) Intelligent depth classification method and device for photoelectric calculation light field
CN114266702B (en) High-speed super-resolution imaging method and device based on compressed sensing and depth optics
Yang et al. High-fidelity image reconstruction for compressed ultrafast photography via an augmented-Lagrangian and deep-learning hybrid algorithm
Zhan et al. Diffractive deep neural network based adaptive optics scheme for vortex beam in oceanic turbulence
CN114519403B (en) Optical diagram neural classification network and method based on-chip diffraction neural network
Liu et al. Investigating deep optics model representation in affecting resolved all-in-focus image quality and depth estimation fidelity
CN204360096U (en) Based on the digital hologram imaging device of compressed sensing theory
CN111652372B (en) Wavefront restoration method and system based on diffractive optical neural network
CN114037069B (en) Neural network computing unit based on diffraction optics
Li et al. Deep-learning-based optical image hiding
CN100516978C (en) Mixed optical wavelet conversion method based on white light and monochromatic light
Feng et al. Optical Neural Networks for Holographic Image Recognition
CN116703728B (en) Super-resolution method and system for optimizing system parameters
CN117521746B (en) Quantized optical diffraction neural network system and training method thereof
CN111949067B (en) Dammann convolution optical computer
Bu Hybrid Neural Networks With Nonlinear Optics and Spatial Modes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210907