CN113517926A - Optical performance monitoring method, electronic device and computer readable storage medium - Google Patents

Optical performance monitoring method, electronic device and computer readable storage medium

Info

Publication number
CN113517926A
CN113517926A (application CN202110744536.4A)
Authority
CN
China
Prior art keywords
complex
neural network
signal
optical
optical signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110744536.4A
Other languages
Chinese (zh)
Inventor
揭水平
高明义
符小东
徐林鹏
沈一春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongtian Communication Technology Co ltd
Jiangsu Zhongtian Technology Co Ltd
Zhongtian Broadband Technology Co Ltd
Original Assignee
Zhongtian Communication Technology Co ltd
Jiangsu Zhongtian Technology Co Ltd
Zhongtian Broadband Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongtian Communication Technology Co ltd, Jiangsu Zhongtian Technology Co Ltd, Zhongtian Broadband Technology Co Ltd filed Critical Zhongtian Communication Technology Co ltd
Priority to CN202110744536.4A priority Critical patent/CN113517926A/en
Publication of CN113517926A publication Critical patent/CN113517926A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/07Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems
    • H04B10/075Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal
    • H04B10/079Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal using measurements of the data signal
    • H04B10/0795Performance monitoring; Measurement of transmission parameters
    • H04B10/07953Monitoring or measuring OSNR, BER or Q
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics


Abstract

The application provides an optical performance monitoring method, an electronic device, and a computer-readable storage medium. The optical performance monitoring method comprises the following steps: preprocessing the received optical signal to obtain a complex signal; generating a constellation diagram based on the complex signal; converting the constellation diagram into a grayscale map and extracting gray values from the grayscale map; converting the extracted gray values into complex values using a preset conversion algorithm; and inputting the complex values into a pre-established complex-valued neural network to obtain the modulation format and/or the optical signal-to-noise ratio of the optical signal. The method and the device use a complex-valued neural network to implement modulation format identification and/or estimation of the optical signal-to-noise ratio, and process the optical-signal data as images suited to the neural network, so that the processed data can be fed to the complex-valued neural network while retaining more of the feature information of the original data; the method and the device therefore achieve high optical performance monitoring precision at low cost.

Description

Optical performance monitoring method, electronic device and computer readable storage medium
Technical Field
The present application relates to the field of optical communication technologies, and in particular, to an optical performance monitoring method, an electronic device, and a computer-readable storage medium.
Background
Optical performance monitoring is an important component of optical network systems, and its development depends heavily on advances in digital signal processing technology. In coherent optical communication, it is important to select a superior Modulation Format Identification (MFI) and Optical Signal-to-Noise Ratio (OSNR) estimation method. Several promising schemes have been proposed: for example, a deep neural network that takes the amplitude histogram of a signal as input can implement modulation format recognition and OSNR monitoring, but it trades complexity for monitoring performance; an artificial neural network or support vector machine based on the cumulative distribution function can likewise implement modulation format recognition and OSNR monitoring, but such schemes show limitations in mass data processing and noise immunity.
Disclosure of Invention
In view of the foregoing, the present application provides an optical performance monitoring method, an electronic device, and a computer-readable storage medium, which utilize a complex neural network to implement modulation format identification and/or estimation of an optical signal-to-noise ratio, and have high optical performance monitoring accuracy and good noise tolerance.
An embodiment of the present application provides an optical performance monitoring method, including: preprocessing the received optical signals to obtain complex signals; generating a constellation diagram based on the complex signal; converting the constellation diagram into a gray level diagram, and extracting a gray level value from the gray level diagram; converting the extracted gray value into a complex value by using a preset conversion algorithm; and inputting the complex value into a pre-established complex value neural network to obtain the modulation format and/or the optical signal to noise ratio of the optical signal.
In some embodiments, the pre-processing the received optical signal to obtain a complex signal includes: converting the received optical signal into an electrical signal; performing analog-to-digital conversion processing on the electric signal to obtain a digital signal; and carrying out dispersion compensation and internal clock recovery processing on the digital signal to obtain the complex signal.
In some embodiments, the generating a constellation based on the complex signal comprises: and extracting a preset number of complex signal values from the complex signals to draw a ring-shaped constellation diagram.
In some embodiments, the converting the constellation map into a gray scale map and extracting gray scale values from the gray scale map includes: and converting the constellation diagram into a gray diagram of m × n pixels, and extracting m × n gray values from the gray diagram, wherein m and n are positive integers greater than 1.
In some embodiments, the inputting the complex value into the pre-established complex value neural network to obtain the modulation format and/or the optical signal-to-noise ratio of the optical signal includes: inputting the complex value to the first complex value neural network to obtain a modulation format of the optical signal; and inputting the complex value to the second complex value neural network to obtain the optical signal to noise ratio of the optical signal.
In some embodiments, the optimization algorithms in the first and second complex-valued neural networks are both the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm.
In some embodiments, the pixels of the gray scale map are m × n, the first complex-valued neural network is trained from sample optical signals of multiple modulation formats and m × n complex values corresponding to the sample optical signals, the first complex-valued neural network includes a first input layer, a first hidden layer, and a first fully-connected output layer, and the first input layer includes m × n neurons.
In some embodiments, the pixels of the grayscale map are m × n, the second complex-valued neural network is trained from sample optical signals of multiple osnr intervals and m × n complex values corresponding to the sample optical signals, the second complex-valued neural network includes a second input layer, a second hidden layer, and a second fully-connected output layer, and the second input layer includes m × n neurons.
An embodiment of the present application provides a computer-readable storage medium, which stores computer instructions, and when the computer instructions are executed on an electronic device, the electronic device executes the above-mentioned optical performance monitoring method.
An embodiment of the present application provides an electronic device, where the electronic device includes a processor and a memory, where the memory is used to store instructions, and the processor is used to call the instructions in the memory, so that the electronic device executes the above-mentioned optical performance monitoring method.
According to the optical performance monitoring method, electronic device, and computer-readable storage medium, modulation format identification and/or estimation of the optical signal-to-noise ratio is realized using a complex-valued neural network containing the L-BFGS algorithm, which occupies little memory and is well suited to small-scale optical module development. The optical-signal data is processed as images suited to the neural network, so that the processed data can be fed to the complex-valued neural network while retaining more of the feature information of the original data, giving high optical performance monitoring precision, low cost, and good noise tolerance.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
Fig. 1 is a flowchart illustrating steps of a method for monitoring optical performance according to an embodiment of the present disclosure.
Fig. 2a and 2b are schematic diagrams illustrating an architecture of a first complex neural network and a second complex neural network, respectively, according to an embodiment of the present disclosure.
FIG. 3 shows the modulation format recognition accuracy curves for QPSK, 8-QAM, 16-QAM, 32-QAM, 64-QAM and 128-QAM at different optical signal-to-noise ratios according to an embodiment of the present application.
Fig. 4 is a graph comparing an estimated optical signal-to-noise ratio value with a true optical signal-to-noise ratio value according to an embodiment of the present application.
Fig. 5 is a functional block diagram of an optical performance monitoring apparatus according to an embodiment of the present application.
Fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the present application can be more clearly understood, a detailed description of the present application will be given below with reference to the accompanying drawings and detailed description. In addition, the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present application, and the described embodiments are merely a subset of the embodiments of the present application, rather than all embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
The optical performance monitoring method can be applied to one or more electronic devices. An electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a processor, a Microcontroller Unit (MCU), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device may be a desktop computer, an optical communication device, a server, or other computing device.
FIG. 1 is a flowchart illustrating steps of an embodiment of a method for monitoring optical performance according to the present application. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
Referring to fig. 1, the optical performance monitoring method may include the following steps.
Step S11, the received optical signal is preprocessed to obtain a complex signal.
In one embodiment, reception of the optical signal may be implemented using an existing optical receiver; for example, the optical receiver may include a photodetector, a mixer, and balanced detection diodes.
In an embodiment, the preprocessing may include one or more of photoelectric conversion, analog-to-digital conversion, dispersion compensation, and internal clock recovery. For example, preprocessing the received optical signal to obtain a complex signal may include: a1) converting the received optical signal into an electrical signal; a2) performing analog-to-digital conversion on the electrical signal to obtain a digital signal; a3) performing dispersion compensation and internal clock recovery on the digital signal to obtain the complex signal, thereby recovering the original data signal transmitted by the optical transmitting end from the received optical signal.
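The dispersion-compensation step in the preprocessing above is commonly implemented as a frequency-domain all-pass filter that inverts the fiber's quadratic phase response. The patent does not specify its compensation algorithm, so the sketch below only illustrates this standard DSP technique; the parameter values (sampling rate, fiber length, dispersion coefficient, wavelength) are hypothetical.

```python
import numpy as np

def cd_compensate(x, fs, fiber_len, D=17e-6, lam=1550e-9):
    """Frequency-domain chromatic-dispersion compensation (standard sketch).

    x         : complex baseband samples
    fs        : sampling rate [Hz]
    fiber_len : fiber length [m]
    D         : dispersion parameter [s/m^2] (17 ps/nm/km by default)
    lam       : carrier wavelength [m]
    """
    c = 299792458.0
    f = np.fft.fftfreq(len(x), d=1.0 / fs)
    # Inverse of the fiber's quadratic phase response exp(-j*pi*lam^2*D*L*f^2/c)
    H_inv = np.exp(1j * np.pi * lam**2 * D * fiber_len * f**2 / c)
    return np.fft.ifft(np.fft.fft(x) * H_inv)
```

Because the compensating filter is the exact phase conjugate of the assumed fiber response, applying it to a dispersed signal restores the original samples up to numerical precision.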
Step S12 is to generate a constellation based on the complex signal.
In one embodiment, a predetermined number of complex signal values may be extracted from the complex signal to render a constellation diagram of a ring. The preset number can be set according to actual requirements, for example, 10000 complex signal values are extracted from the complex signal to draw a ring-shaped constellation diagram.
And step S13, converting the constellation diagram into a gray-scale diagram, and extracting gray-scale values from the gray-scale diagram.
In one embodiment, the constellation diagram may be converted into a gray scale diagram (gray scale of 0 to 255) of m × n pixels, where m and n are positive integers, and the values of m and n may be set according to the actual optical signal processing requirement. For example, the constellation map may be converted into a grayscale map of 50 × 50 pixels, and then 2500 grayscale values may be extracted from the grayscale map.
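A minimal way to realize the constellation-to-grayscale conversion is to bin the complex samples into a 2-D histogram over the I/Q plane and rescale the counts to the 0–255 gray range. The description does not spell out the binning scheme, so the sketch below is one plausible realization:

```python
import numpy as np

def constellation_to_gray(z, m=50, n=50):
    """Bin complex samples into an m x n grayscale image (values 0-255).

    z : 1-D array of complex signal values (e.g. 10000 samples).
    The 2-D histogram over the I/Q plane plays the role of the
    constellation image; counts are rescaled so the densest bin is 255.
    """
    hist, _, _ = np.histogram2d(z.real, z.imag, bins=(m, n))
    if hist.max() > 0:
        hist = hist * (255.0 / hist.max())
    return np.rint(hist).astype(np.uint8)
```

With the default 50 × 50 bins, flattening the returned image yields exactly the 2500 gray values mentioned in the description.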
And step S14, converting the extracted gray value into a complex value by using a preset conversion algorithm.
In one embodiment, the predetermined transformation algorithm may be selected according to actual requirements, for example, the predetermined transformation algorithm may include the following equations (i), (ii):
θ = π × (x − a)/(b − a) … (i);
z = e^(iθ) = cos θ + i × sin θ … (ii);
where z is the complex value, x is the gray value, i is the imaginary unit, a = 0, and b = 255. The 2500 extracted gray values can be converted into the corresponding 2500 complex values by equations (i) and (ii).
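Equations (i) and (ii) map each gray value onto a point of the unit circle. A direct sketch of this mapping:

```python
import numpy as np

def gray_to_complex(x, a=0.0, b=255.0):
    """Map gray values in [a, b] onto the upper unit semicircle.

    Implements equations (i) and (ii):
        theta = pi * (x - a) / (b - a)
        z     = e^{i*theta} = cos(theta) + i*sin(theta)
    """
    theta = np.pi * (np.asarray(x, dtype=float) - a) / (b - a)
    return np.exp(1j * theta)
```

Gray 0 maps to 1 + 0j and gray 255 to −1 + 0j, and every output has unit magnitude, so the information is carried entirely in the phase — which is what makes these values natural inputs for a complex-valued network.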
And step S15, inputting the complex value obtained by conversion into a pre-established complex value neural network to obtain the modulation format and/or optical signal-to-noise ratio of the optical signal.
In one embodiment, the pre-established complex-valued neural network may include a first complex-valued neural network and a second complex-valued neural network. The complex values obtained by conversion in step S14 may be input to a pre-established first complex-valued neural network to obtain the modulation format of the optical signal, and input to a pre-established second complex-valued neural network to obtain the optical signal-to-noise ratio of the optical signal.
Fig. 2a shows a network architecture of a first complex neural network 11 according to an embodiment of the present application.
The first complex-valued neural network 11 includes a first input layer 111, a first hidden layer 112, and a first fully-connected output layer 113. The training data of the first complex-valued neural network 11 may include sample optical signals of multiple modulation formats, and the sample optical signals may be processed to obtain a complex value corresponding to each sample optical signal, and the complex value corresponding to each sample optical signal is used as an input of the first complex-valued neural network 11, so as to implement training of the first complex-valued neural network 11.
For example, the sample optical signals of the plurality of modulation formats are sample optical signals of six modulation formats: Quadrature Phase Shift Keying (QPSK), 8-Quadrature Amplitude Modulation (8-QAM), 16-QAM, 32-QAM, 64-QAM, and 128-QAM. That is, the trained first complex-valued neural network 11 can recognize optical signals of these six modulation formats.
In one embodiment, as shown in fig. 2a, the first input layer 111 comprises 2500 neurons, 2500 neurons corresponding to 2500 complex values, the first hidden layer 112 comprises 19 neurons, the first fully-connected output layer 113 comprises 6 types of outputs, and the 6 types of outputs correspond to six modulation formats.
In one embodiment, the training process of the first complex-valued neural network 11 may include:
s1) For the training data of the six modulation formats, 300 data sets may be taken for each modulation format, with 70% (210 sets) of each format used as the training set and the remaining 30% (90 sets) as the test set, i.e., 1260 sets in total for training and 540 sets for testing;
s2) Initialize the weight coefficients W1, W2 and offset values b1, b2 of the first complex-valued neural network 11. The weight coefficient W1 and offset value b1 may depend on the input complex values and the size of the first hidden layer 112; the weight coefficient W2 and offset value b2 may depend on the sizes of the first hidden layer 112 and the first fully connected output layer 113;
for example, the first input layer 111 comprises 2500 neurons, the first hidden layer 112 comprises 19 neurons, W1Can be calculated by the following formula (iii), b1Can be calculated by the following equation (iv):
W1=(c+)d-c(*A)+j*(c+(d-c)*A)…(iii);
b1=(c+(d-c)*B)+j*(c+(d-c)*B)…(iv);
wherein, A is a matrix of 19 × 2500, B is a matrix of 19 × 1, j is an imaginary unit, a is-0.1, and B is 0.1;
s3) Train the first complex-valued neural network 11 on the input data of the training set, updating the weight coefficients W1, W2 and offset values b1, b2 using the split-sigmoid activation function; then output the parameters DW1C, DW2C, DB1C, and DB2C through backward propagation, so that new weight coefficients W1, W2 and offset values b1, b2 can be continuously updated according to the network learning rate;
For example, the loss function L may be defined as the mean square error:
L = (1/P) × Σ_{p=1..P} |O_p − Ô_p|²;
where P is the number of training samples, O_p is the training data, and Ô_p is the network output fitted to the training data;
for example, the continuous update of the new weight coefficient W according to the net learning rate can be realized by the following equation1、W2And an offset value b1、b2
W1-new=W1-learning_rate*DW1C;
W2-new=W2-learning_rate*DW2C;
b1-new=b1-learning_rate*DB1C;
b2-new=b2-learning_rate*DB2C;
Wherein, W1-newIs to a weight coefficient W1Updated weight coefficient, W2-newIs to a weight coefficient W2Updated weight coefficient, b1-newIs to the offset value b1Offset value obtained by updating b2-newIs to the offset value b2Updating to obtain an offset value, wherein the learning _ rate is a network learning rate; the network learning rate can be set according to actual requirements, for example, the network learning rate can be set to 0.01;
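The split-sigmoid activation and the learning-rate update described above can be sketched as follows. The convention that split-sigmoid applies a real-valued sigmoid separately to the real and imaginary parts is the usual one for complex-valued networks and is assumed here, as are the 2500-19-6 layer sizes:

```python
import numpy as np

def split_sigmoid(z):
    """Split-sigmoid: real sigmoid applied separately to Re(z) and Im(z)."""
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    return sig(np.real(z)) + 1j * sig(np.imag(z))

def forward(x, W1, b1, W2, b2):
    """One-hidden-layer complex forward pass (sketch of the 2500-19-6 net)."""
    h = split_sigmoid(W1 @ x + b1)   # first hidden layer
    return W2 @ h + b2               # fully connected output layer

def update(param, grad, learning_rate=0.01):
    """The update rule from the description: param_new = param - lr * grad."""
    return param - learning_rate * grad
```

Here `forward`, `update`, and the layer shapes are illustrative names; the backward-propagation parameters DW1C, DW2C, DB1C, DB2C would be the gradients passed to `update`.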
s4) Substitute the weight coefficients W1, W2 and offset values b1, b2 into the L-BFGS algorithm to solve for the optimal solution and assist in completing the training of the first complex-valued neural network 11;
The memory length of the L-BFGS algorithm can be set to 6–10; for example, in the first complex-valued neural network 11 the memory length is set to 6, with 100 iterations. Solving for the optimal solution with the L-BFGS algorithm gives the first complex-valued neural network 11 fast convergence and low memory overhead: it avoids the overfitting to which first-order gradient descent is prone as well as the high complexity of the second-order Newton method, making it highly practical and well suited to developing small-scale optical modules;
s5) Test the trained first complex-valued neural network 11 with the data of the test set and compute the accuracy on the test data. When the accuracy meets the preset requirement, the trained network is judged to meet the requirement; otherwise, adjust its network parameters (such as the number of hidden-layer neurons) and retrain it with the training set until the test accuracy meets the preset requirement.
It should be understood that the application is not limited to a first input layer 111 of 2500 neurons, a first hidden layer 112 of 19 neurons, and a first fully connected output layer 113 with 6 output types; the numbers of neurons in the first input layer 111 and first hidden layer 112 and the number of output types of the first fully connected output layer 113 can be adjusted according to the actual signal-processing effect or requirement. For example, if the first complex-valued neural network 11 is configured to identify optical signals of eight modulation formats, the first fully connected output layer 113 would have 8 output types; to process the optical signals into 2600 corresponding complex values, the first input layer 111 may have 2600 neurons and the first hidden layer 112 may have 20 neurons.
In one embodiment, the second complex-valued neural network 12 includes a second input layer 121, a second hidden layer 122 and a second fully-connected output layer 123. The training data of the second complex valued neural network 12 may include sample optical signals of various optical signal to noise ratio intervals, and the sample optical signals may be processed to obtain a complex value corresponding to each sample optical signal, and the complex value corresponding to each sample optical signal is used as an input of the second complex valued neural network 12, so as to train the second complex valued neural network 12. As shown in fig. 2b, the second input layer 121 includes 2500 neurons, 2500 neurons correspond to 2500 complex values, the second hidden layer 122 includes 21 neurons, the second fully-connected output layer 123 includes 8 types of outputs, and the 8 types of outputs correspond to eight optical signal-to-noise ratio intervals. The training mode of the second complex-valued neural network 12 can refer to the training process of the first complex-valued neural network 11 for training, and is not described herein again.
For example, the training data of the second complex-valued neural network 12 covers the three modulation formats QPSK, 16-QAM, and 64-QAM, with the OSNR intervals being eight commonly used OSNR ranges. Each OSNR interval takes 200 data sets, of which 70% (140 sets) are used as the training set and 30% (60 sets) as the test set, i.e., 1120 sets in total for training and 480 sets for testing. The network learning rate is set to 0.01, the memory length of the L-BFGS algorithm in the second complex-valued neural network 12 is set to 10 with 400 iterations, and the activation function is the split-sigmoid function.
After the first complex-valued neural network 11 is obtained through training, the complex value obtained by converting the received optical signal can be input to the first complex-valued neural network 11, so as to obtain the modulation format of the optical signal. After the second complex-valued neural network 12 is obtained through training, the complex value obtained by converting the received optical signal can be input to the second complex-valued neural network 12, so as to obtain the optical signal-to-noise ratio of the optical signal.
FIG. 3 shows the modulation format identification accuracy curves of the present application for QPSK, 8-QAM, 16-QAM, 32-QAM, 64-QAM, and 128-QAM at different optical signal-to-noise ratios. As shown in FIG. 3, QPSK achieves near-100% accuracy at all OSNR values; 8-QAM stays above 90% accuracy at an OSNR of 8 dB and reaches 100% at 12 dB; 16-QAM reaches 100% accuracy at 12 dB; 32-QAM stays above 90% at 12 dB; 64-QAM stays above 90% at 15 dB; and 128-QAM approaches 100% at 15 dB and reaches 94.4% at 12 dB. The scheme of the application thus shows excellent noise tolerance during modulation format identification, and the identification rate of each modulation format increases with the optical signal-to-noise ratio.
Fig. 4 is a graph of estimated optical signal-to-noise ratio values versus true optical signal-to-noise ratios for the present application. Taking the commonly used QPSK, 16-QAM and 64-QAM as examples, the OSNR ranges of the QPSK, 16-QAM and 64-QAM are set to be 11-18 dB, 16-23 dB and 21-28 dB respectively. The result is shown in fig. 4, where the straight line is the true OSNR value, and it can be seen that the estimated OSNR value of the present application is substantially consistent with the true OSNR value. The mean estimation errors for QPSK, 16-QAM and 64-QAM were 0.06dB, 0.05dB and 0.067dB, respectively.
According to the optical performance monitoring method, modulation format identification and/or estimation of an optical signal-to-noise ratio are/is realized by using the complex-valued neural network containing the L-BFGS algorithm, the memory occupation is small, the method is more suitable for small-scale optical module development, and the optical signal data is processed by adopting a picture processing mode similar to the neural network, so that the processed data can be applied to the complex-valued neural network, more characteristic information of original data can be reserved, and the optical performance monitoring method is high in optical performance monitoring precision, low in cost and good in noise tolerance.
FIG. 5 is a functional block diagram of a preferred embodiment of an optical performance monitoring apparatus according to the present application.
Referring to fig. 5, the optical performance monitoring apparatus 10 is applied to an electronic device. The optical performance monitoring device 10 may include one or more modules. For example, referring to fig. 5, the optical performance monitoring apparatus 10 may include a receiving module 101, a generating module 102, a first converting module 103, a second converting module 104, and a processing module 105.
It is understood that, corresponding to the embodiments of the optical performance monitoring method, the optical performance monitoring apparatus 10 may include some or all of the functional modules shown in fig. 5, and the functions of the modules 101 to 105 will be described in detail below. It should be noted that the same nouns and specific explanations thereof in the above embodiments of the optical performance monitoring method can also be applied to the following functional descriptions of the modules 101 to 105. For brevity and to avoid repetition, further description is omitted.
The receiving module 101 is configured to receive an optical signal and preprocess the received optical signal to obtain a complex signal.
In an embodiment, the receiving module 101 may include a photodetector 1011, a mixer 1012, balanced detection diodes 1013, an analog-to-digital conversion unit 1014, a dispersion compensation unit 1015, and a clock recovery unit 1016. The receiving module 101 may thus receive the optical signal, convert it into an electrical signal, perform analog-to-digital conversion on the electrical signal to obtain a digital signal, and perform dispersion compensation and internal clock recovery on the digital signal to obtain a complex signal, thereby recovering the original data signal transmitted by the optical transmitting end.
The generating module 102 is configured to generate a constellation based on the complex signal.
In an embodiment, the generating module 102 may extract a preset number of complex signal values from the complex signal to draw a ring-shaped constellation diagram. The preset number may be set according to actual requirements; for example, the generating module 102 may extract 10000 complex signal values from the complex signal to draw the ring-shaped constellation diagram.
The first conversion module 103 is configured to convert the constellation map into a gray scale map and extract a gray scale value from the gray scale map.
In one embodiment, the first conversion module 103 may convert the constellation diagram into a grayscale image of m × n pixels with grayscale values from 0 to 255, where m and n are positive integers whose values may be set according to actual optical signal processing requirements. For example, the first conversion module 103 may convert the constellation diagram into a grayscale image of 50 × 50 pixels and then extract 2500 grayscale values from it.
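One simple way to realize the constellation-to-grayscale conversion described above is a 2-D histogram over the I/Q plane, so that each pixel's grayscale value reflects the density of constellation points. The sketch below is an illustration under stated assumptions: the fixed plotting extent and the normalization of counts to the 0-255 range are choices not specified in the embodiment.

```python
import numpy as np

def constellation_to_gray(symbols, m=50, n=50, extent=1.5):
    """Rasterize complex symbols into an m x n grayscale image (values 0-255).

    symbols : 1-D array of complex signal values (the constellation points)
    extent  : half-width of the I/Q window covered by the image (assumed fixed)
    """
    counts, _, _ = np.histogram2d(
        symbols.imag, symbols.real,
        bins=(m, n), range=[[-extent, extent], [-extent, extent]],
    )
    if counts.max() > 0:
        counts = counts / counts.max() * 255.0  # scale densest bin to 255
    return np.rint(counts).astype(np.uint8)

# Stand-in data: 10000 noisy QPSK symbols as the recovered complex signal
rng = np.random.default_rng(1)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 10000)))
gray = constellation_to_gray(qpsk + 0.05 * (rng.normal(size=10000) + 1j * rng.normal(size=10000)))
values = gray.flatten()   # the 2500 grayscale values for a 50 x 50 image
```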
The second conversion module 104 is configured to convert the extracted gray-scale value into a complex value by using a preset conversion algorithm.
In one embodiment, the predetermined conversion algorithm may include the following equations (i), (ii):
θ = π*(x-a)/(b-a) …(i);
z = e^(i*θ) = cos θ + i*sin θ …(ii);
where z is the complex value, x is the grayscale value, θ is the phase angle, i is the imaginary unit, a is 0, and b is 255. Using equations (i) and (ii), the second conversion module 104 can convert the extracted 2500 grayscale values into the corresponding 2500 complex values.
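Equations (i) and (ii) map each grayscale value in [a, b] onto the upper half of the complex unit circle: the phase grows linearly with the grayscale value, spanning a range of π. A direct implementation might look like this (the function name is an illustrative choice):

```python
import numpy as np

def gray_to_complex(x, a=0.0, b=255.0):
    """Map grayscale values x in [a, b] to unit-magnitude complex values.

    Implements equations (i) and (ii):
        theta = pi * (x - a) / (b - a)
        z     = e^(i*theta) = cos(theta) + i*sin(theta)
    """
    theta = np.pi * (np.asarray(x, dtype=float) - a) / (b - a)
    return np.exp(1j * theta)

z = gray_to_complex([0, 127.5, 255])
# 0 -> 1, 127.5 -> i, 255 -> -1 (all on the unit circle)
```

With a = 0 and b = 255, a grayscale value of 0 maps to 1, a mid-gray value to i, and 255 to -1, so every converted value has unit magnitude and carries its information purely in the phase.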
The processing module 105 is configured to input the complex value obtained by the conversion into a complex value neural network established in advance, so as to obtain a modulation format and/or an optical signal-to-noise ratio of the optical signal.
In one embodiment, the pre-established complex-valued neural network may include a first complex-valued neural network and a second complex-valued neural network. The processing module 105 may input the complex values obtained by the second conversion module 104 into the pre-established first complex-valued neural network to obtain the modulation format of the optical signal, and input the same complex values into the pre-established second complex-valued neural network to obtain the optical signal-to-noise ratio of the optical signal. The optimization algorithm in both the first and the second complex-valued neural network is the L-BFGS algorithm.
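For concreteness, a forward pass of such a complex-valued network can be sketched as below. The layer sizes, the split-complex ReLU activation, and the magnitude-plus-softmax readout are all illustrative assumptions; the embodiment specifies only that the networks take the converted complex values as input (2500 of them for a 50 × 50 image) and that their optimization algorithm is L-BFGS.

```python
import numpy as np

def crelu(z):
    """Split-complex activation: ReLU applied separately to real and
    imaginary parts (one common choice for complex-valued networks; the
    patent does not specify the activation)."""
    return np.maximum(z.real, 0) + 1j * np.maximum(z.imag, 0)

def forward(z, W1, b1, W2, b2):
    """Minimal complex-valued network: complex input layer -> one hidden
    layer -> fully connected output, with magnitudes of the complex output
    fed to a softmax over candidate modulation-format classes."""
    h = crelu(z @ W1 + b1)
    logits = np.abs(h @ W2 + b2)          # real-valued class scores
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(2)
z = np.exp(1j * np.pi * rng.random(2500))                       # 2500 complex inputs
W1 = (rng.normal(size=(2500, 32)) + 1j * rng.normal(size=(2500, 32))) / 50
b1 = np.zeros(32, complex)
W2 = (rng.normal(size=(32, 6)) + 1j * rng.normal(size=(32, 6))) / 8
b2 = np.zeros(6, complex)
p = forward(z, W1, b1, W2, b2)   # probabilities over 6 candidate formats
```

For training, the real and imaginary parts of all weights can be packed into a single real vector and optimized with a limited-memory quasi-Newton routine such as `scipy.optimize.minimize(..., method="L-BFGS-B")`, which matches the small-memory motivation stated in the embodiment.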
According to the optical performance monitoring apparatus described above, modulation format identification and/or optical signal-to-noise ratio estimation is achieved with a complex-valued neural network whose optimization algorithm is the L-BFGS algorithm, which has a small memory footprint and is therefore well suited to small-scale optical module development. Moreover, because the optical signal data is processed in an image-like manner before being fed to the complex-valued neural network, more feature information of the original data is retained. The apparatus thus achieves high optical performance monitoring accuracy, low cost, and good noise tolerance.
FIG. 6 is a diagram of an electronic device according to a preferred embodiment of the present application.
The electronic device 100 comprises a memory 20, a processor 30 and a computer program 40, such as the optical performance monitoring apparatus 10, stored in the memory 20 and executable on the processor 30. The processor 30, when executing the computer program 40, implements the steps of the above-described optical performance monitoring method embodiments, such as the steps S11-S15 shown in fig. 1. Alternatively, the processor 30 executes the computer program 40 to implement the functions of the modules in the above-mentioned optical performance monitoring apparatus embodiment, such as the modules 101 to 105 in fig. 5.
In one embodiment, some of the modules (101-105) shown in FIG. 5 may be executed by the processor 30, while the others may be implemented by other hardware (e.g., photodetectors, mixers, balanced detection diodes, analog-to-digital converters, etc.). For example, modules 102-105 may be executed by the processor 30.
Illustratively, the computer program 40 may be partitioned into one or more modules/units that are stored in the memory 20 and executed by the processor 30 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 40 in the electronic device 100. For example, the computer program 40 may be divided into a receiving module 101, a generating module 102, a first converting module 103, a second converting module 104 and a processing module 105 in fig. 5. The specific functions of the modules are referred to the above embodiments.
The electronic device 100 may be a desktop computer, an optical communication device, a server, or a similar computing device. Those skilled in the art will appreciate that the schematic diagram is merely an example of the electronic device 100 and does not limit it; the device may include more or fewer components than those shown, combine certain components, or use different components. For example, the electronic device 100 may also include input/output devices, network access devices, buses, and the like.
The processor 30 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor 30 may be any conventional processor. The processor 30 is the control center of the electronic device 100 and connects the various parts of the whole device through various interfaces and lines.
The memory 20 may be used to store the computer program 40 and/or the modules/units; the processor 30 implements the various functions of the electronic device 100 by running or executing the computer program and/or modules/units stored in the memory 20 and calling data stored in the memory 20. The memory 20 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created during use of the electronic device 100. In addition, the memory 20 may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
If the integrated modules/units of the electronic device 100 are implemented in the form of software functional units and sold or used as separate products, they may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the contents of the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
In the several embodiments provided in the present application, it should be understood that the disclosed electronic device and method may be implemented in other ways. For example, the above-described embodiments of the electronic device are merely illustrative, and for example, the division of the units is only one logical function division, and there may be other division ways in actual implementation.
In addition, functional units in the embodiments of the present application may be integrated into the same processing unit, or each unit may exist alone physically, or two or more units are integrated into the same unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present application and not for limiting, and although the present application is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made on the technical solutions of the present application without departing from the spirit and scope of the technical solutions of the present application.

Claims (10)

1. A method of monitoring optical performance, comprising:
preprocessing the received optical signals to obtain complex signals;
generating a constellation diagram based on the complex signal;
converting the constellation diagram into a gray level diagram, and extracting a gray level value from the gray level diagram;
converting the extracted gray value into a complex value by using a preset conversion algorithm;
and inputting the complex value into a pre-established complex value neural network to obtain the modulation format and/or the optical signal to noise ratio of the optical signal.
2. The optical performance monitoring method of claim 1, wherein the pre-processing the received optical signal to obtain a complex signal comprises:
converting the received optical signal into an electrical signal;
performing analog-to-digital conversion processing on the electric signal to obtain a digital signal;
and carrying out dispersion compensation and internal clock recovery processing on the digital signal to obtain the complex signal.
3. The optical performance monitoring method of claim 1 or 2, wherein the generating a constellation diagram based on the complex signal comprises:
and extracting a preset number of complex signal values from the complex signals to draw a ring-shaped constellation diagram.
4. The optical performance monitoring method of claim 1, wherein the converting the constellation diagram into a gray scale diagram and extracting gray scale values from the gray scale diagram comprises:
and converting the constellation diagram into a gray diagram of m × n pixels, and extracting m × n gray values from the gray diagram, wherein m and n are positive integers greater than 1.
5. The method according to claim 1, wherein the pre-established complex neural network comprises a first complex neural network and a second complex neural network, and the inputting the complex value into the pre-established complex neural network to obtain the modulation format and/or the optical signal-to-noise ratio of the optical signal comprises:
inputting the complex value to the first complex value neural network to obtain a modulation format of the optical signal;
and inputting the complex value to the second complex value neural network to obtain the optical signal to noise ratio of the optical signal.
6. The method for monitoring optical performance of claim 5, wherein the optimization algorithms in the first complex-valued neural network and the second complex-valued neural network are each the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm.
7. The method according to claim 5, wherein the gray scale map has m x n pixels, the first complex-valued neural network is trained from sample optical signals in a plurality of modulation formats and the m x n complex values corresponding to the sample optical signals, the first complex-valued neural network comprises a first input layer, a first hidden layer and a first fully-connected output layer, and the first input layer comprises m x n neurons.
8. The method according to claim 5, wherein the gray scale map has m x n pixels, the second complex-valued neural network is trained from sample optical signals in a plurality of optical signal-to-noise ratio intervals and the m x n complex values corresponding to the sample optical signals, the second complex-valued neural network comprises a second input layer, a second hidden layer and a second fully-connected output layer, and the second input layer comprises m x n neurons.
9. A computer readable storage medium storing computer instructions that, when run on an electronic device, cause the electronic device to perform the optical performance monitoring method of any one of claims 1-8.
10. An electronic device comprising a processor and a memory, the memory storing instructions, wherein the processor is configured to invoke the instructions in the memory such that the electronic device performs the optical performance monitoring method of any one of claims 1 to 8.
CN202110744536.4A 2021-07-01 2021-07-01 Optical performance monitoring method, electronic device and computer readable storage medium Pending CN113517926A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110744536.4A CN113517926A (en) 2021-07-01 2021-07-01 Optical performance monitoring method, electronic device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113517926A true CN113517926A (en) 2021-10-19

Family

ID=78066372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110744536.4A Pending CN113517926A (en) 2021-07-01 2021-07-01 Optical performance monitoring method, electronic device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113517926A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115549780A (en) * 2022-08-30 2022-12-30 北京邮电大学 Method and device for monitoring performance parameters of optical communication network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090028554A1 (en) * 2005-10-13 2009-01-29 National Ict Australia Limited Method and apparatus for sampled optical signal monitoring
CN102035609A (en) * 2010-12-15 2011-04-27 南京邮电大学 Signal blind detection method based on a plurality of continuous unity feedback neural networks
CN110324080A (en) * 2019-06-28 2019-10-11 北京邮电大学 A kind of method, apparatus of optical information networks, electronic equipment and medium
WO2020149953A1 (en) * 2019-01-14 2020-07-23 Lightelligence, Inc. Optoelectronic computing systems
US20200327397A1 (en) * 2019-04-12 2020-10-15 Motorola Solutions, Inc. Systems and methods for modulation classification of baseband signals using multiple data representations of signal samples



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211019