CN113766669A - Large-scale random access method based on deep learning network - Google Patents


Info

Publication number
CN113766669A
CN113766669A (application CN202111323583.8A; granted publication CN113766669B)
Authority
CN
China
Prior art keywords: neural network, user, matrix, random access, algorithm
Prior art date
Legal status: Granted (status listed by Google Patents; not a legal conclusion)
Application number
CN202111323583.8A
Other languages
Chinese (zh)
Other versions
CN113766669B (en)
Inventors
黄川 (Huang Chuan)
崔曙光 (Cui Shuguang)
黄坚豪 (Huang Jianhao)
Current Assignee
Chinese University of Hong Kong Shenzhen
Original Assignee
Chinese University of Hong Kong Shenzhen
Priority date / Filing date / Publication date
Application filed by Chinese University of Hong Kong Shenzhen
Priority to CN202111323583.8A
Publication of CN113766669A
Application granted
Publication of CN113766669B
Legal status: Active


Classifications

    • H04W74/0833 — Electricity; electric communication technique; wireless communication networks; wireless channel access; non-scheduled access, e.g. ALOHA; random access procedures, e.g. with 4-step access
    • G06N3/045 — Physics; computing arrangements based on specific computational models; neural networks; architecture; combinations of networks
    • G06N3/047 — Probabilistic or stochastic networks
    • G06N3/048 — Activation functions
    • G06N3/08 — Learning methods


Abstract

The invention discloses a large-scale random access method based on a deep learning network, comprising the following steps: constructing a system model for large-scale random access; constructing a deep-neural-network model for detecting each user's transmitted signal and judging user access; training the neural network and updating its parameters; and detecting the users' transmitted signals with the trained and updated network, thereby judging whether each user has successfully accessed. The proposed large-scale random access scheme provides a low-complexity decoding algorithm and effectively improves communication performance. In particular, unlike conventional algorithms, the neural-network-based detection algorithm requires no prior knowledge of the channel statistics, greatly reduces system overhead, and is better suited to practical communication systems. In addition, the proposed algorithm is more robust than conventional algorithms: when prior knowledge of the system is incomplete, it still provides better performance.

Description

Large-scale random access method based on deep learning network
Technical Field
The invention relates to a deep learning network, in particular to a large-scale random access method based on the deep learning network.
Background
With the rapid development of communication technology, base stations are ever more widely deployed in daily life and are increasingly required to admit a massive number of users and support their uplink transmissions; the users' access method is therefore critical.
The traditional access strategy and data transmission strategy are independent and split into two steps: first, active users are detected; then channel estimation and data detection are performed for the detected active users. This two-stage strategy requires users to complete activity detection and channel estimation via pilots before data transmission, which incurs large delay and performance overhead. Such a communication mode therefore struggles to meet the demands of high energy efficiency and low communication delay in large-scale scenarios. In addition, conventional access algorithms usually need to know the statistical properties of the channel and of user activity, which is difficult to obtain in practice.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a large-scale random access method based on a deep learning network that offers a low-complexity decoding scheme and effectively improves communication performance.
The purpose of the invention is realized by the following technical scheme. A large-scale random access method based on a deep learning network comprises the following steps:
S1, constructing a system model for large-scale random access;
S2, constructing a deep-neural-network model for detecting each user's transmitted signal x_n and judging user access;
S3, training the neural network and updating its parameters;
S4, detecting the users' transmitted signals with the trained and updated network, thereby judging whether each user accessed successfully.
Further, the step S1 includes the following sub-steps:
S101. Consider a communication system with N single-antenna users and one receiver equipped with M antennas. Each user accesses the receiver randomly, i.e. transmits information to the receiver with a certain probability in each transmission slot. A random variable λ_n ∈ {0, 1} describes the activity of user n; in each slot λ_n satisfies
P(λ_n = 1) = g,  P(λ_n = 0) = 1 − g,
where g is the users' sparsity (activity) parameter.
S102. Each user adopts a grant-free random access scheme. Before transmission, each user n is pre-assigned a dedicated pilot sequence a_n ∈ C^K, where K is the pilot length and C^K denotes the set of complex sequences of length K. The elements of each pilot are drawn i.i.d. from a complex Gaussian distribution, a_n ~ CN(0, (1/K) I_K), where CN(0, (1/K) I_K) denotes the zero-mean complex Gaussian distribution with covariance (1/K) I_K, and I_K is the identity matrix of dimension K. The pilot sequences of all users are stored at the receiving end.
S103. In each transmission slot, every active user synchronously transmits its pilot sequence and information x_n to the receiving end. Letting A = [a_1, …, a_N], the received signal admits the matrix expression
Y = A X + W,
where W is Gaussian noise whose elements are i.i.d. zero-mean Gaussian with variance σ²; h_n ∈ C^M denotes the channel vector of length M from user n to the receiving end, unknown at the receiving end; the n-th row of X is λ_n x_n h_n^T; and x_n is the transmitted signal of user n, generated from the following codebook:
x_n ∈ {q_1, …, q_{2^{R_n}}} ∪ {0},
where q_i is the i-th modulation codeword, R_n is the transmission rate of user n, and x_n = 0 indicates that user n is inactive (λ_n = 0).
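The system model of S101 to S103 can be sketched numerically. This is a hedged reconstruction: the symbol names (N users, M antennas, pilot length K, activity probability g, pilots a_n, channels h_n, codewords x_n) are inferred from the surrounding text, since the original formula images are not reproduced here, and the QPSK codebook and noise level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K, g = 40, 8, 30, 0.1   # users, antennas, pilot length, activity prob.

# i.i.d. complex Gaussian pilots, one column per user (S102)
A = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2 * K)

lam = rng.random(N) < g                                   # activity indicators
codebook = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = np.where(lam, codebook[rng.integers(0, 4, N)], 0)     # x_n = 0 if inactive
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)

X = x[:, None] * H            # row n of X: lambda_n * x_n * h_n^T
W = 0.01 * (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M)))
Y = A @ X + W                 # received signal, K x M (S103)
print(Y.shape)
```

Only a handful of rows of X are nonzero, which is the sparsity that the detection network in S2 exploits.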
Further, the step S2 includes the following sub-steps:
S201. Initialization: input the received signal Y, the users' sparsity parameter g, and each user's rate R_n; initialize X^0 = 0 and Z^0 = Y.
S202. The received signal Y is first fed into the designed neural-network algorithm for interference cancellation. The algorithm is based on a multilayer structure; layer t computes
Z^t = Y − A X^t + (N/K) Z^{t−1} ⟨η′_{t−1}⟩,
X^{t+1} = η_t(X^t + A^H Z^t),
where A^H is the conjugate transpose of the matrix A, t is an integer greater than zero, and the maximum number of layers is set to T, i.e. t = 1, …, T. The denoiser η_t acts on the signal of each user n (the n-th section of its input), η′ denotes the first derivative of the denoiser function, and ⟨·⟩ denotes the average over its entries. The denoiser is implemented by a deep neural network, with Θ_t denoting the neural-network parameters of the denoiser η_t.
The denoiser η_t is designed as follows. First, the complex matrix R^t = X^t + A^H Z^t is converted into a real matrix R̃^t ∈ R^{N×2M}, where R^{N×2M} denotes the set of real matrices of dimension N × 2M. The conversion is
R̃_n^t = [Re(R_n^t), Im(R_n^t)],
where R_n^t ∈ C^M is the n-th section of the matrix R^t and R^{2M} denotes the set of real vectors of dimension 2M. The matrix is then fed into the following neural network:
f_2 ∘ f_1(R̃^t),
where f_2 ∘ f_1 denotes the composition of two neural networks; f_1 and f_2 are convolutional neural networks whose number of filters is a design parameter, with kernel size (1, 1) and stride (1, 1). A ReLU activation function is appended at the end of the convolutional networks f_1 and f_2. A soft-shrinkage function η_θ is then applied to the n-th slice of the network output:
η_θ(r) = sign(r) · max(|r| − θ, 0),
where the shrinkage parameter θ is contained in the parameter set Θ_t. Finally, the result is converted back into a complex matrix. After step S202, the output signal is X^T; let x̃_n denote its n-th section.
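A minimal numerical sketch of the layered interference-cancellation structure in S202, assuming the standard model Y = A X + W. The soft-shrinkage function stands in directly for the patent's learned convolutional denoiser, and the correction term built from the denoiser's first derivative is omitted; the step size mu and threshold theta are illustrative values, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, M = 30, 40, 8          # pilot length, users, antennas (illustrative)

def soft_shrink(R, theta):
    # Complex soft-shrinkage: shrink each entry's magnitude by theta.
    mag = np.maximum(np.abs(R), 1e-12)
    return np.where(mag > theta, (1 - theta / mag) * R, 0)

def layer(Y, A, X_t, mu, theta):
    # One layer t: gradient step on ||Y - A X||^2, then the denoiser.
    # (The patent additionally learns the denoiser and adds a correction
    # term from its first derivative; both are omitted in this sketch.)
    return soft_shrink(X_t + mu * A.conj().T @ (Y - A @ X_t), theta)

# Synthetic problem: 4 active users out of N, noiseless for simplicity.
A = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2 * K)
X_true = np.zeros((N, M), complex)
active = rng.choice(N, 4, replace=False)
X_true[active] = rng.standard_normal((4, M))
Y = A @ X_true

mu = 1.0 / np.linalg.norm(A, 2) ** 2     # step <= 1/L guarantees descent
X = np.zeros_like(X_true)
for t in range(50):
    X = layer(Y, A, X, mu, theta=0.02)

residual = np.linalg.norm(Y - A @ X)
print(residual < np.linalg.norm(Y))      # improved over the zero estimate
```

After a few dozen layers the rows with the largest energy coincide with the active users, which is what the per-user posterior network of S203 consumes.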
S203. A neural network is used to compute the posterior probability of x_n.
First, each complex row vector x̃_n ∈ C^M is converted into a real vector v_n ∈ R^{2M}, that is,
v_n = [Re(x̃_n), Im(x̃_n)],
where Re(x̃_n) and Im(x̃_n) are the vectors formed by the real parts and the imaginary parts of the M elements of x̃_n, respectively. The resulting vector v_n is then fed into a neural network, i.e.
u_n = f_4(f_3(v_n)),
where f_3 and f_4 are fully connected neural-network layers; a ReLU function and a Softmax function are appended at the end of the networks f_3 and f_4, respectively, and Φ denotes the parameters of this neural network.
Finally, based on the obtained output u_n, the optimal posterior probability for detection is computed: the posterior of each candidate codeword is read off from u_n through the one-hot (thermally encoded) codeword c(x_n) of the transmitted information. If x_n = q_i, then c(x_n) is the i-th one-hot codeword; when x_n = 0, then c(x_n) is a zero vector.
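The posterior computation in S203 can be sketched with untrained stand-in weights. This is an illustrative reconstruction only: the layer widths, the class count, and the weight values are assumptions, since the patent specifies them in formula images that are not reproduced here. The complex row estimate is stacked into [Re; Im], passed through a fully connected layer with ReLU and a fully connected layer with Softmax, yielding a posterior over the 2^R modulation codewords plus one "inactive" class.

```python
import numpy as np

rng = np.random.default_rng(1)
M, n_classes = 8, 5          # 2^R codewords + 1 inactive class (R = 2 assumed)

def relu(v):
    return np.maximum(v, 0)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Untrained stand-in weights for the two fully connected layers of S203
# (the hidden width 32 is my guess, not a value from the patent).
W1, b1 = rng.standard_normal((32, 2 * M)) * 0.1, np.zeros(32)
W2, b2 = rng.standard_normal((n_classes, 32)) * 0.1, np.zeros(n_classes)

def posterior(x_row):
    # Complex row estimate -> stacked real vector -> FC+ReLU -> FC+Softmax
    v = np.concatenate([x_row.real, x_row.imag])
    return softmax(W2 @ relu(W1 @ v + b1) + b2)

p = posterior(rng.standard_normal(M) + 1j * rng.standard_normal(M))
print(p.sum())   # a valid probability vector sums to 1
```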
s204, after the posterior probability is obtained, the user emission information is detected by a method of maximizing the posterior probability, namely,
Figure 874007DEST_PATH_IMAGE097
to obtain
Figure 675872DEST_PATH_IMAGE098
Then, the transmission information is obtained through the corresponding relationship of the thermal coding in step S203
Figure 804365DEST_PATH_IMAGE099
S205, passing the detected information
Figure 215624DEST_PATH_IMAGE099
Thus, whether the user is successfully accessed is judged: when in use
Figure 949225DEST_PATH_IMAGE100
Then represents the usernAnd successfully accessing the receiving end.
Further, the step S3 includes the following sub-steps:
S301. Initialization: input the pilot matrix A, the parameters Θ and Φ, and the training samples {(Y^(j), x_n^(j))}, j = 1, …, B, where Y^(j) is the received signal of the j-th sample, x_n^(j) is the codeword transmitted by user n in the j-th sample, B is the total number of samples, and the learning rate is a positive real number.
S302. The sample Y^(j) is fed into the neural network of S202, whose output for user n is the real signal v_n^(j); v_n^(j) is then fed into the neural network of S203, which outputs u_n^(j).
S303. v_n^(j), u_n^(j), and the one-hot codewords c(x_n^(j)) are used to update the neural-network parameters Θ and Φ.
First, a loss function for training the neural network is designed; it comprises three terms, where [u_n^(j)]_i denotes the i-th element of the vector u_n^(j) and x̄_n^(j) is a transmitted codeword obtained by randomly scrambling a training sample. One of the terms is produced by an auxiliary neural network with parameters Ψ, designed as the composition
f_7(f_6(f_5(·))),
where f_5, f_6, and f_7 are fully connected neural networks; an ELU function follows each of these networks as the activation function.
For each training step, the input samples enter the neural network to obtain v_n^(j) and u_n^(j); the loss function is computed, and the parameters {Θ, Φ, Ψ} are then updated with back-propagation and the "Ada" optimizer. After a fixed number of updates, the output is the set of updated neural-network parameters {Θ*, Φ*}.
S304. The updated neural-network parameters {Θ*, Φ*} are used in the algorithm of step S2; the updated network obtains more accurate transmitted information x̂_n, making random access more accurate.
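The training procedure of S301 to S303 can be sketched end to end with a stand-in network and synthetic data. This is a hedged illustration: plain full-batch gradient descent replaces the back-propagation-plus-"Ada" (presumably Adam) optimizer named above, a single cross-entropy term replaces the patent's three-term loss, and all sizes and data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
M, H, C, B = 8, 32, 5, 64    # input 2M, hidden width, classes, batch size

W1 = rng.standard_normal((H, 2 * M)) * 0.1; b1 = np.zeros(H)
W2 = rng.standard_normal((C, H)) * 0.1;     b2 = np.zeros(C)

def forward(V):                      # V: (B, 2M) real feature vectors
    A1 = np.maximum(V @ W1.T + b1, 0)                 # FC + ReLU
    logits = A1 @ W2.T + b2                           # FC
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)                 # Softmax
    return A1, P

# Synthetic training pairs: features and one-hot codeword labels, standing
# in for the sampled received signals and transmitted codewords of S301.
V = rng.standard_normal((B, 2 * M))
labels = rng.integers(0, C, B)
Y1h = np.eye(C)[labels]

lr, losses = 0.1, []
for step in range(200):
    A1, P = forward(V)
    losses.append(-np.mean(np.sum(Y1h * np.log(P + 1e-12), axis=1)))
    G = (P - Y1h) / B                # softmax cross-entropy gradient
    dW2 = G.T @ A1; db2 = G.sum(0)
    dA1 = G @ W2; dA1[A1 <= 0] = 0   # back-propagate through ReLU
    dW1 = dA1.T @ V; db1 = dA1.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

print(losses[0] > losses[-1])        # loss decreases over training
```

After a fixed number of updates the weights play the role of {Θ*, Φ*} in S304.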
The invention has the following beneficial effects. The proposed large-scale random access scheme provides a low-complexity decoding algorithm and effectively improves communication performance. Specifically, compared with conventional algorithms, the neural-network-based detection algorithm requires no prior statistical knowledge of the channel, greatly reduces system overhead, and is better suited to practical communication systems. In addition, the proposed algorithm is more robust than conventional algorithms: when the system's prior knowledge is incomplete, it still provides better performance, such as lower error rates.
Drawings
Fig. 1 is a schematic diagram of a large-scale random access channel;
Fig. 2 is a flow chart of the method of the present invention;
Fig. 3 is a schematic diagram of the neural network algorithm based on a multilayer structure;
Fig. 4 is a schematic diagram of the design principle of the denoiser;
Fig. 5 is a comparison of the algorithms in the embodiment when the numbers of users are (4, 8, 28) and the sparsities are (0.2, 0.1, 0.2);
Fig. 6 is a comparison of the algorithms in the embodiment when the numbers of users are (8, 20, 12) and the sparsities are (0.1, 0.2, 0.3);
Fig. 7 is a comparison of the algorithms in the embodiment when the numbers of users are (4, 8, 28) and the sparsities are (0.1, 0.2, 0.3).
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
Aiming at the problem of large-scale random access in 5G communication, the invention designs a random access algorithm based on a deep learning network. Consider the large-scale random access channel shown in Fig. 1: a base station must simultaneously support uplink transmission from a large number of users. At any transmission instant only a few users are active and transmit information to the base station, while the other users are dormant. As shown in Fig. 2, the specific method includes the following steps:
s1, constructing a system model based on large-scale random access:
s101. for
Figure 806224DEST_PATH_IMAGE135
Communication system comprising a single antenna subscriber and a receiver, each subscriber being randomly connected to the receiver, i.e. transmitting information to the receiver with a certain probability in each transmission time slot, wherein the receiver is provided with
Figure 363108DEST_PATH_IMAGE136
A root antenna; by random variables
Figure 98982DEST_PATH_IMAGE137
To describe the usernThe active nature of the slot, at each time slot,
Figure 248948DEST_PATH_IMAGE137
satisfies the following conditions:
Figure 121089DEST_PATH_IMAGE138
s102, each user adopts a random access scheme based on free access; each user is pre-assigned a dedicated pilot sequence prior to transmission
Figure 583294DEST_PATH_IMAGE139
Wherein
Figure 790153DEST_PATH_IMAGE140
For pilot length, symbols
Figure 996007DEST_PATH_IMAGE141
Representative length of
Figure 722654DEST_PATH_IMAGE140
A set of complex sequences of (a); the elements of each pilot being derived from an independent identically distributed gaussian distribution, i.e.
Figure 621340DEST_PATH_IMAGE142
Wherein the symbol
Figure 551381DEST_PATH_IMAGE143
Represents a mean of 0 and a variance of
Figure 295346DEST_PATH_IMAGE144
The complex gaussian distribution of (a) is,
Figure 407659DEST_PATH_IMAGE145
representative dimension of
Figure 195355DEST_PATH_IMAGE140
The identity matrix of (1); storing the pilot sequences of all users in a receiving end;
s103, each active user synchronously transmits pilot frequency sequence and information in each transmission time slot
Figure 127539DEST_PATH_IMAGE146
To the receiving end, the received signal is represented as
Figure 675195DEST_PATH_IMAGE147
Order to
Figure 124237DEST_PATH_IMAGE148
Obtaining a matrix expression of the received signal,
Figure 99147DEST_PATH_IMAGE149
wherein
Figure 518627DEST_PATH_IMAGE150
Is Gaussian noise, each element satisfies the conditions that the mean value of independent equal distribution is zero and the variance is
Figure 119241DEST_PATH_IMAGE151
(ii) a gaussian distribution of;
Figure 674987DEST_PATH_IMAGE152
representing a usernThe channel parameters to the receiving end are,
Figure 86377DEST_PATH_IMAGE153
to indicate a length ofMAnd is unknown at the receiving end, is set
Figure 478307DEST_PATH_IMAGE154
For the usernThe transmission signal of (1); in which the signal is transmitted
Figure 102186DEST_PATH_IMAGE154
Is generated from the following codebook:
Figure 43597DEST_PATH_IMAGE155
wherein
Figure 343997DEST_PATH_IMAGE156
Is the first
Figure 738070DEST_PATH_IMAGE157
A number of modulation code words is modulated,
Figure 165640DEST_PATH_IMAGE158
is a usernThe rate of transmission of (a) is,
Figure 709360DEST_PATH_IMAGE159
representing the user as inactive, i.e. inactive
Figure 196973DEST_PATH_IMAGE160
S2, constructing a transmitting signal for a user by utilizing a deep neural network
Figure 78342DEST_PATH_IMAGE161
A model for detection and user access judgment;
the step S2 includes:
s201, initialization: inputting a received signal
Figure 293291DEST_PATH_IMAGE162
Sparse parameters of usersgRate per user
Figure 943716DEST_PATH_IMAGE163
. Initialization order
Figure 602230DEST_PATH_IMAGE164
S202, firstly, receiving signals
Figure 970894DEST_PATH_IMAGE162
Inputting into a designed neural network algorithm for interference elimination, wherein the neural network algorithm is based on a multilayer structuretThe calculation process of the layer is as follows:
Figure 491000DEST_PATH_IMAGE165
wherein the content of the first and second substances,
Figure 995930DEST_PATH_IMAGE166
Figure 825346DEST_PATH_IMAGE167
is a matrix
Figure 930574DEST_PATH_IMAGE168
The conjugate transpose of (a) is performed,tis an integer greater than zero, and the maximum number of layers is set to
Figure 238059DEST_PATH_IMAGE169
I.e. by
Figure 863075DEST_PATH_IMAGE170
Figure 611195DEST_PATH_IMAGE171
Figure 688872DEST_PATH_IMAGE172
Representing the action of a noise remover onnThe column signals are then transmitted to the display device,
Figure 65627DEST_PATH_IMAGE173
representing the first derivative of the denoiser function; the design of the denoiser will be implemented by a deep neural network,
Figure 528838DEST_PATH_IMAGE174
representative denoiser
Figure 965636DEST_PATH_IMAGE175
A neural network parameter of (a);
noise removing device
Figure 530609DEST_PATH_IMAGE176
The design of (2) is as follows: firstly, a complex matrix is formed
Figure 930629DEST_PATH_IMAGE177
Conversion into a real number matrix
Figure 530238DEST_PATH_IMAGE178
Wherein
Figure 872357DEST_PATH_IMAGE179
Representative dimension of
Figure 908315DEST_PATH_IMAGE180
The conversion mode of the real number matrix set is as follows:
Figure 626873DEST_PATH_IMAGE181
wherein
Figure 815408DEST_PATH_IMAGE182
Wherein
Figure 99670DEST_PATH_IMAGE183
Representative dimension ofMIs a matrix
Figure 639235DEST_PATH_IMAGE184
To (1) anA section matrix; the matrix is then input into the following neural network:
Figure 895904DEST_PATH_IMAGE185
wherein the content of the first and second substances,
Figure 204526DEST_PATH_IMAGE186
represents a combination of two neural networks;
Figure 137716DEST_PATH_IMAGE187
is a convolutional neural network with a number of filters of
Figure 898998DEST_PATH_IMAGE188
The kernel size is(1,1), the step size is (1, 1);
in a convolutional network
Figure 959358DEST_PATH_IMAGE189
And
Figure 607639DEST_PATH_IMAGE190
adding Relu function as an activation function at the end of (1); order to
Figure 728042DEST_PATH_IMAGE191
Figure 976621DEST_PATH_IMAGE192
Is a soft shrinkage function:
Figure 89939DEST_PATH_IMAGE193
wherein the content of the first and second substances,
Figure 841995DEST_PATH_IMAGE194
the matrix being a matrix
Figure 133299DEST_PATH_IMAGE195
To (1) anThe number of the slices is one,
Figure 351397DEST_PATH_IMAGE196
is that the puncturing parameter is included in the parameter set
Figure 487980DEST_PATH_IMAGE197
Performing the following steps; finally, will
Figure 609389DEST_PATH_IMAGE198
Conversion into a complex matrix
Figure 71594DEST_PATH_IMAGE199
After the step of S202, the signal is outputted
Figure 763607DEST_PATH_IMAGE200
Let us order
Figure 720193DEST_PATH_IMAGE201
S203. in this step, we will use neural network to calculate the basis
Figure 446840DEST_PATH_IMAGE202
The posterior probability of (d). First, each complex phasor
Figure 814368DEST_PATH_IMAGE203
Conversion into real number vector
Figure 508523DEST_PATH_IMAGE204
That is to say that,
Figure 986909DEST_PATH_IMAGE205
wherein
Figure 581445DEST_PATH_IMAGE206
Representative vector
Figure 385453DEST_PATH_IMAGE207
To middle
Figure 317637DEST_PATH_IMAGE208
Element to element
Figure 114560DEST_PATH_IMAGE209
A vector of the composition of the individual elements,
Figure 81379DEST_PATH_IMAGE210
and
Figure 790709DEST_PATH_IMAGE211
representing real and imaginary numbers, respectively. Then, we will get the vector
Figure 960922DEST_PATH_IMAGE212
The input neural network, i.e.,
Figure 46690DEST_PATH_IMAGE213
wherein
Figure 868015DEST_PATH_IMAGE214
And
Figure 997514DEST_PATH_IMAGE215
is a fully connected neural network layer, the number of neurons is respectively
Figure 904290DEST_PATH_IMAGE216
And
Figure 793749DEST_PATH_IMAGE217
. The Relu function and the Softmax function are respectively added to the network
Figure 469581DEST_PATH_IMAGE214
And
Figure 534095DEST_PATH_IMAGE215
at the end of the time period (c) of (c),
Figure 928167DEST_PATH_IMAGE218
are parameters of the neural network. Finally, based on the obtained output
Figure 355738DEST_PATH_IMAGE219
We can calculate the optimal a posteriori probability for detection, i.e.,
Figure 135344DEST_PATH_IMAGE220
wherein
Figure 622957DEST_PATH_IMAGE221
Is to transmit information
Figure 238746DEST_PATH_IMAGE222
The thermally encoded codeword of (a); if it is
Figure 220740DEST_PATH_IMAGE223
Then, then
Figure 871164DEST_PATH_IMAGE224
When is coming into contact with
Figure 529678DEST_PATH_IMAGE225
Then, then
Figure 882031DEST_PATH_IMAGE226
Wherein
Figure 916983DEST_PATH_IMAGE227
Representative length ofnThe zero vector of (2).
S204, after the posterior probability is obtained, the user emission information is detected by a method of maximizing the posterior probability, namely,
Figure 156335DEST_PATH_IMAGE228
to obtain
Figure 999132DEST_PATH_IMAGE229
Then, through the correspondence relationship of the thermal coding in S203, we can obtain the transmission information
Figure 323934DEST_PATH_IMAGE230
S205, passing the detected information
Figure 162577DEST_PATH_IMAGE230
Thus, whether the user is successfully accessed is judged: when in use
Figure 240124DEST_PATH_IMAGE231
Then represents the usernAnd successfully accessing the receiving end.
Step S2 describes the specific steps of the neural network algorithm, however, the parameters of the neural network need to be trained before they can be used. To this end, we describe in detail how to train the neural network and update the parameters in S3.
The step S3 includes the following sub-steps:
s301, initializing and inputting
Figure 240441DEST_PATH_IMAGE232
Parameter of
Figure 803271DEST_PATH_IMAGE233
And
Figure 914447DEST_PATH_IMAGE234
training sample
Figure 393970DEST_PATH_IMAGE235
Wherein, in the step (A),
Figure 814456DEST_PATH_IMAGE236
is as followsjThe received signal at the time of one sample,
Figure 379429DEST_PATH_IMAGE237
represents the firstjUnder the samplenThe transmitted code words of the individual users are,Bis the total number of samples, positive real number
Figure 294296DEST_PATH_IMAGE238
S302, sampling
Figure 641707DEST_PATH_IMAGE236
The input enters the neural network in S202,
Figure 983827DEST_PATH_IMAGE239
representing the output of a neural networknA line real number signal; then will be
Figure 504938DEST_PATH_IMAGE239
Neural network in input S203, output
Figure 738342DEST_PATH_IMAGE240
S303. utilize
Figure 192457DEST_PATH_IMAGE239
Figure 705478DEST_PATH_IMAGE240
And thermally encoded code word
Figure 464618DEST_PATH_IMAGE241
To neural network parameters
Figure 721287DEST_PATH_IMAGE242
And
Figure 29908DEST_PATH_IMAGE234
updating is carried out;
firstly, designing a loss function for training a neural network, wherein the loss function comprises three aspects:
Figure 963098DEST_PATH_IMAGE243
Figure 724381DEST_PATH_IMAGE244
Figure 784740DEST_PATH_IMAGE245
where
Figure 695671DEST_PATH_IMAGE246
denotes the i-th element of the vector
Figure 550495DEST_PATH_IMAGE247
, and
Figure 799074DEST_PATH_IMAGE248
is a transmitted codeword obtained by randomly scrambling the training samples; the function
Figure 646813DEST_PATH_IMAGE249
is a neural network with parameters
Figure 930026DEST_PATH_IMAGE250
, designed as follows:
Figure 955751DEST_PATH_IMAGE251
where
Figure 691626DEST_PATH_IMAGE252
is a fully connected neural network whose number of nodes is
Figure 844521DEST_PATH_IMAGE253
; set
Figure 982241DEST_PATH_IMAGE254
; an ELU activation function follows each neural network layer;
In each training iteration, the input samples
Figure 178867DEST_PATH_IMAGE255
are fed into the neural network to obtain
Figure 385727DEST_PATH_IMAGE256
and
Figure 857159DEST_PATH_IMAGE257
; the loss function is then computed, and backpropagation with the Adam optimizer is used to update the parameters
Figure 583807DEST_PATH_IMAGE258
; after a fixed number of updates, the output is the updated neural network parameters, i.e.,
Figure 699137DEST_PATH_IMAGE259
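The training procedure of S303 (compute a loss, then update the parameters with backpropagation and the Adam optimizer for a fixed number of steps) can be sketched on a stand-in least-squares problem; the patent's three-term loss and actual networks are not reproduced here, and the sample count and learning rate are illustrative.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: moment estimates with bias correction."""
    m = b1 * m + (1 - b1) * grad            # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 5))            # B = 30 samples (illustrative)
y = A @ rng.standard_normal(5)              # targets from a hidden model
theta = np.zeros(5)                         # parameters to train
m, v = np.zeros(5), np.zeros(5)
loss0 = np.mean((A @ theta - y) ** 2)
for t in range(1, 2001):                    # fixed number of updates, as in S303
    grad = 2 * A.T @ (A @ theta - y) / len(y)   # gradient of the squared loss
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
loss = np.mean((A @ theta - y) ** 2)
assert loss < 0.1 * loss0                   # training reduces the loss substantially
```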
S304, the updated neural network parameters
Figure 409604DEST_PATH_IMAGE260
are used in the algorithm of step S2; the updated neural network obtains more accurate transmission information
Figure 153569DEST_PATH_IMAGE261
, making the random access decision more accurate.
S4, the trained and updated neural network is used to detect the user transmission signals and thereby judge whether each user has accessed successfully; the detection follows steps S201 to S205.
In the embodiments of the present application, some simulation results are given to verify the feasibility of the proposed random access scheme. The experimental parameters are selected as: number of users N = 40, sequence length K = 30. Three different transmission rates are considered: the codebook of users in group 1 is
Figure DEST_PATH_IMAGE262
The codebook of users in group 2 is
Figure 452832DEST_PATH_IMAGE263
The codebook of users in group 3 is
Figure DEST_PATH_IMAGE264
. The channels satisfy a Rician distribution, i.e.,
Figure 210835DEST_PATH_IMAGE265
The channel parameters
Figure DEST_PATH_IMAGE266
of each user are randomly generated from a Rician distribution with the given K-factor. We compare the proposed algorithm with the traditional message-passing algorithm, and set the parameter
Figure 346281DEST_PATH_IMAGE267
to measure the estimation error of the channel distribution, i.e.,
Figure DEST_PATH_IMAGE268
. The parameters of the neural network are designed as:
Figure 612046DEST_PATH_IMAGE269
Figure DEST_PATH_IMAGE270
. The number of training samples is
Figure 287789DEST_PATH_IMAGE271
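The experimental channel setup, with channels drawn from a Rician distribution of a given K-factor, can be sketched as follows; the random line-of-sight phase and unit average-power normalization are simplifying assumptions, and the antenna count is illustrative.

```python
import numpy as np

def rician_channel(n_users, n_antennas, k_factor, rng):
    """Draw user channels from a Rician distribution with the given K-factor:
    a unit-modulus line-of-sight component plus Rayleigh-scattered fading.
    The random LOS phase is a simplifying assumption."""
    los = np.exp(2j * np.pi * rng.random((n_users, n_antennas)))
    nlos = (rng.standard_normal((n_users, n_antennas))
            + 1j * rng.standard_normal((n_users, n_antennas))) / np.sqrt(2)
    return (np.sqrt(k_factor / (k_factor + 1)) * los
            + np.sqrt(1 / (k_factor + 1)) * nlos)

rng = np.random.default_rng(0)
H = rician_channel(40, 16, k_factor=3.0, rng=rng)   # N = 40 users, 16 antennas
assert H.shape == (40, 16)
assert abs(np.mean(np.abs(H) ** 2) - 1.0) < 0.1     # unit average power
```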
In the experiment of fig. 5, we set the numbers of users in user groups 1, 2, and 3 to (4, 8, 28), and the sparsity to (0.2, 0.1, 0.2). As shown in fig. 5, the proposed algorithm is more robust than the conventional message-passing algorithm, and performs better when the channel distribution estimate contains errors. In fig. 6, we change the numbers of users in the user groups to (8, 20, 12) and the sparsity to (0.1, 0.2, 0.3). As shown in fig. 6, the proposed algorithm still outperforms the message-passing algorithm in robustness.
In fig. 7, we investigate the effect of the number of antennas on performance. We set the numbers of users in user groups 1, 2, and 3 to (12, 20, 8), and the sparsity to (0.1, 0.2, 0.1). As shown in fig. 7, as the number of antennas increases, the error rate of the proposed algorithm decreases, and it remains more robust than the conventional message-passing algorithm.
The foregoing is a preferred embodiment of the present invention. It is to be understood that the invention is not limited to the form disclosed herein and is not to be construed as excluding other embodiments; it is capable of use in various other combinations, modifications, and environments, and of changes within the scope of the inventive concept as expressed herein, in accordance with the above teachings or the skill or knowledge of the relevant art. Modifications and variations effected by those skilled in the art without departing from the spirit and scope of the invention shall fall within the protection of the appended claims.

Claims (4)

1. A large-scale random access method based on a deep learning network is characterized in that: the method comprises the following steps:
S1, constructing a system model based on large-scale random access;
S2, constructing, by means of a deep neural network, a model for detecting the user transmission signal
Figure DEST_PATH_IMAGE001
and for judging user access;
S3, carrying out neural network training and parameter updating;
and S4, detecting the user transmission signal with the trained and updated neural network, thereby judging whether the user has accessed successfully.
2. The large-scale random access method based on the deep learning network as claimed in claim 1, wherein: the step S1 includes the following sub-steps:
S101, for a communication system containing
Figure 923641DEST_PATH_IMAGE002
single-antenna users and one receiver, each user randomly accesses the receiver, i.e., transmits information to the receiver with a certain probability in each transmission time slot, where the receiver is equipped with
Figure DEST_PATH_IMAGE003
antennas; a random variable
Figure 903099DEST_PATH_IMAGE004
is used to describe the activity of user
Figure DEST_PATH_IMAGE005
in each time slot; in each time slot,
Figure 13662DEST_PATH_IMAGE004
satisfies the following conditions:
Figure 940029DEST_PATH_IMAGE006
S102, each user adopts a grant-free random access scheme; before transmission, each user is pre-assigned a dedicated pilot sequence
Figure DEST_PATH_IMAGE007
where
Figure 82298DEST_PATH_IMAGE008
is the pilot length, and the symbol
Figure 435919DEST_PATH_IMAGE009
denotes the set of complex sequences of length
Figure 234110DEST_PATH_IMAGE008
; the elements of each pilot are drawn from an independent and identically distributed Gaussian distribution, i.e.,
Figure 964169DEST_PATH_IMAGE010
where the symbol
Figure DEST_PATH_IMAGE011
denotes a complex Gaussian distribution with mean 0 and variance
Figure 960944DEST_PATH_IMAGE012
, and
Figure DEST_PATH_IMAGE013
denotes the identity matrix of dimension
Figure 282204DEST_PATH_IMAGE014
; the pilot sequences of all users are stored at the receiving end;
S103, in each transmission time slot, each active user synchronously transmits its pilot sequence and information
Figure DEST_PATH_IMAGE015
to the receiving end; the received signal is expressed as
Figure 364429DEST_PATH_IMAGE016
Let
Figure DEST_PATH_IMAGE017
to obtain the matrix expression of the received signal:
Figure 691987DEST_PATH_IMAGE018
where
Figure DEST_PATH_IMAGE019
is Gaussian noise whose elements are independent and identically distributed with zero mean and variance
Figure 808847DEST_PATH_IMAGE020
;
Figure 35429DEST_PATH_IMAGE021
denotes the channel parameters from user n to the receiving end,
Figure 808213DEST_PATH_IMAGE022
denotes a vector of length
Figure DEST_PATH_IMAGE023
that is unknown at the receiving end; let
Figure 676812DEST_PATH_IMAGE024
be the transmission signal of user n, where the transmission signal
Figure 117021DEST_PATH_IMAGE024
is generated from the following codebook:
Figure DEST_PATH_IMAGE025
where
Figure 780083DEST_PATH_IMAGE026
is the
Figure DEST_PATH_IMAGE027
-th modulation codeword,
Figure 574251DEST_PATH_IMAGE028
is the transmission rate of user n, and
Figure DEST_PATH_IMAGE029
indicates that the user is inactive, i.e.,
Figure 980962DEST_PATH_IMAGE030
3. The large-scale random access method based on the deep learning network as claimed in claim 2, wherein: the step S2 includes the following sub-steps:
S201, initialization: input the received signal
Figure DEST_PATH_IMAGE031
the sparsity parameter g of the users, and the rate of each user
Figure 541256DEST_PATH_IMAGE032
; initialize
Figure DEST_PATH_IMAGE033
S202, first, the received signal
Figure 640799DEST_PATH_IMAGE031
is input into the designed neural network algorithm for interference cancellation; the neural network algorithm has a multilayer structure, and the computation of the t-th layer is as follows:
Figure DEST_PATH_IMAGE035
where
Figure 653755DEST_PATH_IMAGE036
,
Figure DEST_PATH_IMAGE037
is the conjugate transpose of the matrix
Figure 864156DEST_PATH_IMAGE038
, t is an integer greater than zero, and the maximum number of layers is set to
Figure DEST_PATH_IMAGE039
, i.e.,
Figure 278957DEST_PATH_IMAGE040
Figure DEST_PATH_IMAGE041
Figure 569909DEST_PATH_IMAGE042
denotes the denoiser acting on the n-th column signal, and
Figure DEST_PATH_IMAGE043
denotes the first derivative of the denoiser function; the denoiser is implemented by a deep neural network, and
Figure 70161DEST_PATH_IMAGE044
denotes the neural network parameters of the denoiser
Figure 21936DEST_PATH_IMAGE045
;
noise removing device
Figure 291243DEST_PATH_IMAGE045
The design of (2) is as follows: firstly, a complex matrix is formed
Figure 935851DEST_PATH_IMAGE046
Conversion into a real number matrix
Figure DEST_PATH_IMAGE047
Wherein
Figure 923399DEST_PATH_IMAGE048
Representative dimension of
Figure DEST_PATH_IMAGE049
The conversion mode of the real number matrix set is as follows:
Figure 475603DEST_PATH_IMAGE050
where
Figure DEST_PATH_IMAGE051
, in which
Figure 599417DEST_PATH_IMAGE052
denotes the n-th slice matrix, of dimension
Figure DEST_PATH_IMAGE053
, of the matrix
Figure 211664DEST_PATH_IMAGE054
; the matrix is then input into the following neural network:
Figure DEST_PATH_IMAGE055
where
Figure 689437DEST_PATH_IMAGE056
denotes the composition of two neural networks;
Figure DEST_PATH_IMAGE057
is a convolutional neural network whose number of filters is
Figure 779753DEST_PATH_IMAGE058
, with kernel size (1,1) and stride (1,1);
a ReLU activation function is appended at the end of the convolutional networks
Figure DEST_PATH_IMAGE059
and
Figure 289232DEST_PATH_IMAGE060
; let
Figure DEST_PATH_IMAGE061
Figure 541221DEST_PATH_IMAGE062
be the soft shrinkage function:
Figure DEST_PATH_IMAGE063
where the matrix
Figure 768941DEST_PATH_IMAGE064
is the n-th slice of the matrix
Figure DEST_PATH_IMAGE065
, and the shrinkage parameter
Figure 397368DEST_PATH_IMAGE066
is included in the parameter set
Figure DEST_PATH_IMAGE067
; finally,
Figure 761353DEST_PATH_IMAGE068
is converted back into the complex matrix
Figure DEST_PATH_IMAGE069
; the output signal is
Figure 446894DEST_PATH_IMAGE070
; let
Figure DEST_PATH_IMAGE071
S203, a neural network is used to compute the posterior probability of
Figure 161909DEST_PATH_IMAGE072
:
First, each complex vector
Figure DEST_PATH_IMAGE073
is converted into a real vector
Figure 594027DEST_PATH_IMAGE074
, that is,
Figure DEST_PATH_IMAGE075
where
Figure 546940DEST_PATH_IMAGE076
denotes, within the vector
Figure DEST_PATH_IMAGE077
, the subvector from the
Figure 671891DEST_PATH_IMAGE078
-th element to the
Figure DEST_PATH_IMAGE079
-th element, and
Figure 343043DEST_PATH_IMAGE080
and
Figure DEST_PATH_IMAGE081
denote the real and imaginary parts, respectively; then the obtained vector
Figure 844432DEST_PATH_IMAGE082
is input into the neural network, i.e.,
Figure DEST_PATH_IMAGE083
where
Figure 920360DEST_PATH_IMAGE084
and
Figure DEST_PATH_IMAGE085
are fully connected neural network layers with
Figure 950633DEST_PATH_IMAGE086
and
Figure DEST_PATH_IMAGE087
neurons, respectively; a ReLU function and a Softmax function are appended at the end of the networks
Figure 374661DEST_PATH_IMAGE084
and
Figure 617423DEST_PATH_IMAGE085
, respectively, and
Figure 13770DEST_PATH_IMAGE088
are the parameters of the neural network;
finally, based on the obtained output
Figure DEST_PATH_IMAGE089
, the optimal posterior probability for detection is calculated, i.e.,
Figure 480523DEST_PATH_IMAGE090
where
Figure DEST_PATH_IMAGE091
is the one-hot codeword of the transmitted information
Figure 391847DEST_PATH_IMAGE092
;
if
Figure DEST_PATH_IMAGE093
, then
Figure 969459DEST_PATH_IMAGE094
; when
Figure DEST_PATH_IMAGE095
, then
Figure 279699DEST_PATH_IMAGE096
, where
Figure DEST_PATH_IMAGE097
denotes a zero vector of length n;
s204, after the posterior probability is obtained, the user emission information is detected by a method of maximizing the posterior probability, namely,
Figure 917354DEST_PATH_IMAGE098
to obtain
Figure DEST_PATH_IMAGE099
then, through the one-hot encoding correspondence of step S203, the transmission information is obtained
Figure 50395DEST_PATH_IMAGE100
S205, the detected information
Figure 900539DEST_PATH_IMAGE100
is used to judge whether the user has accessed successfully: when
Figure DEST_PATH_IMAGE101
holds, user n has successfully accessed the receiving end.
4. The large-scale random access method based on the deep learning network as claimed in claim 1, wherein: the step S3 includes the following sub-steps:
S301, initialization: input
Figure 68215DEST_PATH_IMAGE102
the parameters
Figure DEST_PATH_IMAGE103
and
Figure 876771DEST_PATH_IMAGE104
the training samples
Figure DEST_PATH_IMAGE105
where
Figure 231529DEST_PATH_IMAGE106
is the received signal of the j-th sample,
Figure DEST_PATH_IMAGE107
represents the transmitted codeword of the n-th user in the j-th sample, B is the total number of samples, and the positive real number
Figure 150944DEST_PATH_IMAGE108
S302, the sample
Figure 644898DEST_PATH_IMAGE106
is input into the neural network of S202, where
Figure DEST_PATH_IMAGE109
denotes the n-th row real-valued signal output by the neural network; then
Figure 358776DEST_PATH_IMAGE109
is input into the neural network of S203, which outputs
Figure 935251DEST_PATH_IMAGE110
S303, use
Figure DEST_PATH_IMAGE111
Figure 658356DEST_PATH_IMAGE112
and the one-hot codewords
Figure DEST_PATH_IMAGE113
to update the neural network parameters
Figure 269466DEST_PATH_IMAGE114
and
Figure DEST_PATH_IMAGE115
;
first, a loss function is designed for training the neural network; the loss function comprises three terms:
Figure 685404DEST_PATH_IMAGE116
Figure DEST_PATH_IMAGE117
Figure 14754DEST_PATH_IMAGE118
where
Figure DEST_PATH_IMAGE119
denotes the i-th element of the vector
Figure DEST_PATH_IMAGE120
, and
Figure DEST_PATH_IMAGE121
is a transmitted codeword obtained by randomly scrambling the training samples; the function
Figure DEST_PATH_IMAGE122
is a neural network with parameters
Figure DEST_PATH_IMAGE123
, designed as follows:
Figure DEST_PATH_IMAGE124
where
Figure DEST_PATH_IMAGE125
is a fully connected neural network whose number of nodes is
Figure DEST_PATH_IMAGE126
; set
Figure DEST_PATH_IMAGE127
; an ELU activation function follows each neural network layer;
in each training iteration, the input samples
Figure DEST_PATH_IMAGE128
are fed into the neural network to obtain
Figure DEST_PATH_IMAGE129
and
Figure DEST_PATH_IMAGE130
; the loss function is then computed, and backpropagation with the Adam optimizer is used to update the parameters
Figure DEST_PATH_IMAGE131
; after a fixed number of updates, the output is the updated neural network parameters, i.e.,
Figure DEST_PATH_IMAGE132
S304, the updated neural network parameters
Figure DEST_PATH_IMAGE133
are used in the algorithm of step S2; the updated neural network obtains more accurate transmission information
Figure DEST_PATH_IMAGE134
, making the random access decision more accurate.
CN202111323583.8A 2021-11-10 2021-11-10 Large-scale random access method based on deep learning network Active CN113766669B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111323583.8A CN113766669B (en) 2021-11-10 2021-11-10 Large-scale random access method based on deep learning network


Publications (2)

Publication Number Publication Date
CN113766669A true CN113766669A (en) 2021-12-07
CN113766669B CN113766669B (en) 2021-12-31

Family

ID=78784916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111323583.8A Active CN113766669B (en) 2021-11-10 2021-11-10 Large-scale random access method based on deep learning network

Country Status (1)

Country Link
CN (1) CN113766669B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107743103A (en) * 2017-10-26 2018-02-27 北京交通大学 The multinode access detection of MMTC systems based on deep learning and channel estimation methods
CN107820321A (en) * 2017-10-31 2018-03-20 北京邮电大学 Large-scale consumer intelligence Access Algorithm in a kind of arrowband Internet of Things based on cellular network
CN108882301A (en) * 2018-07-25 2018-11-23 西安交通大学 The nonopiate accidental access method kept out of the way in extensive M2M network based on optimal power
CN109862567A (en) * 2019-03-28 2019-06-07 电子科技大学 A kind of method of cell mobile communication systems access unlicensed spectrum
US20200026247A1 (en) * 2018-07-19 2020-01-23 International Business Machines Corporation Continuous control of attention for a deep learning network
CN111182649A (en) * 2020-01-03 2020-05-19 浙江工业大学 Random access method based on large-scale MIMO
CN111224905A (en) * 2019-12-25 2020-06-02 西安交通大学 Multi-user detection method based on convolution residual error network in large-scale Internet of things
CN111343730A (en) * 2020-04-15 2020-06-26 上海交通大学 Large-scale MIMO passive random access method under space correlation channel
CN111641570A (en) * 2020-04-17 2020-09-08 浙江大学 Joint equipment detection and channel estimation method based on deep learning
CN111683023A (en) * 2020-04-17 2020-09-18 浙江大学 Model-driven large-scale equipment detection method based on deep learning
CN112188539A (en) * 2020-10-10 2021-01-05 南京理工大学 Interference cancellation scheduling code design method based on deep reinforcement learning
CN112492686A (en) * 2020-11-13 2021-03-12 辽宁工程技术大学 Cellular network power distribution method based on deep double-Q network
CN112910806A (en) * 2021-01-19 2021-06-04 北京理工大学 Joint channel estimation and user activation detection method based on deep neural network
CN113303022A (en) * 2019-01-10 2021-08-24 苹果公司 2-step RACH fallback procedure
CN113344212A (en) * 2021-05-14 2021-09-03 香港中文大学(深圳) Model training method and device, computer equipment and readable storage medium
CN113438746A (en) * 2021-08-27 2021-09-24 香港中文大学(深圳) Large-scale random access method based on energy modulation
US20210319821A1 (en) * 2020-04-09 2021-10-14 Micron Technology, Inc. Integrated Circuit Device with Deep Learning Accelerator and Random Access Memory
CN113573284A (en) * 2021-06-21 2021-10-29 吉林大学 Random access backoff method for large-scale machine type communication based on machine learning
US11164348B1 (en) * 2020-06-29 2021-11-02 Tsinghua University Systems and methods for general-purpose temporal graph computing


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
INTEL CORPORATION: "R3-206403 "Use cases, AI/ML algorithms, and general concepts"", 《3GPP TSG_RAN\WG3_IU》 *
ZTE CORPORATION等: "R3-206092 "Initial Analyse on the Interface Impact with AI-based RAN Architecture"", 《3GPP TSG_RAN\WG3_IU》 *
史清江等: "面向5G/B5G通信的智能无线资源管理技术", 《中国科学基金》 *
彭木根等: "智简6G无线接入网:架构、技术和展望", 《北京邮电大学学报》 *
章嘉懿: "去蜂窝大规模MIMO***研究进展与发展趋势", 《重庆邮电大学学报(自然科学版)》 *

Also Published As

Publication number Publication date
CN113766669B (en) 2021-12-31

Similar Documents

Publication Publication Date Title
CN111224906B (en) Approximate message transfer large-scale MIMO signal detection algorithm based on deep neural network
CN109246038B (en) Dual-drive GFDM receiver and method for data model
CN111698182A (en) Time-frequency blocking sparse channel estimation method based on compressed sensing
CN109525369B (en) Channel coding type blind identification method based on recurrent neural network
CN102104574A (en) Orthogonal frequency division multiplexing (OFDM)-transform domain communication system (TDCS) signal transmission and receiving methods, devices and system
CN111740934A (en) Underwater sound FBMC communication signal detection method based on deep learning
Ye et al. Circular convolutional auto-encoder for channel coding
Zhang et al. Deep learning based on orthogonal approximate message passing for CP-free OFDM
CN110430013B (en) RCM method based on deep learning
CN113438746B (en) Large-scale random access method based on energy modulation
CN107864029A (en) A kind of method for reducing Multiuser Detection complexity
CN110572340A (en) turbo time domain equalization method for short wave communication
CN113112028A (en) Machine learning time synchronization method based on label design
CN112215335A (en) System detection method based on deep learning
CN114500322A (en) Method for equipment activity detection and channel estimation under large-scale authorization-free access scene
CN113067666B (en) User activity and multi-user joint detection method of NOMA system
CN113766669B (en) Large-scale random access method based on deep learning network
CN110739977B (en) BCH code decoding method based on deep learning
CN102882654A (en) Encoding constraint and probability calculation based encoding and decoding synchronization method
CN115412416A (en) Low-complexity OTFS signal detection method for high-speed mobile scene
Wang et al. A Signal processing method of OFDM communication receiver based on CNN
Sakoda et al. Residue Effect of Parallel Interference Canceller in Belief Propagation Decoding in Massive MIMO Systems
Duan et al. A model‐driven robust deep learning wireless transceiver
Setzler et al. Deep Learning for Spectral Filling in Radio Frequency Applications
Li et al. Notice of Violation of IEEE Publication Principles: Channel Decoding Based on Complex-valued Convolutional Neural Networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant