CN110380776B - Internet of things system data collection method based on unmanned aerial vehicle - Google Patents


Info

Publication number
CN110380776B
CN110380776B (application CN201910777808.3A)
Authority
CN
China
Prior art keywords
action
network
unmanned aerial vehicle
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910777808.3A
Other languages
Chinese (zh)
Other versions
CN110380776A (en)
Inventor
梁应敞 (Liang Ying-Chang)
曹阳 (Cao Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201910777808.3A priority Critical patent/CN110380776B/en
Publication of CN110380776A publication Critical patent/CN110380776A/en
Application granted granted Critical
Publication of CN110380776B publication Critical patent/CN110380776B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 7/00 Radio transmission systems, i.e. using radiation field
    • H04B 7/14 Relay systems
    • H04B 7/15 Active relay systems
    • H04B 7/185 Space-based or airborne stations; Stations for satellite systems
    • H04B 7/18502 Airborne stations
    • H04B 7/18506 Communications with or from aircraft, i.e. aeronautical mobile service
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 52/00 Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W 52/04 TPC
    • H04W 52/18 TPC being performed according to specific parameters
    • H04W 52/26 TPC being performed according to specific parameters using transmission rate or quality of service QoS [Quality of Service]
    • H04W 52/267 TPC taking into account the information rate
    • H04W 52/30 TPC using constraints in the total amount of available transmission power
    • H04W 52/34 TPC management, i.e. sharing limited amount of power among users or channels or data types, e.g. cell loading
    • H04W 52/346 TPC management distributing total power among users or channels

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Astronomy & Astrophysics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Quality & Reliability (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention belongs to the technical field of wireless communication and relates to a data collection method for Internet of Things (IoT) systems based on an unmanned aerial vehicle (UAV). In the invention, the UAV both collects data and controls the uplink transmission process of the IoT nodes, optimizing system energy efficiency to extend the endurance of the IoT system. In the proposed scheme, the UAV device does not need real-time network information when making decisions; instead, it extracts useful information from historical observations and predicts the current network environment, so that the long-term energy efficiency of all IoT nodes in the system is maximized.

Description

Internet of things system data collection method based on unmanned aerial vehicle
Technical Field
The invention belongs to the technical field of wireless communication, and relates to an Internet of things system data collection method based on an unmanned aerial vehicle.
Background
With the development of the Internet of Things (IoT), both the number of deployed IoT devices and the volume of data they transmit are growing exponentially, which places higher demands on data collection. In addition, IoT devices are usually energy-limited and cannot perform long-distance data transmission. There is therefore a need for an efficient, flexible and low-cost data collection method for IoT systems. Unmanned Aerial Vehicles (UAVs) are considered a viable solution. Unlike traditional data collection devices fixed on the ground, UAVs can be deployed dynamically in the air. This means a UAV device can move quickly to data hotspots while collecting data, without being constrained by terrain.
In addition, during data collection the UAV can improve the channel gain between itself and a ground node by adjusting their separation, so that the ground node achieves a higher transmission rate per unit of transmission power, improving the overall performance of the IoT system. UAVs can therefore serve as an efficient data collection solution in IoT systems.
Disclosure of Invention
Aiming at the problem that the number of IoT devices is growing rapidly and data collection is difficult, the invention uses a UAV to collect data and to control the uplink transmission process of the IoT nodes, optimizing system energy efficiency to extend the endurance of the IoT system. The invention focuses on a UAV-enabled IoT system: in the model shown in Fig. 1, a UAV acts as a mobile base station collecting data from multiple ground nodes, i.e., multiple ground nodes simultaneously perform uplink transmission to the UAV base station. The invention designs a transmission protocol for the data collection process: in each time slot, IoT nodes are allocated to corresponding channels by the UAV device for transmission, and multiple nodes allocated to the same channel transmit in Time Division Multiple Access (TDMA) mode.
In the present invention, node energy efficiency is defined as the ratio of the transmission rate achieved by a node to the transmission power it uses. The invention aims to improve the long-term energy efficiency of the IoT system: the UAV device allocates the transmission channel and transmission power of each IoT node so that nodes can transmit more data per unit of power, improving both node energy efficiency and node endurance.
By means of Deep Reinforcement Learning (DRL), the UAV device does not need to collect global network information in real time; instead, it learns the pattern of network-environment changes from historical observations and predicts the environment in order to make the corresponding channel and power allocation decisions. Compared with traditional methods, the proposed scheme effectively reduces information overhead. In the invention, the frame structure for uplink transmission consists of decision, transmission and training processes. Two neural networks are used for decision making: an action network and an evaluation network. The action network computes probabilities over the decision space to obtain the strategy to execute, and the evaluation network judges how good the selected strategy is. The evaluation network helps the action network converge, so the designed control scheme overcomes the difficulty of convergence in a high-dimensional decision space.
Furthermore, the invention is an online learning technique: the UAV device learns from short-term experience gained through its interaction with the IoT nodes. The invention avoids the experience replay memory required by traditional deep reinforcement learning, so the UAV device does not need to store a large amount of historical network information; it only records a small amount of interaction data in each training interval, effectively reducing its storage overhead. In addition, after the method converges, the required control decision is obtained directly by feeding the current network information into the neural network.
The advantage of the invention is that the UAV device does not need real-time network information when making decisions; it extracts useful information from historical observations and predicts the current network environment, so that the long-term energy efficiency of all IoT nodes in the system is maximized.
Drawings
Fig. 1 shows a system model of the internet of things of the unmanned aerial vehicle in the invention.
Fig. 2 shows an uplink transmission frame structure in the present invention.
Fig. 3 shows the deep reinforcement learning decision and information interaction flow in the present invention.
Fig. 4 shows the performance comparison of the data collection control method based on deep reinforcement learning proposed by the present invention with other data collection control schemes.
Detailed Description
The following detailed description of specific embodiments of the invention is provided in connection with the accompanying drawings.
Fig. 1 shows the UAV-enabled IoT system model considered in the invention, in which a mobile UAV base station collects data from ground IoT nodes and appropriately allocates the nodes' transmission channels and transmission powers. As shown, one UAV base station is deployed in the air, and M IoT nodes are randomly distributed throughout the area. Each IoT node has a single antenna and always has data to transmit to the UAV base station. The UAV base station operates on a preset flight trajectory, and in each time slot it must allocate K orthogonal channels of equal bandwidth to the ground IoT nodes for uplink data transmission. In this example, the number of channels is smaller than the number of IoT nodes, i.e., K < M. Each node can be assigned to only one channel for transmission in each time slot.
In the invention, every air-to-ground channel from the UAV to a node consists of two components: a line-of-sight (LOS) path and a non-line-of-sight (NLOS) path. The proportionality coefficient of each component in the channel gain is determined by the elevation angle σ_i between the UAV and the ground node and by the system environment, and the two proportionality coefficients sum to 1. The LOS proportion can be expressed as

    [equation rendered as an image in the patent: P_LOS as a function of the elevation angle σ_i]

wherein a, b, c, d, e represent the corresponding environmental parameters.
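By way of illustration, the sketch below evaluates such an elevation-angle-dependent LOS proportion under an assumed generalized-logistic form; the functional shape and the default values of a, b, c, d, e are illustrative assumptions, not the patent's disclosed formula.

```python
import math

def los_probability(sigma_deg, a=0.1, b=1.0, c=0.15, d=30.0, e=1.0):
    """Hypothetical generalized-logistic LOS-proportion model.

    sigma_deg: elevation angle sigma_i between UAV and ground node (degrees).
    a..e: environmental parameters; the names mirror the patent, but the
    functional form and default values here are assumptions.
    Returns P_LOS in [0, 1]; the NLOS coefficient is 1 - P_LOS.
    """
    return a + (e - a) / (1.0 + b * math.exp(-c * (sigma_deg - d)))

p_los = los_probability(45.0)
p_nlos = 1.0 - p_los  # the two proportionality coefficients sum to 1
```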
Both the LOS and NLOS paths contain large-scale fading and small-scale fading. The large-scale fading is determined by the distance between the UAV and the ground node, while the small-scale fading remains constant within one frame but varies from frame to frame. The small-scale fading on the LOS path follows a Rician distribution, while on the NLOS path it follows a Rayleigh distribution. The specific channel gain can be expressed as

    [equation rendered as an image in the patent: the channel gain combining path loss with the LOS and NLOS components]

where f denotes the carrier frequency, v denotes the speed of light, μ_LOS and μ_NLOS denote the antenna gains of the respective components, and the small-scale fading terms follow Rician and Rayleigh distributions respectively.
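A minimal sketch of how one composite channel gain could be sampled under these definitions follows; the free-space path-loss form, the Rician K-factor and the function name sample_channel_gain are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_channel_gain(dist, p_los, f=2.4e9, v=3.0e8,
                        mu_los=1.0, mu_nlos=1.0, K=10.0):
    """Sample one composite air-to-ground channel power gain (illustrative).

    dist: UAV-to-node distance [m]; f: carrier frequency [Hz]; v: speed of
    light [m/s]; mu_los/mu_nlos: antenna gains of the two components;
    K: Rician K-factor (an assumed value, not given in the patent).
    """
    # Large-scale fading: free-space path loss, determined by distance.
    path_loss = (v / (4.0 * np.pi * f * dist)) ** 2
    # Small-scale fading, redrawn each frame: Rayleigh on the NLOS path...
    h_nlos = (rng.normal() + 1j * rng.normal()) / np.sqrt(2.0)
    # ...and Rician on the LOS path (deterministic part plus scatter).
    scatter = (rng.normal() + 1j * rng.normal()) / np.sqrt(2.0)
    h_los = np.sqrt(K / (K + 1.0)) + np.sqrt(1.0 / (K + 1.0)) * scatter
    # Combine the two components by their proportionality coefficients.
    return path_loss * (p_los * mu_los * np.abs(h_los) ** 2 +
                        (1.0 - p_los) * mu_nlos * np.abs(h_nlos) ** 2)

g = sample_channel_gain(dist=120.0, p_los=0.9)
```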
In the invention, the transmission power of node i on channel k in time slot t is P_{i,k,t}, and the corresponding signal-to-noise ratio can be expressed as

    Γ_{i,k,t} = P_{i,k,t} g_{i,t} / σ²

(reconstructed from context; g_{i,t} denotes the channel gain and σ² the noise power).
In the invention, the multiple IoT nodes that access the same channel transmit data to the UAV device in Time Division Multiple Access (TDMA) mode: a time slot is divided according to the number of accessing users N_{k,t}, and each node is allocated one sub-slot for transmission. Meanwhile, in each time slot the UAV device allocates the transmission powers of all nodes, so the transmission rate c_{i,k,t} achieved by node i on channel k in time slot t can be expressed as

    c_{i,k,t} = (W / N_{k,t}) log2(1 + Γ_{i,k,t})

(reconstructed; W denotes the channel bandwidth).
The energy efficiency ρ_{i,k,t} of IoT node i on channel k in time slot t is defined as the ratio of the transmission rate c_{i,k,t} it achieves to the transmission power P_{i,k,t} it uses, i.e.

    ρ_{i,k,t} = c_{i,k,t} / P_{i,k,t}
The aim of the invention is to maximize the minimum energy efficiency η_t of the IoT nodes, which can be expressed as

    max_{I,P} η_t
    s.t. Σ_{k∈K} I_{i,k,t} = 1, ∀ i ∈ M
         I_{i,k,t} ∈ {0, 1}, ∀ i ∈ M, k ∈ K
         0 ≤ P_{i,k,t} ≤ P_max, ∀ i ∈ M, k ∈ K

(constraints reconstructed from the stated assignment and power rules; P_max denotes the maximum transmit power), wherein

    η_t = min_{i∈M} Σ_{k∈K} I_{i,k,t} ρ_{i,k,t}

is the minimum energy efficiency achieved over all nodes at time t, and I_{i,k,t} indicates whether the ith node is allocated to channel k in time slot t: the value 1 indicates that the node is allocated to channel k, and 0 indicates that it is not.
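Putting the definitions together, a minimal sketch of evaluating η_t for one time slot might look as follows; the bandwidth W, the noise power, and the helper name min_energy_efficiency are illustrative assumptions.

```python
import numpy as np

def min_energy_efficiency(I, P, g, W=1.0e6, noise=1.0e-13):
    """Evaluate eta_t for one time slot from the definitions above.

    I: (M, K) 0/1 assignment matrix I_{i,k,t}; each row sums to 1.
    P: (M, K) transmission powers P_{i,k,t} [W].
    g: (M,) channel power gains of the nodes.
    W (per-channel bandwidth) and noise (noise power) are assumed values.
    """
    N = I.sum(axis=0)                          # users per channel, N_{k,t}
    eff = []
    for i in range(I.shape[0]):
        k = int(np.argmax(I[i]))               # the one channel node i uses
        snr = P[i, k] * g[i] / noise           # Gamma_{i,k,t}
        c = (W / N[k]) * np.log2(1.0 + snr)    # TDMA share of the slot
        eff.append(c / P[i, k])                # rho_{i,k,t} = c / P
    return min(eff)                            # eta_t

# Toy usage: 3 nodes, 2 channels.
I = np.array([[1, 0], [0, 1], [0, 1]])
P = np.full((3, 2), 0.1)
g = np.array([1e-9, 2e-9, 5e-10])
eta_t = min_energy_efficiency(I, P, g)
```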
Fig. 2 shows the frame structure of uplink transmission in the invention. The invention adopts an online learning mode: the established neural networks are trained using the interaction data of n moments, i.e., training is performed once every n moments. If the recording start moment is denoted t_start, training occurs at moment t = t_start + n. The frame structure at a training moment is defined as the training frame structure, and the frame structure at a non-training moment as the common frame structure. The common frame structure contains a decision phase and a transmission phase: the UAV base station first obtains the current control decision from the established neural networks, and the IoT nodes then transmit data to the UAV base station according to the corresponding decision information. The training frame structure contains a decision phase, a transmission phase and a training phase; it differs from the common frame structure in that the training phase uses the recorded trajectory <s(t_start), a(t_start), r(t_start), …, s(t)> to train the action network and the evaluation network. The recorded interaction information is cleared after training completes, and recording begins again at the new moment. Once the neural networks converge, no further training is needed, so only the common frame structure remains.
Fig. 3 shows the deep reinforcement learning decision and information interaction flow in the invention. The system consists of two parts: the UAV base station and the ground IoT nodes. The UAV base station acts as the decision maker, and all IoT nodes together can be regarded as the environment. The UAV needs to establish two neural networks, called the action network and the evaluation network. At the start of each time slot, the UAV obtains a state s(t) from the environment by observation, inputs the state into the action network to obtain the probability of each decision (action), and selects an action a(t) from the action space according to the obtained probabilities. The UAV's decision consists of two parts: the transmission channels and the transmission powers of the IoT nodes. After the selected strategy is executed, the UAV base station obtains an immediate reward r(t) characterizing the current benefit of the selected decision, together with a new state s(t+1). After each interaction, the UAV device records the interaction trace, and trains the neural networks once every n moments.
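The per-slot cadence described above (decide, transmit, record, train every n moments, then clear the cache) can be sketched as follows, with toy stand-ins env_step and sample_action in place of the real uplink environment and action network; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, num_slots, state_dim, num_actions = 8, 64, 4, 5   # illustrative sizes

def env_step(state, action):
    """Stand-in for the IoT uplink: returns the new state and r(t) = eta_t."""
    return rng.normal(size=state_dim), float(rng.random())

def sample_action(state):
    """Stand-in for the action network: sample from action probabilities."""
    probs = np.full(num_actions, 1.0 / num_actions)
    return int(rng.choice(num_actions, p=probs))

trajectory, state = [], rng.normal(size=state_dim)
for _ in range(num_slots):
    action = sample_action(state)                 # decision phase
    next_state, reward = env_step(state, action)  # transmission phase
    trajectory.append((state, action, reward, next_state))
    if len(trajectory) == n:                      # training frame every n slots
        # Train the action and evaluation networks on the n records,
        # then clear them: no experience-replay memory is kept.
        trajectory.clear()
    state = next_state
```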
The method adopts deep reinforcement learning to make the allocation decision of transmission channels and transmission power, and specifically comprises the following steps:
At the beginning of each frame, the UAV base station acquires the corresponding state by observing the environment. The state s(t) of the UAV base station mainly comprises 4 parts: the channel number k_{i,t-1} accessed in the last time slot, the channel gain g_{i,t-1} of each node in the last time slot, the transmission rate c_{i,t-1} achieved by each node in the last time slot, and the number of users N_{k,t-1} on each channel in the last time slot, i.e.

    s(t) = { k_{i,t-1}, g_{i,t-1}, c_{i,t-1}, N_{k,t-1} : i ∈ M, k ∈ K }

(reconstructed from the listed components). The action a(t) made by the UAV base station is

    a(t) = { I_{i,k,t}, P_{i,k,t} : i ∈ M, k ∈ K }

(reconstructed: the channel assignment and transmit power of every node).
After performing the selected action, the UAV base station obtains an immediate reward r(t) and the new state s(t+1) corresponding to the next moment. In this patent, the reward function is set to the minimum energy efficiency currently achieved by all IoT nodes, i.e.

    r(t) = η_t

After obtaining the reward and the new state, the UAV base station combines the current state s(t), the selected action a(t), the reward r(t) representing the benefit obtained after executing the action, and the new state s(t+1) into the interaction tuple <s(t), a(t), r(t), s(t+1)>, and records it in the interaction trajectory cache.
The UAV base station establishes two neural networks in the initialization stage. One is called the action network π(a|s; θ), where θ denotes the action-network parameters; it outputs a probability value for each action given the current input state and selects an action (i.e., a corresponding transmission-channel and transmission-power allocation decision) to execute according to these probabilities. The other is called the evaluation network V(s; θ_v), where θ_v denotes the evaluation-network parameters; it estimates the value of the current input state and computes the temporal-difference error r(t) + γV(s(t+1); θ_v) − V(s(t); θ_v), where γ ∈ (0, 1] is the discount coefficient representing the influence of the future on the current moment. Since r(t) is obtained by the UAV base station being in state s(t) and executing action a(t), the evaluation network can evaluate the quality of the selected action and assist the action network to converge. Both neural networks are fully connected networks, and their parameters are initialized randomly.
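As a concrete illustration, the two fully connected networks could be realized as follows; PyTorch, the layer sizes and the class names are assumed choices rather than the patent's implementation.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 32, 64   # illustrative sizes for the flattened
                                 # state s(t) and the discrete action space

class ActionNetwork(nn.Module):
    """pi(a|s; theta): outputs a probability for each candidate action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Softmax(dim=-1))

    def forward(self, s):
        return self.net(s)

class EvaluationNetwork(nn.Module):
    """V(s; theta_v): estimates the value of the current input state."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, s):
        return self.net(s)

actor, critic = ActionNetwork(), EvaluationNetwork()  # random initialization
```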
During training, the UAV device uses the interaction data recorded in the interaction trajectory cache over n consecutive moments as the training data for the networks, forming a complete interaction trajectory <s(t_start), a(t_start), r(t_start), …, s(t)>. First, the temporal-difference errors corresponding to the n moments are computed from the interaction trajectory and the evaluation network:

    δ(τ) = r(τ) + γV(s(τ+1); θ_v) − V(s(τ); θ_v), τ = t_start, …, t_start + n − 1

(reconstructed from the definition above). Then, using the obtained temporal-difference errors and the interaction trajectory, the action network and the evaluation network are trained with the stochastic gradient descent algorithm, and their parameters θ and θ_v are updated.
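Continuing the sketch above, one plausible n-step update computes the temporal-difference error at each recorded moment and applies stochastic gradient descent to both networks; the loss forms are the standard actor-critic ones, assumed here rather than quoted from the patent.

```python
import torch

gamma = 0.9                      # discount coefficient gamma in (0, 1]
optimizer = torch.optim.SGD(
    list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

def train_step(trajectory):
    """One update from n recorded tuples (s, a, r, s_next)."""
    actor_loss = critic_loss = 0.0
    for s, a, r, s_next in trajectory:
        v = critic(s)
        v_next = critic(s_next).detach()
        delta = (r + gamma * v_next - v).squeeze()  # temporal-difference error
        critic_loss = critic_loss + delta ** 2      # shrink the TD error
        # Raise the log-probability of actions with positive TD error.
        actor_loss = actor_loss - torch.log(actor(s)[a]) * delta.detach()
    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()           # stochastic gradient step
    optimizer.step()

# Example: train_step([(torch.randn(STATE_DIM), 3, 0.7,
#                       torch.randn(STATE_DIM))] * 8)
```

Here the evaluation network is driven to shrink the squared TD error, while the action network increases the probability of actions whose TD error is positive; this is what lets the evaluation network assist the action network toward convergence.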
Because the evaluation network is only used to assist the training of the action network, it is closed once the action network reaches convergence, and the transmission-channel and transmission-power allocation decisions for user data collection control are obtained from the trained action network alone.
Fig. 4 compares the performance of the proposed deep-reinforcement-learning control scheme with other control schemes. For comparison, the figure shows the performance of three baseline methods: an optimal scheme, a deep-Q-network-based scheme and a random scheme. The optimal scheme is obtained by a search algorithm under the condition that global network information is known, and can be regarded as an upper performance bound. From Fig. 4 it can be seen that after the proposed algorithm is trained for a period of time and reaches convergence, the minimum energy-efficiency performance it achieves gradually approaches the optimal performance and is far superior to the other two methods, demonstrating the superiority of the method in improving node energy efficiency in IoT systems.

Claims (1)

1. An Internet of Things system data collection method based on an unmanned aerial vehicle (UAV), which uses a mobile UAV base station to collect data from ground IoT nodes and allocates the transmission channels and transmission powers of the nodes, characterized in that node energy efficiency is defined as the ratio of the transmission rate achieved by a node to the transmission power used for transmission; to maximize the minimum energy efficiency η_t of the IoT nodes, the following target model is established:
    max_{I,P} η_t
    s.t. Σ_{k∈K} I_{i,k,t} = 1, ∀ i ∈ M
         I_{i,k,t} ∈ {0, 1}, ∀ i ∈ M, k ∈ K
         0 ≤ P_{i,k,t} ≤ P_max, ∀ i ∈ M, k ∈ K

(constraints reconstructed from the stated assignment and power rules), wherein

    η_t = min_{i∈M} Σ_{k∈K} I_{i,k,t} ρ_{i,k,t}

is the minimum energy efficiency achieved over all nodes at time t; I_{i,k,t} indicates whether the ith node is allocated to channel k in time slot t, the value 1 indicating that the ith node is allocated to channel k and 0 indicating that it is not; M and K respectively denote the node set and the channel set; c_{i,k,t} is the transmission rate of node i on channel k in time slot t; P_{i,k,t} is the transmission power of node i on channel k in time slot t; and the energy efficiency of node i on channel k in time slot t is

    ρ_{i,k,t} = c_{i,k,t} / P_{i,k,t}
The method adopts deep reinforcement learning to make the allocation decision of transmission channels and transmission power, and specifically comprises the following steps:
At the beginning of each frame, the UAV base station obtains the corresponding state s(t) by observing the environment, where s(t) mainly comprises 4 parts: the channel number k_{i,t-1} accessed in the last time slot, the channel gain g_{i,t-1} of each node in the last time slot, the transmission rate c_{i,t-1} achieved by each node in the last time slot, and the number of users N_{k,t-1} on each channel in the last time slot, i.e.

    s(t) = { k_{i,t-1}, g_{i,t-1}, c_{i,t-1}, N_{k,t-1} : i ∈ M, k ∈ K }

(reconstructed from the listed components); the action a(t) made by the UAV base station is

    a(t) = { I_{i,k,t}, P_{i,k,t} : i ∈ M, k ∈ K }

(reconstructed);
After performing the selected action, the UAV base station obtains an immediate reward r(t) and the new state s(t+1) corresponding to the next moment, with the reward function set to the minimum energy efficiency achieved by all current IoT nodes, i.e.

    r(t) = η_t
After obtaining the reward and the new state, the UAV base station combines the current state s(t), the selected action a(t), the reward r(t) representing the benefit obtained after executing the action, and the new state s(t+1) into the interaction tuple <s(t), a(t), r(t), s(t+1)>, and records it in the interaction trajectory cache;
the unmanned aerial vehicle base station establishes two neural networks in an initialization stage, wherein one neural network is defined as an action network pi (a | s; theta), theta is an action neural network parameter and is responsible for outputting a probability value of a corresponding action a according to a currently input state s and selecting an action to execute according to the probability; the other is defined as the evaluation network V (s; theta)v),θvIs used for evaluating network parameters, and is responsible for estimating current input state and calculating time difference error r (t) + gamma V (s (t + 1); thetav)-V(s(t);θv) Wherein γ ∈ (0, 1)]Is a discount coefficient, representsThe future influence on the current moment, r (t), is obtained by the fact that the unmanned aerial vehicle base station is in the state s (t) and executes the action a (t), and the judging network is used for judging the quality of the selected action and assisting the action network to converge; the two neural networks are all fully connected networks, and the parameters of the two neural networks are initialized randomly;
training the established neural networks with interaction data of n moments in an online learning manner, i.e., training once every n moments; the recording start moment is defined as t_start, and training occurs at moment t = t_start + n; the frame structure at a training moment is defined as the training frame structure, and the frame structure at a non-training moment as the common frame structure; the common frame structure comprises a decision phase and a transmission phase, i.e., the UAV base station first obtains the current control decision from the established neural networks, and the IoT nodes then transmit data to the UAV base station according to the corresponding decision information; the training frame structure comprises a decision phase, a transmission phase and a training phase, distinguished from the common frame structure in that the training phase uses the recorded trajectory <s(t_start), a(t_start), r(t_start), …, s(t)> to train the action network and the evaluation network and update the parameters θ and θ_v; after training completes, the recorded interaction information is cleared, and recording begins again at the new moment;
during training, the UAV device uses the interaction data recorded in the interaction trajectory cache over n consecutive moments as the training data for the networks, forming a complete interaction trajectory <s(t_start), a(t_start), r(t_start), …, s(t)>; first, the temporal-difference errors corresponding to the n moments are computed from the interaction trajectory and the evaluation network:

    δ(τ) = r(τ) + γV(s(τ+1); θ_v) − V(s(τ); θ_v), τ = t_start, …, t_start + n − 1

(reconstructed from the definition above); then, using the obtained temporal-difference errors and the interaction trajectory, the action network and the evaluation network are trained with the stochastic gradient descent algorithm, updating the parameters θ and θ_v;
the judging network is used for assisting the action network to train, when the action network reaches convergence, the judging network is closed, and the distribution decision of the transmission channel and the transmission power controlled by the user data collection is only obtained by the trained action network.
CN201910777808.3A 2019-08-22 2019-08-22 Internet of things system data collection method based on unmanned aerial vehicle Active CN110380776B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910777808.3A CN110380776B (en) 2019-08-22 2019-08-22 Internet of things system data collection method based on unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN110380776A CN110380776A (en) 2019-10-25
CN110380776B (en) 2021-05-14

Family

ID=68260301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910777808.3A Active CN110380776B (en) 2019-08-22 2019-08-22 Internet of things system data collection method based on unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN110380776B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111031513B (en) * 2019-12-02 2020-12-15 北京邮电大学 Multi-unmanned-aerial-vehicle-assisted Internet-of-things communication method and system
CN111629383B (en) * 2020-05-09 2021-06-29 清华大学 Channel prediction method and device for pre-deployment of mobile air base station
CN111698717B (en) * 2020-05-26 2021-11-26 清华大学 Network transmission parameter selection method, device, equipment and storage medium
CN112601291A (en) * 2020-12-09 2021-04-02 广州技象科技有限公司 Low-conflict access method, device, system and storage medium based on channel detection
CN113259946A (en) * 2021-01-14 2021-08-13 西安交通大学 Ground-to-air full coverage power control and protocol design method based on centralized array antenna
CN113194446B (en) * 2021-04-21 2022-03-15 北京航空航天大学 Unmanned aerial vehicle auxiliary machine communication method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10667213B2 (en) * 2015-08-05 2020-05-26 Samsung Electronics Co., Ltd. Apparatus and method for power saving for cellular internet of things devices
US20190205736A1 (en) * 2017-12-29 2019-07-04 Intel Corporation Compute optimization mechanism for deep neural networks
CN108337024B (en) * 2018-02-06 2021-02-09 重庆邮电大学 Large-scale MIMO system energy efficiency optimization method based on energy collection

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108353081A (en) * 2015-09-28 2018-07-31 13部门有限公司 Unmanned plane intrusion detection and confrontation
CN106961684A (en) * 2017-03-24 2017-07-18 厦门大学 The cognitive radio null tone two dimension meaning interference method against the enemy learnt based on deeply
CN108243431A (en) * 2017-08-28 2018-07-03 南京邮电大学 The power distribution algorithm of unmanned plane relay system based on efficiency optiaml ciriterion
CN109511134A (en) * 2018-10-23 2019-03-22 郑州航空工业管理学院 Based on the unmanned plane auxiliary radio communication system load shunt method that efficiency is optimal
CN109445462A (en) * 2018-11-30 2019-03-08 电子科技大学 A kind of unmanned plane robust paths planning method under uncertain condition
CN109474980A (en) * 2018-12-14 2019-03-15 北京科技大学 A kind of wireless network resource distribution method based on depth enhancing study
CN109743099A (en) * 2019-01-10 2019-05-10 深圳市简智联信息科技有限公司 Mobile edge calculations system and its resource allocation methods
CN110012547A (en) * 2019-04-12 2019-07-12 电子科技大学 A kind of method of user-association in symbiosis network

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
3rd Generation Partnership Project;3GPP;《3GPP TR 22.891 V1.3.1》;2016;full text *
Enabling Deep Learning on IoT Edge: Approaches and Evaluation;Xuan Qi;《2018 Third ACM/IEEE Symposium on Edge Computing》;20181210;pp. 367-372 *
Energy-Efficient UAV Backscatter;Ying-Chang Liang;《2019 IEEE International Conference on Communications (ICC)》;20190715;full text *
Optimal Power Allocation for Fading Channels in;Ying-Chang Liang;《2008 IEEE International Conference on Communications》;20080530;pp. 3568-3572 *
Spectrum Sharing Protocols based on;Ghaith Hattab;《2018 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN)》;20190114;full text *
Research on multi-transmission-power sensing and identification technology in cognitive networks;Wang Danyang (王丹洋);《China Doctoral Dissertations Full-text Database, Information Science & Technology》;20190115;pp. I136-244 *

Also Published As

Publication number Publication date
CN110380776A (en) 2019-10-25

Similar Documents

Publication Publication Date Title
CN110380776B (en) Internet of things system data collection method based on unmanned aerial vehicle
Hu et al. Reinforcement learning for a cellular internet of UAVs: Protocol design, trajectory control, and resource management
CN109743210B (en) Unmanned aerial vehicle network multi-user access control method based on deep reinforcement learning
CN113162679A (en) DDPG algorithm-based IRS (inter-Range instrumentation System) auxiliary unmanned aerial vehicle communication joint optimization method
CN114025330B (en) Air-ground cooperative self-organizing network data transmission method
CN111526592B (en) Non-cooperative multi-agent power control method used in wireless interference channel
CN112383922A (en) Deep reinforcement learning frequency spectrum sharing method based on prior experience replay
CN115499921A (en) Three-dimensional trajectory design and resource scheduling optimization method for complex unmanned aerial vehicle network
Bayerlein et al. Learning to rest: A Q-learning approach to flying base station trajectory design with landing spots
CN113055078A (en) Effective information age determination method and unmanned aerial vehicle flight trajectory optimization method
Li et al. Online velocity control and data capture of drones for the internet of things: An onboard deep reinforcement learning approach
CN113406965A (en) Unmanned aerial vehicle energy consumption optimization method based on reinforcement learning
CN113255218A (en) Unmanned aerial vehicle autonomous navigation and resource scheduling method of wireless self-powered communication network
CN113382060B (en) Unmanned aerial vehicle track optimization method and system in Internet of things data collection
CN116582871A (en) Unmanned aerial vehicle cluster federal learning model optimization method based on topology optimization
Cui et al. Model-free based automated trajectory optimization for UAVs toward data transmission
CN114980126A (en) Method for realizing unmanned aerial vehicle relay communication system based on depth certainty strategy gradient algorithm
CN115412156B (en) Urban monitoring-oriented satellite energy-carrying Internet of things resource optimal allocation method
CN116009590B (en) Unmanned aerial vehicle network distributed track planning method, system, equipment and medium
CN115765826B (en) Unmanned aerial vehicle network topology reconstruction method for on-demand service
Fu et al. Joint speed and bandwidth optimized strategy of UAV-assisted data collection in post-disaster areas
CN116074974A (en) Multi-unmanned aerial vehicle group channel access control method under layered architecture
CN112383893B (en) Time-sharing-based wireless power transmission method for chargeable sensing network
CN115483964A (en) Air-space-ground integrated Internet of things communication resource joint allocation method
CN115278905A (en) Multi-node communication opportunity determination method for unmanned aerial vehicle network transmission

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant