WO2024036753A1 - Method and apparatus for slice scheduling - Google Patents


Info

Publication number
WO2024036753A1
WO2024036753A1 PCT/CN2022/128343
Authority
WO
WIPO (PCT)
Prior art keywords
slices
network device
lchs
slice
respective slices
Prior art date
Application number
PCT/CN2022/128343
Other languages
French (fr)
Inventor
Thomas Stark
Prasanna MUDLAPPA
Dereje KIFLE
Jiefeng JIN
Dingwen YUAN
Original Assignee
Nokia Shanghai Bell Co., Ltd.
Nokia Solutions And Networks Oy
Priority date
Filing date
Publication date
Application filed by Nokia Shanghai Bell Co., Ltd. and Nokia Solutions And Networks Oy
Publication of WO2024036753A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W72/00 Local resource management
    • H04W72/20 Control channels or signalling for resource management
    • H04W72/21 Control channels or signalling for resource management in the uplink direction of a wireless link, i.e. towards the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/0278 Traffic management, e.g. flow control or congestion control using buffer status reports
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W72/00 Local resource management
    • H04W72/20 Control channels or signalling for resource management
    • H04W72/23 Control channels or signalling for resource management in the downlink direction of a wireless link, i.e. towards a terminal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W72/00 Local resource management
    • H04W72/12 Wireless traffic scheduling
    • H04W72/1263 Mapping of traffic onto schedule, e.g. scheduled allocation or multiplexing of flows
    • H04W72/1268 Mapping of traffic onto schedule, e.g. scheduled allocation or multiplexing of flows of uplink data flows
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W72/00 Local resource management
    • H04W72/50 Allocation or scheduling criteria for wireless resources
    • H04W72/56 Allocation or scheduling criteria for wireless resources based on priority criteria
    • H04W72/563 Allocation or scheduling criteria for wireless resources based on priority criteria of the wireless resources

Definitions

  • Embodiments of the present disclosure generally relate to the field of communication, and in particular, to a method, device, apparatus and a computer readable storage medium for slice scheduling.
  • Network slicing is a technology that may support these services simultaneously with service differentiation and guaranteed performance.
  • Network slicing accommodates several independent logical networks for different business needs and service level agreement (SLA) requirements while running on shared physical infrastructure.
  • example embodiments of the present disclosure provide a solution for slice scheduling.
  • the network device may comprise at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the network device at least to determine buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs; and perform resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs and quotas associated with respective slices.
  • the terminal device may comprise at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the terminal device at least to report, to a network device, buffer statuses for respective logical channel groups, LCGs; and receive, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs.
  • a method implemented at a network device may comprise determining buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs; and performing resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs and quotas associated with respective slices.
  • a method implemented at a terminal device may comprise reporting, to a network device, buffer statuses for respective logical channel groups, LCGs; and receiving, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs and quotas associated with respective slices.
  • an apparatus of a network device may comprise means for determining buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs; and means for performing resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs and quotas associated with respective slices.
  • an apparatus of a terminal device may comprise means for reporting, to a network device, buffer statuses for respective logical channel groups, LCGs; and means for receiving, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs and quotas associated with respective slices.
  • a non-transitory computer readable medium comprising program instructions for causing an apparatus to perform at least the method according to the third or fourth aspect.
  • a computer program comprising instructions, which, when executed by an apparatus, cause the apparatus at least to: determine buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs; and perform resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs and quotas associated with respective slices.
  • a computer program comprising instructions, which, when executed by an apparatus, cause the apparatus at least to: report, to a network device, buffer statuses for respective logical channel groups, LCGs; and receive, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs.
  • a network device comprising determining circuitry configured to determine buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs; and performing circuitry configured to perform resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs and quotas associated with respective slices .
  • a terminal device comprising reporting circuitry configured to report, to a network device, buffer statuses for respective logical channel groups, LCGs; and receiving circuitry configured to receive, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs.
  • Fig. 1 illustrates an example network environment in which example embodiments of the present disclosure may be implemented
  • Fig. 2 illustrates an example relationship among logical channels (LCHs) , logical channel groups (LCGs) and slices in which example embodiments of the present disclosure may be implemented;
  • Fig. 3 illustrates an example use-case which may be used in some embodiments of the present disclosure
  • Fig. 4 illustrates an example flowchart of a method implemented at a network device according to some embodiments of the present disclosure
  • Fig. 5 illustrates example LCHs and corresponding buffer sizes in different slots in some embodiments of the present disclosure
  • Fig. 6 illustrates example BSR report occasions and UL scheduling slots in some embodiments of the present disclosure
  • Fig. 7 illustrates an example overall architecture which may be used in some embodiments of the present disclosure
  • Fig. 8 illustrates another example use-case which may be used in some embodiments of the present disclosure
  • Fig. 9 illustrates an example flowchart of a method implemented at a terminal device according to some embodiments of the present disclosure
  • Fig. 10 illustrates an example simplified block diagram of an apparatus that is suitable for implementing embodiments of the present disclosure.
  • Fig. 11 illustrates an example block diagram of an example computer readable medium in accordance with some embodiments of the present disclosure.
  • references in the present disclosure to “one embodiment, ” “an embodiment, ” “an example embodiment, ” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • the terms “first” and “second” etc. may be used herein to describe various elements, but these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments.
  • the term “and/or” includes any and all combinations of one or more of the listed terms.
  • circuitry may refer to one or more or all of the following:
  • circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware.
  • circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
  • the term “communication network” refers to a network following any suitable communication standards, such as long term evolution (LTE) , LTE-advanced (LTE-A) , wideband code division multiple access (WCDMA) , high-speed packet access (HSPA) , narrow band Internet of things (NB-IoT) and so on.
  • the communications between a terminal device and a network device in the communication network may be performed according to any suitable generation communication protocols, including, but not limited to, the third generation (3G) , the fourth generation (4G) , 4.5G, the fifth generation (5G) communication protocols, and/or beyond.
  • Embodiments of the present disclosure may be applied in various communication systems. Given the rapid development in communications, there will of course also be future type communication technologies and systems with which the present disclosure may be applied.
  • the term “network device” refers to a node in a communication network via which a terminal device accesses the network and receives services therefrom.
  • the network device may refer to a base station (BS) or an access point (AP) , for example, a node B (NodeB or NB) , an evolved NodeB (eNodeB or eNB) , a NR NB (also referred to as a gNB) , a remote radio unit (RRU) , a radio header (RH) , a remote radio head (RRH) , a relay, a low power node such as a femto, a pico, and so forth, depending on the applied terminology and technology.
  • the term “terminal device” refers to any end device that may be capable of wireless communication.
  • a terminal device may also be referred to as a communication device, user equipment (UE) , a subscriber station (SS) , a portable subscriber Station, a mobile station (MS) , or an access terminal (AT) .
  • the terminal device may include, but not limited to, a mobile phone, a cellular phone, a smart phone, voice over IP (VoIP) phones, wireless local loop phones, a tablet, a wearable terminal device, a personal digital assistant (PDA) , portable computers, desktop computer, image capture terminal devices such as digital cameras, gaming terminal devices, music storage and playback appliances, vehicle-mounted wireless terminal devices, wireless endpoints, mobile stations, laptop-embedded equipment (LEE) , laptop-mounted equipment (LME) , USB dongles, smart devices, wireless customer-premises equipment (CPE) , an Internet of things (IoT) device, a watch or other wearable, a head-mounted display (HMD) , a vehicle, a drone, a medical device and applications (e.g., remote surgery) , an industrial device and applications (e.g., a robot and/or other wireless devices operating in an industrial and/or an automated processing chain contexts) , a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like.
  • network slice refers to a logical network that provides specific network capabilities and network characteristics. Operators may divide a network into multiple virtual end-to-end networks on a unified infrastructure. Each slice is logically isolated in terms of the radio access network, the bearer network, the core network, etc., and includes its own unique delay, throughput, security and bandwidth features to meet the requirements of a wide variety of applications.
  • logical channel refers to channels divided according to functions, which may be used to convert data formats between transport channels and bearers, etc.
  • data radio bearer or “DRB” refers to a radio bearer only used for user plane IP packets between the air interface of the UE and the base station. It may be understood that logical channels or channels used herein may be referred to represent DRBs. In the following description, the terms “logical channel”, “channel”, “data radio bearer” and “bearer” may be used interchangeably.
  • Network slicing may support these services simultaneously with service differentiation and guaranteed performance and may accommodate several independent logical networks for different business needs and SLA requirements while running on shared physical infrastructure.
  • RAN slicing will allow new business models to evolve.
  • a mobile operator may be able to:
  • (i) support multiple slices/public land mobile networks (PLMNs) with an agreed share of RAN resources indicated by the SLA; and
  • (ii) customize the resources for given traffic characteristics, services and SLAs.
  • buffer status reports (BSRs) sent by the UE to the serving gNB provide details on the amount of data waiting for transmission in the UL buffers at the UE.
  • the UE does not send any slice-specific details (i.e., that the resources have to be allocated from a specific slice quota) while requesting uplink grants.
  • the allocation of grants by the gNB is performed at a per-UE level.
  • the UE selects bearer(s) for data transmission according to priorities and other parameters configured on the UE by the gNB.
  • the UE uses the standardized logical channel prioritization (LCP) procedure while allocating resources to transmit data in uplink.
  • when the UE requests uplink grants from the gNB, it does not specify for which particular logical channel it is requesting (the request is at LCG level); and when the gNB grants resources, it does so at a “UE level” and does not enforce that the allocated resources be used for a particular logical channel belonging to the slice from whose slice quota the resources were granted.
  • the uplink multiplexing is done according to a set of well-defined rules in the UE (as per the logical channel prioritization procedure). This makes it difficult to manage and enforce slice-specific quotas for their respective logical channels in uplink transmissions.
  • slice scheduling in uplink transmission.
  • a network device determines buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs. Based on the buffer statuses for respective LCHs and quotas associated with respective slices, the network device performs resource allocation from respective slices to the terminal devices. As such, in embodiments of the present disclosure, the network device may determine the amount of resources required at the slice level. Therefore, RAN slice resource quotas in the uplink direction may be effectively controlled and UL grants from respective slices may be appropriately allocated, thereby improving scheduling efficiency and resource utilization.
  • Fig. 1 illustrates an example network environment 100 in which example embodiments of the present disclosure may be implemented.
  • the environment 100, which may be a part of a communication network, comprises terminal devices and network devices.
  • the communication network 100 may comprise a network device 110 (hereinafter may also be referred to as a gNB 110) .
  • the communication network 100 may further comprise a terminal device 120.
  • the network device 110 may manage a cell.
  • the network device 110 and the terminal device 120 may communicate data and control information to each other in the coverage of the cell.
  • a link from the network device 110 to the terminal device 120 is referred to as a downlink (DL), and a link from the terminal device 120 to the network device 110 is referred to as an uplink (UL).
  • the system 100 may include any suitable number of network devices and terminal devices adapted for implementing embodiments of the present disclosure. Although not shown, it would be appreciated that one or more terminal devices may be located in the environment 100.
  • Communications in the network environment 100 may be implemented according to any proper communication protocol (s) , comprising, but not limited to, the third generation (3G) , the fourth generation (4G) , the fifth generation (5G) or beyond, wireless local network communication protocols such as Institute for Electrical and Electronics Engineers (IEEE) 802.11 and the like, and/or any other protocols currently known or to be developed in the future.
  • the communication may utilize any proper wireless communication technology, comprising but not limited to: Multiple-Input Multiple-Output (MIMO) , Orthogonal Frequency Division Multiplexing (OFDM) , time division multiplexing (TDM) , frequency division multiplexing (FDM) , code division multiplexing (CDM) , Bluetooth, ZigBee, machine type communication (MTC) , enhanced mobile broadband (eMBB) , massive machine type communication (mMTC) , ultra-reliable low latency communication (URLLC) , Carrier Aggregation (CA) , Dual Connectivity (DC) , and New Radio Unlicensed (NR-U) technologies.
  • Fig. 2 illustrates an example relationship among LCHs, LCGs and slices in which example embodiments of the present disclosure may be implemented.
  • a plurality of LCHs, LCH-1, LCH-2 and LCH-3, may form an LCG.
  • Different LCHs may have their respective quotas in different slices SLICE-1, SLICE-2, SLICE-3, but that indication will not be sent to gNB while requesting grants.
  • the reported buffer status is at LCG level and not at per-LCH level. It is to be understood that the number of LCHs, LCGs and slices is only for the purpose of illustration without suggesting any limitations.
  • the RAN resources to be controlled may be physical resource blocks (PRBs) . These resources may be divided into slices as per their percentage of quotas from the available maximum resources, in accordance with SLAs. Say, for an FR1 TDD cell, if there are 3 slices (for example, slice-1, slice-2 and slice-3) and their quotas are 33%, 20% and 47%, then they may get 90, 54 and 128 PRBs respectively from the total 273 PRBs in a slot.
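The quota split described above can be sketched in a few lines of Python (an illustrative helper, not part of the disclosure; the name `prb_shares` and the flooring behavior are assumptions):

```python
# Split a cell's total PRBs across slices by their SLA quota
# percentages, flooring each share to a whole number of PRBs.
def prb_shares(total_prbs, quotas):
    """quotas: fraction per slice, e.g. [0.33, 0.20, 0.47]."""
    return [int(total_prbs * q) for q in quotas]

# FR1 TDD cell example from the text: 273 PRBs, quotas 33%/20%/47%.
print(prb_shares(273, [0.33, 0.20, 0.47]))  # → [90, 54, 128]
```

This reproduces the 90/54/128 split from the example; an operator implementation would additionally decide where any leftover PRBs from flooring go.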
  • the objective of slice-volume control is to ensure that the UEs, or the bearers of the UEs, with their due quota in specific slice(s) make use of that quota with priority. As resources are not expected to go unused, any resources not used by UEs belonging to a slice are allocated to UEs from other slices in every slot as per SLAs.
  • Fig. 3 illustrates an example use-case which may be used in some embodiments of the present disclosure.
  • Different logical channels belonging to a particular LCG may have resource quotas on different slices.
  • since the BSR is at LCG level and there is no indication of the specific logical channel(s) requesting resources, there is no direct way to map resource requests to slices. So, indirect means as disclosed in the present disclosure need to be used.
  • Fig. 4 illustrates an example flowchart of a method 400 implemented at a network device according to some embodiments of the present disclosure.
  • the method 400 will be described from the perspective of the network device 110 with reference to Fig. 1 and Fig. 3. It is to be understood that method 400 may further include additional blocks not shown and/or omit some shown blocks, and the scope of the present disclosure is not limited in this regard.
  • the network device 110 may determine buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs.
  • LCHs in an LCG are associated with two or more slices as illustrated in Fig. 3.
  • the determining buffer statuses for respective slices may comprise: obtaining the buffer statuses for respective LCHs associated with respective slices using a buffer status prediction model based on buffer statuses reported by terminal devices for LCGs.
  • the obtaining the buffer statuses for the respective LCHs using a buffer status prediction model may comprise: estimating buffer statuses for the respective LCHs within an estimation window based on historical buffer status reports from terminal devices within a statistics window before the estimation window using the buffer status prediction model.
  • a machine learning (ML) model is used as the buffer status prediction model to estimate the data waiting for uplink grants on each LCH at UE.
  • This gives an indication of how many resources to be granted (considering the spectral efficiency) to specific LCHs from their respective slices in accordance with the slice-quotas.
  • UE does not report per-LCH buffer status but only at LCG level.
  • this ML based buffer estimation provides an indication of the amount of resources to be allocated from the slice (s) .
  • the gNB may only deduce the buffer status of an LCH based on what is received, and the received bytes are the outgoing data of the UE buffer.
  • the assumption here is that the pattern of the outgoing data of the UE buffer is steady over a certain period.
  • R_total(n) and R_LC(i)(n) denote the total received bytes for an LCG and the received bytes for logical channel i, respectively, in the n-th statistic window:
  • n represents the sequence number of the BSR, and also the n-th statistic window;
  • t represents the statistic time of the statistic window;
  • t_0(n) is the starting time point (in slots) of the n-th statistic window, which may be the time point when the BSR is received;
  • t_0(n+1) is the starting time point (in slots) of the (n+1)-th statistic window.
  • a logical channel may occupy a constant or prioritized part and a flexible part if there are still spare bytes in the TB.
  • logical channel LC0 occupies constant parts and flexible parts in slot 0 and slot 1.
  • the sizes of the constant parts in slot 0 and slot 1 are the same.
  • the sizes of the flexible parts in slot 0 and slot 1 are different, and the sizes of flexible part depend on transport block size (TBS) .
  • the model of the received data amount for a respective logical channel may be defined as below: R_LC(i) = k_LC(i) · R_total + b_LC(i), where
  • k_LC(i) represents the proportion of a specific logical channel’s data regarding all data of its logical channel group;
  • R_total represents the total received bytes for the LCG;
  • b_LC(i) represents the bias introduced for the guaranteed or prioritized bit rate of certain logical channels.
  • the initial value for k_LC(i) may be 1/N and the initial value for b_LC(i) may be 0, wherein N represents the number of the LCHs in the LCG.
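A minimal sketch of the linear per-LCH model described above, with illustrative names (the disclosure does not prescribe an implementation; `estimate_lch_bytes` is an assumed helper):

```python
# Linear model: bytes attributed to LCH i in a statistic window,
# given the LCG total. k_lc and b_lc are the model coefficients.
def estimate_lch_bytes(k_lc, b_lc, r_total):
    return k_lc * r_total + b_lc

# With N = 4 LCHs in the LCG, k may start at 1/N = 0.25 and b at 0,
# i.e. an initially uniform split of the LCG's bytes.
print(estimate_lch_bytes(0.25, 0.0, 900))  # → 225.0
```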
  • the buffer status prediction model has a first coefficient and a second coefficient, wherein the first coefficient denotes a proportion of data of the LCH to data of the LCG, and the second coefficient denotes a bias for a guaranteed or prioritized bit rate of the LCH.
  • the first coefficient and the second coefficient of the buffer status prediction model are determined by: obtaining the first coefficient and the second coefficient based on historical information of de-multiplexing results of packets received by the network device by using a machine learning algorithm.
  • the history of the MAC entity’s de-multiplexing results of received packets may be used to estimate the two coefficients k_LC(i) and b_LC(i), wherein k_LC(i) may be the first coefficient and b_LC(i) may be the second coefficient.
  • a cost function may be set up as below: J = Σ_n [R_LC(i)(n) − (k_LC(i)(N) · R_total(n) + b_LC(i)(N))]², summed over the statistic windows n up to the N-th statistic window, where:
  • k_LC(i)(N) represents the proportion of a specific logical channel’s data regarding all data of its logical channel group in the N-th statistic window;
  • b_LC(i)(N) represents the bias introduced for the guaranteed or prioritized bit rate of certain logical channels in the N-th statistic window.
  • n represents the n-th statistic window
  • i represents the sequence number of LCH
  • N represents the N-th statistic window
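Assuming the cost function is an ordinary least-squares fit of k and b over the statistic windows (one plausible reading of the description, not the disclosure's exact algorithm), the coefficients have a closed-form estimate:

```python
# Least-squares fit of the per-LCH coefficients: minimize
# sum_n (R_lc(n) - (k * R_total(n) + b))^2 over the history windows.
def fit_k_b(r_totals, r_lcs):
    n = len(r_totals)
    mean_x = sum(r_totals) / n
    mean_y = sum(r_lcs) / n
    sxx = sum((x - mean_x) ** 2 for x in r_totals)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(r_totals, r_lcs))
    k = sxy / sxx          # slope: share of the LCG's bytes
    b = mean_y - k * mean_x  # bias: prioritized/guaranteed-rate offset
    return k, b

# Windows where LCH i consistently carried half the LCG bytes plus 10:
k, b = fit_k_b([100, 200, 400], [60, 110, 210])
print(round(k, 6), round(b, 6))  # → 0.5 10.0
```

A gNB could refresh this fit after each statistic window, giving the window-indexed coefficients k_LC(i)(N) and b_LC(i)(N) of the text.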
  • the network may estimate the amount of data of respective logical channels with the granted transmission block size and the buffer status.
  • Fig. 6 illustrates an example of a BSR report occasion and UL scheduling slot which may be used in example embodiments of the present disclosure, to describe examples of the buffer status estimation.
  • Fig. 6 there are multiple UL scheduling slots and multiple BSR report occasions, and the BSR report occasions may be periodic.
  • the statistic window may be between 2 BSR events of the historical data sliding window.
  • the length of the statistic window for received bytes can be a fixed averaging window size or the time interval between 2 BSR events, which may be triggered in the following situations:
  • the estimating function of the data size that will be received for the specific logical channel in the next granted slot(s) may be: R̂_LC(i) = k_LC(i) · TBS_granted + b_LC(i)
  • TBS_granted is the next granted transport block size, which may need to satisfy the following condition: TBS_granted ≤ B(n) − R_total(n)
  • B(n) is the total buffer size from the n-th BSR in the n-th statistic window;
  • R_total(n) is the total received bytes from the current UE since the n-th BSR report.
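A hedged sketch of the grant cap implied by the condition that the next grant not exceed the data still waiting at the UE, i.e. TBS_granted ≤ B(n) − R_total(n); the function and variable names are assumptions:

```python
# Cap the next grant so it never exceeds the remaining reported buffer:
# B(n) bytes were reported in the n-th BSR, r_total_n bytes have been
# received from the UE since that report.
def cap_grant(tbs_requested, b_n, r_total_n):
    remaining = max(b_n - r_total_n, 0)
    return min(tbs_requested, remaining)

print(cap_grant(500, 1000, 700))  # → 300
```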
  • the data to be granted from the buffer reported by the BSR (buffer_reported) for LCG i in the n-th statistic window and the buffer that has already been scheduled (buffer_scheduled) in the n-th statistic window may be calculated as below:
  • TBS_to_be_granted ≤ Σ_{all LCGs} buffer_reported_LCG(i)(n) − buffer_scheduled(n) (12)
  • the network may estimate the buffer status for respective logical channel.
  • the network device 110 may perform resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs associated with respective slices.
  • the performing resource allocation from respective slices may comprise: determining respective resource requirements on respective slices based on buffer statuses for the respective LCHs associated with respective slices and spectral efficiency, wherein the resource allocation is performed based on the determined required resources from respective slices.
  • per-LCH buffer statuses at the gNB are estimated by the ML model. How many resources are to be allocated from each slice (an aggregate of the resources required by all bearers across different LCGs but belonging to that slice) may be decided based on the per-LCH buffer statuses. The overall buffer status from all LCGs may be taken into consideration, and then the resources are allocated from the respective slice (limited by the slice quota) .
  • the amount of resources required (and hence to be allocated) from slice i is an aggregate of the estimated resources of each LCH whose corresponding bearer belongs to that slice:
  • resource_required_from_slice(i) = Σ_{b ∈ bearers(i)} (buffer_estimate(b) / bearer_SE(b)) (14)
  • bearers(i) represents the set of all bearers of slice i;
  • bearer_SE(b) is the spectral efficiency of the UE of bearer b;
  • buffer_estimate(b) represents the estimated buffered data of bearer b (i.e., LCH b) .
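Equation (14) can be sketched as follows (illustrative names; units assumed to be bytes for the buffer estimates and bytes-per-PRB for the spectral efficiency):

```python
# Aggregate a slice's PRB requirement from the estimated buffers of its
# bearers: each bearer needs buffer_estimate / bearer_SE PRBs.
def resource_required_from_slice(bearers):
    """bearers: list of (buffer_estimate_bytes, spectral_eff_bytes_per_prb)."""
    return sum(buf / se for buf, se in bearers)

# Two bearers in the slice: 600 B at 3 B/PRB and 200 B at 2 B/PRB.
print(resource_required_from_slice([(600, 3), (200, 2)]))  # → 300.0
```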
  • the allocated resource status may be monitored and the monitored information may be used for subsequent resource allocation per slice.
  • the performing resource allocation from respective slices may comprise: determining the required resources from respective slices based on a minimum value among (i) the respective resource requirements on respective slices, (ii) the slice quotas for respective slices, and (iii) the available resources from respective slices limited by the overall reported buffer condition, modified by correlation weights for respective slices.
  • the resource allocation is performed based on the determined required resources from respective slices.
  • the correlation weights may be determined for example, based on information on actual resources granted by the network device and resources allocated by the terminal device for the LCHs respectively associated with respective slices.
  • the target share may be calculated.
  • One such formulation is as below:
  • S is the set of all slices
  • K is the set of all LCGs across all slices
  • lcg_SE (g) is the spectral efficiency of the UE of LCG g.
  • the third term in the formulation is to limit allocation as per overall reported buffer.
  • the corr_weight (i) is the correlation weight.
  • the number of resources (PRBs) determined as above is allocated from the slice i and provided as UL grants to UE (s) or their DRBs.
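The minimum-value rule described above can be sketched as follows. Since the patent's own formulation appears only in a figure that is not reproduced here, the exact placement of the correlation weight is an assumption (here it scales the minimum of the three terms); all names are illustrative:

```python
def prbs_allocated_from_slice(required, slice_quota, overall_buffer_prbs, corr_weight):
    """Sketch of the per-slice allocation rule (names are hypothetical).

    required: resource requirement of the slice, e.g. from equation (14)
    slice_quota: PRB quota configured for the slice
    overall_buffer_prbs: available resources limited by the overall reported
        buffer condition (the 'third term' mentioned above)
    corr_weight: correlation weight of the slice
    """
    # Allocation is bounded by requirement, slice quota, and reported buffer,
    # then modified by the slice's correlation weight
    return corr_weight * min(required, slice_quota, overall_buffer_prbs)
```

For example, a slice requiring 24 PRBs with a quota of 30 but an overall-buffer limit of 20 PRBs and a correlation weight of 0.5 would be allocated 10 PRBs.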
  • the correlation weights may be determined based on a credit balance or debit balance indicator related to respective slices.
  • the credit balance may indicate the amount by which the resources already allocated to an LCH by the terminal device exceed the actual resources granted by the network device for the LCH.
  • the debit balance may indicate the amount by which the resources already allocated to an LCH by the terminal device fall short of the actual resources granted by the network device for the LCH.
  • when an LCH is allocated more PRBs than what is expected by the gNB according to the slice-aware scheduling, it is accounted as a “credit balance” under that LCH.
  • the credit balance is equal to the difference between the allocated PRBs and the expected PRBs.
  • when a logical channel (LCH) is allocated fewer PRBs than what is expected by the gNB according to the slice-aware scheduling, it is accounted as a “debit balance” under that LCH.
  • the debit balance is equal to the difference between the expected PRBs and the allocated PRBs.
  • the credit balance or debit balance indicator for respective LCHs associated with respective slices is determined for the LCHs respectively associated with respective slices based on the information on actual resources granted by the network device and resources allocated by the terminal device.
  • ∑ b∈bearers (i) C (b, t) and ∑ b∈bearers (i) D (b, t) represent the credit and debit balance of slice i, respectively.
  • C (b, t) and D (b, t) are the credit and debit balance of bearer b over a control period t.
  • the correlation weight corr_weight (i) of a slice i may be determined by:
  • the correlation weight of slices may be derived based on the above equation with ∑ b∈bearers (i) C (b, t) and ∑ b∈bearers (i) D (b, t) .
  • Table-1 An example of correlation weight of slices
  • the objective is that if a slice has more debits than credits, then the resources from that slice are being used by bearers belonging to another slice, so the allocation of resources from that slice needs to be reduced. Similarly, when the credit balance is more, the allocation of resources from that slice needs to be increased, until the maximum quota in that cell is reached.
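The correlation-weight equation itself appears only in a figure that is not reproduced here, so the following is merely one plausible rule consistent with the stated objective (scale down when debits dominate, scale up when credits dominate); the function name, the damping factor `alpha`, and the exact form are all assumptions:

```python
def corr_weight(credits, debits, alpha=0.5):
    """Hypothetical correlation weight of a slice over a control period.

    credits: sum of C(b, t) over all bearers of the slice
    debits:  sum of D(b, t) over all bearers of the slice
    alpha:   assumed damping factor controlling adaptation speed
    """
    total = credits + debits
    if total == 0:
        return 1.0  # balanced: leave the allocation unchanged
    # balance is +1 when all credits, -1 when all debits, 0 when even
    balance = (credits - debits) / total
    return max(0.0, 1.0 + alpha * balance)
```

With this rule a slice that is all debits (its grants were consumed by other slices' bearers) gets a weight of 0.5, while a slice that is all credits gets 1.5, increasing its allocation toward the cell's maximum quota.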
  • the resource allocation may be performed further based on scheduling weights for the respective LCHs, and wherein the scheduling weights indicate scheduling priorities of respective LCHs associated with respective slices.
  • the resource allocation to the respective LCHs may be further performed in proportion to the scheduling weights.
  • scheduling_weight (b) of bearer b is determined by the layer-2 packet scheduler taking into consideration the 5G QoS Identifier (5QI) of the bearer and other aspects.
  • the scheduling weights are modified based on slice weights for respective slices or respective logical channels associated with respective slices, and wherein the slice weights are determined based on the determined resource requirements on respective slices and the current usage of resources from respective slices.
  • slice specific weight slice_weight (i) of a slice i is derived in such a way that it reflects the promised fractional resource share of a slice from the total radio air interface resource (PRBs) over a sliding monitoring time window.
  • slice_weight (i) is used to create relative priorities among the slices via biasing scheduler decision and its value is adapted as per the actual resource consumption of the slice such that agreed slice specific quotas are maintained.
  • a simple example formulation for the slice_weight of slice ‘i’ may be given as follows:
  • target_share (i) is the target share of slice i and current_resource_usage (i) is reflective of the current usage (non-zero) of resources from slice i.
  • This formulation is given just for illustrative purposes without suggesting any limitation to the protection scope of the present disclosure. In other embodiments, more sophisticated formulations for the “slice_weight” computation may be applied to achieve faster and more accurate convergence to the target share.
  • slice_weight (i) may be applied on top of the actual scheduling weight of each bearer associated to the slice.
  • the scheduler will use the modified scheduling weight (i.e., slice-aware scheduling weight) to decide on the scheduling priorities between requested LCHs:
  • modified_scheduling_weight (b) = slice_weight (i) *scheduling_weight (b)     (18)
  • the scheduling weight for respective LCHs associated with a slice is further increased when resource consumption of the slice is below promised target share; the scheduling weight for respective LCHs associated with a slice is further reduced when resource consumption of the slice is above promised target share; or the scheduling weight for respective LCHs associated with a slice is further increased to a higher weight to speed-up convergence to the target share if the difference between the determined required resources and current usage of resources from the respective slice is larger than a certain threshold.
  • the generic slice specific weight definition and adaptation method follows the following rule:
  • the gNB allocates UL resources (the expected PRBs) to each bearer b in proportion to modified_scheduling_weight (b) .
  • the slice weight may be set to a higher value when the difference between the target share and the average resource consumption is larger.
  • slice weight is lowered and can eventually even be set to 0 when the resource consumption reaches the predefined maximum share (i.e. no scheduling of services from that slice until the average resource consumption drops below the maximum share) .
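The adaptation rules above can be sketched as follows. The slice_weight formulation (target share divided by current usage, boosted when the slice is far below target) is an assumption consistent with the description, since the document's own formula appears only in a figure; equation (18) is taken directly from the text. All names are illustrative:

```python
def slice_weight(target_share, current_usage, boost=2.0, threshold=0.2):
    """Hypothetical slice-weight adaptation.

    target_share:  promised fractional resource share of the slice
    current_usage: current (non-zero) fractional resource usage of the slice
    boost:         assumed multiplier to speed up convergence
    threshold:     assumed gap beyond which the boost is applied
    """
    w = target_share / current_usage      # > 1 below target, < 1 above target
    if target_share - current_usage > threshold:
        w *= boost                        # far below target: converge faster
    return w


def modified_scheduling_weight(sched_weight, sl_weight):
    """Equation (18): slice-aware scheduling weight of a bearer."""
    return sl_weight * sched_weight
```

A slice at its target keeps weight 1.0; a slice using 0.25 of resources against a 0.5 target gets weight 2.0, boosted to 4.0 because the gap exceeds the threshold.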
  • the gNB makes a best effort to ensure that the slice resources at gNB are allocated from the respective slice quota when requested by LCHs.
  • the terminal device receiving the resource grant may use its own discretion (as per the logical channel prioritization procedure) to assign those grants to any of its active logical channels across different LCGs.
  • The solution proposed herein may help in proper allocation of UL grants from respective slices and then in continuous monitoring of the allocated grants.
  • gNB will modify the scheduling weights to enable the UEs (DRBs) to make use of their slice quota on priority.
  • an operation and maintenance (OAM) system may download the slice related configurations and SLAs from the “tenant slicing portal” and configure them on the gNB.
  • the slice related configurations comprise: slice quota (a certain percentage of the cell’s total PRBs) ; SLAs (mode of sharing resources across slices, e.g. dedicated, shared, etc. ) ; slice priorities; and any special considerations.
  • Fig. 7 illustrates an example overall architecture showing network management system/tenant slicing portal system, RAN intelligent controller (RIC) , gNB and UE.
  • the UE may request uplink grants at LCG level and does not consider any slice quotas while assigning grants.
  • the packet scheduler is unable to send grants from respective slices when the UE sends resource requests for multiple channels in the same LCG.
  • an efficient method of “slice-volume control” may comprise estimating buffer statuses of one or more LCHs.
  • the network device 110 may build a look-up table, updated regularly, to enable “slice-volume control” (i.e., the slice scheduling) in uplink transmission.
  • Table 2 illustrates an example look-up table.
  • Table-2 An example of Look-Up table on gNB
  • the table may have one or more of the following entries:
  • each slice gets a certain percentage of the cell’s total PRBs (RAN resources) ;
  • the gNB may monitor and keep track of uplink grant requests made by LCHs/LCGs, resources granted by gNB and received data on those channels.
  • the amount of resources (PRBs) used by the UE to send data in uplink may be determined based on the achieved spectral efficiency (SE) . While granting resources, the gNB knows from which slice (s) the resources were allocated, and on receiving data, the gNB knows on which channels the UE has sent data. If there is no correlation between the grants provided (from a specific slice to a particular LCH) and the uplink data received, then it means that the UE has assigned the grants to other LCHs from the same LCG or to a different LCH in a different LCG. Accordingly, the ‘Credits & Debits’ balance metrics are updated in Table-1. UEs continue to report buffer status at LCG level in BSRs as is done now. Total UL grants allocated are still limited by the buffer status reported by UEs.
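The credit/debit bookkeeping described above can be sketched per LCH and control period as follows; the update rule is inferred from the definitions of credit and debit balance, and all names are hypothetical:

```python
def update_balance(expected_prbs, used_prbs, credit, debit):
    """Update the 'Credits & Debits' metrics for one LCH over a control period.

    expected_prbs: PRBs the gNB expected the LCH to use per slice-aware scheduling
    used_prbs:     PRBs the UE actually used for the LCH (inferred from received
                   data and the achieved spectral efficiency)
    credit, debit: running balances for the LCH
    """
    if used_prbs > expected_prbs:
        credit += used_prbs - expected_prbs   # LCH got more than expected
    else:
        debit += expected_prbs - used_prbs    # LCH got less than expected
    return credit, debit
```

For instance, an LCH expected to use 10 PRBs that actually used 14 accrues a credit of 4; one that used only 7 accrues a debit of 3.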
  • Fig. 8 illustrates another example of use-case which may be used in example embodiments of the present disclosure.
  • There will be just one instance of a slice for a specific service type (say, uRLLC, eMBB, mMTC etc. ) and all logical channels making use of a particular service will get resources from the same slice.
  • All logical channels multiplexed in a LCG will have their resource quotas from same slice. In this case, controlling the allocation of resources is simple but modifying the scheduling weights needs to be handled as described in the present disclosure.
  • the network device 110 may obtain buffer statuses reported by terminal devices for respective logical channel groups, LCGs, wherein all logical channels, LCHs, in a LCG are associated with a single slice, and a LCG is associated with a slice. Then the network device 110 may perform resource allocation from respective slices to the terminal devices, based on the buffer statuses for respective LCGs.
  • the performing resource allocation from respective slices comprises: determining respective resource requirements on respective slices based on the buffer statuses for respective LCGs associated with respective slices and slice quotas for respective slices, wherein the resource allocation is performed based on the determined required resources for respective slices.
  • since all bearers in a LCG have their respective quota from the same slice (say, service-specific slices like uRLLC, eMTC, etc. ) , the required number of resources (limited by the slice quota) is allocated from the corresponding slice i.
  • Resource allocation to LCG g is:
  • resource_allocation_to_lcg (g) = buffer_status_lcg (g) / (∑ k∈S buffer_status_lcg (k)) *slice_quota (i)     (19)
  • S is the set of all LCGs having quota in slice i.
  • the slice i quota is apportioned among all LCGs having a quota in that slice i.
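Equation (19) can be sketched in code as follows; names are illustrative:

```python
def resource_allocation_to_lcg(g, buffer_status, slice_quota):
    """Apportion slice i's quota among LCGs, per equation (19).

    g:             the LCG to allocate resources for
    buffer_status: dict LCG -> reported buffer status, for all LCGs in set S
                   (all LCGs having quota in slice i)
    slice_quota:   PRB quota of slice i
    """
    # Each LCG receives a share of the slice quota proportional to its
    # reported buffer status relative to all LCGs of the slice
    total = sum(buffer_status.values())
    return buffer_status[g] / total * slice_quota


# g1 reported 3x the buffer of g2, so it gets 3/4 of the 40-PRB slice quota
print(resource_allocation_to_lcg("g1", {"g1": 300, "g2": 100}, 40))
```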
  • the resource allocation is performed further based on scheduling weights for respective LCGs associated with respective slices, wherein the scheduling weights indicate scheduling priorities of respective LCGs associated with respective slices.
  • the resource allocation to respective LCHs is further performed in proportion to the scheduling weights.
  • the scheduling weights are modified based on slice weights for respective slices, and wherein the slice weights are determined based on the determined resource requirements on respective slices and the current usage of resources from respective slices.
  • the scheduling weight for respective LCGs associated with a slice is further increased when resource consumption of the slice is below promised target share; the scheduling weight for respective LCGs associated with a slice is further reduced when resource consumption of the slice is above promised target share; or the scheduling weight for respective LCGs associated with a slice is further increased if the difference between the determined required resources for respective slices and current usage of resources from respective slices is larger than a predetermined threshold.
  • Fig. 9 illustrates an example flowchart of a method 900 implemented at a terminal device 120 according to some embodiments of the present disclosure.
  • the method 900 will be described from the perspective of the terminal device 120 with reference to Fig. 1. It is to be understood that method 900 may further include additional blocks not shown and/or omit some shown blocks, and the scope of the present disclosure is not limited in this regard.
  • the terminal device 120 may report, to a network device, buffer statuses for respective logical channel groups, LCGs.
  • the terminal device 120 may receive, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs.
  • LCHs in a LCG are associated with two or more slices, and wherein the resource allocation from respective slices is determined based on buffer statuses for respective LCHs associated with respective slices obtained based on the buffer statuses reported for LCGs.
  • Fig. 10 is a simplified block diagram of a device 1000 that is suitable for implementing embodiments of the present disclosure.
  • the device 1000 may be provided to implement the communication device, for example the network device 110, the terminal device 120 as shown in Fig. 1.
  • the device 1000 includes one or more processors 1010, one or more memories 1020 coupled to the processor 1010, and one or more communication modules 1040 coupled to the processor 1010.
  • the communication module 1040 is for bidirectional communications.
  • the communication module 1040 has at least one antenna to facilitate communication.
  • the communication interface may represent any interface that is necessary for communication with other network elements.
  • the processor 1010 may be of any type suitable to the local technical network and may include one or more of the following: general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on multicore processor architecture, as non-limiting examples.
  • the device 1000 may have multiple processors, such as an application specific integrated circuit chip that is slaved in time to a clock which synchronizes the main processor.
  • the memory 1020 may include one or more non-volatile memories and one or more volatile memories.
  • the non-volatile memories include, but are not limited to, a Read Only Memory (ROM) 1024, an electrically programmable read only memory (EPROM) , a flash memory, a hard disk, a compact disc (CD) , a digital video disk (DVD) , and other magnetic storage and/or optical storage.
  • a computer program 1030 includes computer executable instructions that are executed by the associated processor 1010.
  • the program 1030 may be stored in the ROM 1024.
  • the processor 1010 may perform any suitable actions and processing by loading the program 1030 into the RAM 1022.
  • the embodiments of the present disclosure may be implemented by means of the program 1030 so that the device 1000 may perform any process of the disclosure as discussed with reference to Figs. 2 to 9.
  • the embodiments of the present disclosure may also be implemented by hardware or by a combination of software and hardware.
  • the program 1030 may be tangibly contained in a computer readable medium which may be included in the device 1000 (such as in the memory 1020) or other storage devices that are accessible by the device 1000.
  • the device 1000 may load the program 1030 from the computer readable medium to the RAM 1022 for execution.
  • the computer readable medium may include any types of tangible non-volatile storage, such as ROM, EPROM, a flash memory, a hard disk, CD, DVD, and the like.
  • Fig. 11 shows an example of the computer readable medium 1100 in form of CD or DVD.
  • the computer readable medium has the program 1030 stored thereon.
  • various embodiments of the present disclosure may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of embodiments of the present disclosure are illustrated and described as block diagrams, flowcharts, or using some other pictorial representations, it is to be understood that the block, apparatus, system, technique or method described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
  • the present disclosure also provides at least one computer program product tangibly stored on a non-transitory computer readable storage medium.
  • the computer program product includes computer-executable instructions, such as those included in program modules, being executed in a device on a target real or virtual processor, to carry out the method 400 and 900 as described above with reference to Figs. 2-9.
  • program modules include routines, programs, libraries, objects, classes, components, data structures, or the like that perform particular tasks or implement particular abstract data types.
  • the functionality of the program modules may be combined or split between program modules as desired in various embodiments.
  • Machine-executable instructions for program modules may be executed within a local or distributed device. In a distributed device, program modules may be located in both local and remote storage media.
  • Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • the program code may execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
  • the computer program codes or related data may be carried by any suitable carrier to enable the device, apparatus or processor to perform various processes and operations as described above.
  • Examples of the carrier include a signal, computer readable medium, and the like.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the computer readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM) , a read-only memory (ROM) , an erasable programmable read-only memory (EPROM or Flash memory) , an optical fiber, a portable compact disc read-only memory (CD-ROM) , an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present disclosure disclose a method and apparatus for slice scheduling. A network device determines buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs. Based on the buffer statuses for the respective LCHs and quotas associated with respective slices, the network device performs resource allocation from respective slices to the terminal devices. In this way, the network device may allocate resources to LCHs as per their slice resource quotas on a best-effort basis, including the complicated case where the LCHs are from the same LCG but have quotas in different slices. Therefore, RAN slice resource quotas in the uplink direction are effectively controlled and UL grants from respective slices are properly allocated.

Description

METHOD AND APPARATUS FOR SLICE SCHEDULING FIELD
Embodiments of the present disclosure generally relate to the field of communication, and in particular, to a method, device, apparatus and a computer readable storage medium for slice scheduling.
BACKGROUND
With the development of communication technology, many of the services, e.g., enhanced mobile broadband, eMBB, massive machine type communications, mMTC, ultra-reliable low latency communications, uRLLC, etc. are quite demanding on high bandwidth, low latency and ultra-reliability. Network slicing is a technology that may support these services simultaneously with service differentiation and guaranteed performance. Network slicing accommodates several independent logical networks for different business needs and service level agreement (SLA) requirements while running on shared physical infrastructure.
However, slice-aware scheduling in uplink is quite complicated, and there is a need to enhance efficient control of radio access network (RAN) slice scheduling.
SUMMARY
In general, example embodiments of the present disclosure provide a solution for slice scheduling.
In a first aspect, there is provided a network device. The network device may comprise at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the network device at least to determine buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs; and perform resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs and quotas associated with respective slices.
In a second aspect, there is provided a terminal device. The terminal device may comprise at least one processor; and at least one memory storing instructions that, when  executed by the at least one processor, cause the terminal device at least to report, to a network device, buffer statuses for respective logical channel groups, LCGs; and receive, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs.
In a third aspect, there is provided a method implemented at a network device. The method may comprise determining buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs; and performing resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs and quotas associated with respective slices.
In a fourth aspect, there is provided a method implemented at a terminal device. The method may comprise reporting, to a network device, buffer statuses for respective logical channel groups, LCGs; and receiving, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs and quotas associated with respective slices.
In a fifth aspect, there is provided an apparatus of a network device. The apparatus may comprise means for determining buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs; and means for performing resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs and quotas associated with respective slices.
In a sixth aspect, there is provided an apparatus of a terminal device. The apparatus may comprise means for reporting, to a network device, buffer statuses for respective logical channel groups, LCGs; and means for receiving, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs and quotas associated with respective slices.
In a seventh aspect, there is provided a non-transitory computer readable medium comprising program instructions for causing an apparatus to perform at least the method according to third or fourth aspect.
In an eighth aspect, there is provided a computer program comprising instructions,  which, when executed by an apparatus, cause the apparatus at least to: determine buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs; and perform resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs and quotas associated with respective slices.
In a ninth aspect, there is provided a computer program comprising instructions, which, when executed by an apparatus, cause the apparatus at least to: report, to a network device, buffer statuses for respective logical channel groups, LCGs; and receive, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs.
In a tenth aspect, there is provided a network device. The network device comprises determining circuitry configured to determine buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs; and performing circuitry configured to perform resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs and quotas associated with respective slices .
In an eleventh aspect, there is provided a terminal device. The terminal device comprises reporting circuitry configured to report, to a network device, buffer statuses for respective logical channel groups, LCGs; and receiving circuitry configured to receive, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs.
It is to be understood that the summary section is not intended to identify key or essential features of embodiments of the present disclosure, nor is it intended to be used to limit the scope of the present disclosure. Other features of the present disclosure will become easily comprehensible through the following description.
BRIEF DESCRIPTION OF THE DRAWINGS
Some example embodiments will now be described with reference to the accompanying drawings, where:
Fig. 1 illustrates an example network environment in which example embodiments of the present disclosure may be implemented;
Fig. 2 illustrates an example relationship among logical channels (LCHs) , logical channel groups (LCGs) and slices in which example embodiments of the present disclosure may be implemented;
Fig. 3 illustrates an example use-case which may be used in some embodiments of the present disclosure;
Fig. 4 illustrates an example flowchart of a method implemented at a network device according to some embodiments of the present disclosure;
Fig. 5 illustrates example LCHs and corresponding buffer sizes in different slots in some embodiments of the present disclosure;
Fig. 6 illustrates example BSR report occasions and UL scheduling slots in some embodiments of the present disclosure;
Fig. 7 illustrates an example overall architecture which may be used in some embodiments of the present disclosure;
Fig. 8 illustrates another example use-case which may be used in some embodiments of the present disclosure;
Fig. 9 illustrates an example flowchart of a method implemented at a terminal device according to some embodiments of the present disclosure;
Fig. 10 illustrates an example simplified block diagram of an apparatus that is suitable for implementing embodiments of the present disclosure; and
Fig. 11 illustrates an example block diagram of an example computer readable medium in accordance with some embodiments of the present disclosure.
Throughout the drawings, the same or similar reference numerals represent the same or similar element.
DETAILED DESCRIPTION
Principle of the present disclosure will now be described with reference to some example embodiments. It is to be understood that these embodiments are described only for the purpose of illustration and help those skilled in the art to understand and implement the present disclosure, without suggesting any limitation as to the scope of the disclosure. The disclosure described herein may be implemented in various manners other than the ones described below.
In the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skills in the art to which this disclosure belongs.
References in the present disclosure to “one embodiment, ” “an embodiment, ” “an example embodiment, ” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
It may be understood that although the terms “first” and “second” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the listed terms.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a” , “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” , “comprising” , “has” , “having” , “includes” and/or “including” , when used herein, specify the presence of stated features, elements, and/or components etc., but do not preclude the presence or addition of one or more other features, elements, components and/or combinations thereof.
As used in this application, the term “circuitry” may refer to one or more or all of the following:
(a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and
(b) combinations of hardware circuits and software, such as (as applicable) :
(i) a combination of analog and/or digital hardware circuit (s) with  software/firmware and
(ii) any portions of hardware processor (s) with software (including digital signal processor (s) ) , software, and memory (ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions) and
(c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.
This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
As used herein, the term “communication network” refers to a network following any suitable communication standards, such as long term evolution (LTE) , LTE-advanced (LTE-A) , wideband code division multiple access (WCDMA) , high-speed packet access (HSPA) , narrow band Internet of things (NB-IoT) and so on. Furthermore, the communications between a terminal device and a network device in the communication network may be performed according to any suitable generation communication protocols, including, but not limited to, the third generation (3G) , the fourth generation (4G) , 4.5G, the fifth generation (5G) communication protocols, and/or beyond. Embodiments of the present disclosure may be applied in various communication systems. Given the rapid development in communications, there will of course also be future type communication technologies and systems with which the present disclosure may be embodied. It should not be seen as limiting the scope of the present disclosure to only the aforementioned system.
As used herein, the term “network device” refers to a node in a communication network via which a terminal device accesses the network and receives services therefrom. The network device may refer to a base station (BS) or an access point (AP) , for example, a node B (NodeB or NB) , an evolved NodeB (eNodeB or eNB) , a NR NB (also referred to as  a gNB) , a remote radio unit (RRU) , a radio header (RH) , a remote radio head (RRH) , a relay, a low power node such as a femto, a pico, and so forth, depending on the applied terminology and technology.
The term “terminal device” refers to any end device that may be capable of wireless communication. By way of example rather than limitation, a terminal device may also be referred to as a communication device, user equipment (UE), a subscriber station (SS), a portable subscriber station, a mobile station (MS), or an access terminal (AT). The terminal device may include, but is not limited to, a mobile phone, a cellular phone, a smart phone, voice over IP (VoIP) phones, wireless local loop phones, a tablet, a wearable terminal device, a personal digital assistant (PDA), portable computers, desktop computer, image capture terminal devices such as digital cameras, gaming terminal devices, music storage and playback appliances, vehicle-mounted wireless terminal devices, wireless endpoints, mobile stations, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), USB dongles, smart devices, wireless customer-premises equipment (CPE), an Internet of things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like. In the following description, the terms “terminal device”, “communication device”, “terminal”, “user equipment” and “UE” may be used interchangeably.
As used herein, the term “network slice” refers to a logical network that provides specific network capabilities and network characteristics. Operators may divide a network into multiple virtual end-to-end networks on a unified infrastructure. Each slice is logically isolated in terms of the radio access network, the bearer network, the core network, etc., and includes its own unique delay, throughput, security and bandwidth features to meet the requirements of a wide variety of applications.
The term “logical channel” used herein refers to channels divided according to functions, which may be used to convert data formats between transport channels and bearers, etc. The term “data radio bearer” or “DRB” refers to a radio bearer only used for user plane IP packets between the air interface of the UE and the base station. It may be understood that logical channels or channels used herein may be used to represent DRBs. In the following description, the terms “logical channel”, “channel”, “data radio bearer” and “bearer” may be used interchangeably.
As mentioned above, with the development of communication technology, many of the services such as eMBB, mMTC, uRLLC, etc. are quite demanding on high bandwidth, low-latency and ultra-reliability. Network slicing may support these services simultaneously with service differentiation and guaranteed performance and may accommodate several independent logical networks for different business needs and SLA requirements while running on shared physical infrastructure.
RAN slicing will allow new business models to evolve. A mobile operator may be able to:
i. support multiple slices/public land mobile networks (PLMNs) with an agreed share of RAN resources indicated by the SLA; and
ii. customize the resources for given traffic characteristics, services and SLAs.
The inventors notice that while it is relatively simple to enforce “slice quota control” in the downlink, there are some inherent challenges when it comes to uplink (UL) slice scheduling, especially RAN slice-aware scheduling. In fact, with the existing solutions, RAN slice-aware scheduling might be quite difficult.
On one hand, buffer status reports (BSRs) sent by the UE to the serving gNB provide details on the amount of data waiting for transmission in the UL buffers at the UE. However, in a BSR, there is no “slice-specific” information or logical channel identifier sent by the UE to the gNB while requesting uplink grants. In other words, the UE does not send any slice-specific details (i.e., that the resources have to be allocated from a specific slice quota) while requesting uplink grants.
On the other hand, allocation of grants by the gNB is performed at the per-UE level. Upon receiving uplink grants, the UE selects bearer(s) for data transmission according to priorities and other parameters configured on the UE by the gNB. In other words, the UE uses the standardized logical channel prioritization (LCP) procedure while allocating resources to transmit data in the uplink and does not consider any network slicing aspects. Besides, details on from which slice the resources were allocated by the gNB are not communicated to the UE either.
In summary, when the UE requests uplink grants from the gNB, it does not specify for which particular logical channel it is requesting them (it requests at the LCG level), and when the gNB grants resources, it does so at a “UE level” and does not enforce that the allocated resources have to be used for a particular logical channel belonging to the slice from whose quota the resources were granted. In other words, the uplink multiplexing is done according to a set of well-defined rules in the UE (as per the logical channel prioritization procedure). This makes it difficult to manage and enforce slice-specific quotas for their respective logical channels in uplink transmissions. Thus, there is a need for a solution for slice scheduling in uplink transmission.
According to embodiments of the present disclosure, there is provided a solution for slice scheduling. In this solution, a network device determines buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs. Based on the buffer statuses for respective LCHs and quotas associated with respective slices, the network device performs resource allocation from respective slices to the terminal devices. As such, in embodiments of the present disclosure, the network device may determine the amount of resources required at the slice level. Therefore, RAN slice resource quotas in the uplink direction may be effectively controlled and UL grants from respective slices may be appropriately allocated, thereby improving scheduling efficiency and resource utilization.
Example embodiments of the present disclosure for slice scheduling will be described below with reference to Figs. 1-11.
Fig. 1 illustrates an example network environment 100 in which example embodiments of the present disclosure may be implemented. The environment 100, which may be a part of a communication network, comprises terminal devices and network devices.
As illustrated in Fig. 1, the communication network 100 may comprise a network device 110 (hereinafter may also be referred to as a gNB 110) . The communication network 100 may further comprise a terminal device 120. The network device 110 may manage a cell. The network device 110 and the terminal device 120 may communicate data and control information to each other in the coverage of the cell. A link from the network device 110 to the terminal device 120 is referred to as a downlink (DL) , while a link from the terminal device 120 to the network device 110 is referred to as an uplink (UL) .
It is to be understood that the number of network devices and terminal devices is only for the purpose of illustration without suggesting any limitations. The system 100 may include any suitable number of network devices and terminal devices adapted for implementing embodiments of the present disclosure. Although not shown, it would be appreciated that one or more terminal devices may be located in the environment 100.
Communications in the network environment 100 may be implemented according to any proper communication protocol (s) , comprising, but not limited to, the third generation (3G) , the fourth generation (4G) , the fifth generation (5G) or beyond, wireless local network communication protocols such as Institute for Electrical and Electronics Engineers (IEEE) 802.11 and the like, and/or any other protocols currently known or to be developed in the future. Moreover, the communication may utilize any proper wireless communication technology, comprising but not limited to: Multiple-Input Multiple-Output (MIMO) , Orthogonal Frequency Division Multiplexing (OFDM) , time division multiplexing (TDM) , frequency division multiplexing (FDM) , code division multiplexing (CDM) , Bluetooth, ZigBee, and machine type communication (MTC) , enhanced mobile broadband (eMBB) , massive machine type communication (mMTC) , ultra-reliable low latency communication (URLLC) , Carrier Aggregation (CA) , Dual Connection (DC) , and New Radio Unlicensed (NR-U) technologies.
Fig. 2 illustrates an example relationship among LCHs, LCGs and slices in which example embodiments of the present disclosure may be implemented.
As illustrated in Fig. 2, a plurality of LCHs, LCH-1, LCH-2 and LCH-3, may form an LCG. Different LCHs may have their respective quotas in different slices SLICE-1, SLICE-2 and SLICE-3, but that indication will not be sent to the gNB while requesting grants. Also, the reported buffer status is at the LCG level and not per-LCH. It is to be understood that the number of LCHs, LCGs and slices is only for the purpose of illustration without suggesting any limitations.
As mentioned earlier, there is no direct way to map resource requests to slices to provide UL grants from the respective slice. The UE uses its own discretion (as per the “logical channel prioritization” procedure) while making use of the received grants and does not consider any slice-specific aspects, and therefore the problem is further aggravated. To solve these problems, an advanced method for efficient slice-volume control of RAN slice quotas in the uplink is required.
In the present disclosure, there is provided a solution of slice scheduling wherein there are two main aspects: controlling the allocation of resources from respective slices; modifying the scheduling weights to facilitate the usage of resources (slice quota) by the respective bearer (s) .
There are millions of UEs in use, and it is necessary to enable them to make use of RAN slicing benefits without software upgrades. This approach can be used within the scope of current 3GPP specifications. There are various use-cases to deal with, and there exist no concrete SLAs yet (they are still evolving), making the solution more complex. The proposed method will cover all use-cases (say, service-specific slices like uRLLC, eMBB, mMTC, etc.) and tenant-specific services (like individual tenants hosting their own value-added services using a certain slice volume).
The RAN resources to be controlled (rationed) may be physical resource blocks (PRBs). These resources may be divided into slices as per their percentage quotas of the available maximum resources, in accordance with SLAs. Say, for an FR1 TDD cell, if there are 3 slices (for example, slice-1, slice-2 and slice-3) and their quotas are 33%, 20% and 47%, then they may get 90, 54 and 128 PRBs respectively from the total 273 PRBs in a slot. The objective of slice-volume control is to ensure that the UEs, or the bearers of the UEs, with their due quota in specific slice(s) make use of the same with priority. As resources are not expected to go unused, any resources not used by UEs belonging to a slice are allocated to UEs from other slices in every slot as per SLAs.
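For illustration, the quota-to-PRB split in the example above can be sketched in a few lines; the floor-based rounding and the function name `prbs_per_slice` are assumptions, as the text does not specify a rounding rule:

```python
# Sketch of the per-slot PRB split from the example above (FR1 TDD, 273 PRBs).
# Assumption: each slice's share is floored to a whole number of PRBs.
TOTAL_PRBS = 273

def prbs_per_slice(quotas, total_prbs=TOTAL_PRBS):
    """Map fractional slice quotas to whole PRBs in a slot."""
    return {name: int(q * total_prbs) for name, q in quotas.items()}

shares = prbs_per_slice({"slice-1": 0.33, "slice-2": 0.20, "slice-3": 0.47})
print(shares)  # {'slice-1': 90, 'slice-2': 54, 'slice-3': 128}
```

Any PRBs left over by the flooring (here 273 − 272 = 1) simply fall into the unused pool that, as noted above, is redistributed across slices as per the SLAs.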
In the context of RAN slicing, there are various use-cases to deal with, and the complexity will only keep increasing with new services and business models. As of now, there exist no concrete SLAs (they are still evolving), making the solution more complex. For purposes of illustration, two example use-cases will be mentioned below.
Fig. 3 illustrates an example use-case which may be used in some embodiments of the present disclosure. Different logical channels belonging to a particular LCG may have resource quotas on different slices. As the BSR is at the LCG level and there is no indication of which specific logical channel(s) are requesting resources, there is no direct way to map resource requests to slices. So, indirect means as disclosed in the present disclosure need to be used.
Fig. 4 illustrates an example flowchart of a method 400 implemented at a network device according to some embodiments of the present disclosure. For the purpose of discussion, the method 400 will be described from the perspective of the network device  110 with reference to Fig. 1 and Fig. 3. It is to be understood that method 400 may further include additional blocks not shown and/or omit some shown blocks, and the scope of the present disclosure is not limited in this regard.
At block 410, the network device 110 may determine buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs.
In some embodiments, LCHs in a LCG are associated with two or more slices as illustrated in Fig. 3. In such a case, the determining buffer statuses for respective slices may comprise: obtaining the buffer statuses for respective LCHs associated with respective slices using a buffer status prediction model based on buffer statuses reported by terminal devices for LCGs.
In some embodiments, the obtaining the buffer statuses for the respective LCHs using a buffer status prediction model may comprise: estimating buffer statuses for the respective LCHs within an estimation window based on historical buffer status reports from terminal devices within a statistics window before the estimation window using the buffer status prediction model.
In an example, a machine learning (ML) model is used as the buffer status prediction model to estimate the data waiting for uplink grants on each LCH at the UE. This gives an indication of how many resources are to be granted (considering the spectral efficiency) to specific LCHs from their respective slices in accordance with the slice quotas. As mentioned earlier, the UE does not report per-LCH buffer status but only the LCG-level status. As the LCHs in the same LCG may have their quotas in different slices, it is not known how many resources are to be allocated from which slice(s). So, this ML-based buffer estimation provides an indication of the amount of resources to be allocated from the slice(s).
For illustration purposes, an example scheme for the buffer status estimation will be described hereinafter.
Example Scheme of Per-LCH Buffer Status Estimation
In this example scheme, a simple linear relation between an LCH’s buffer status and the LCG’s buffer status is defined. The gNB may only deduce the buffer status of an LCH based on what is received, and the received bytes are the outgoing data of the UE buffer. The assumption here is that the pattern of the outgoing data of the UE buffer is steady over a certain period.
In this scheme, R_total(n) and R_LC(i)(n) denote the total received bytes for an LCG and the received bytes for logical channel i, respectively, in the n-th statistic window:
R_total(n) = Σ_{t=t_0(n)}^{t_0(n+1)-1} r_total(t)    (1)
R_LC(i)(n) = Σ_{t=t_0(n)}^{t_0(n+1)-1} r_LC(i)(t)    (2)
where r_total(t) and r_LC(i)(t) denote the bytes received in slot t for the LCG and for logical channel i, respectively; n represents the sequence number of the BSR and also the n-th statistic window; t represents the statistic time within the statistic window; t_0(n) is the starting time point (in slots) of the n-th statistic window, which may be the time point when the BSR is received; and t_0(n+1) is the starting time point (in slots) of the (n+1)-th statistic window.
The UE may have different multiplexing behaviors, but generally according to 3GPP, for each transmission block during a certain period, a logical channel may occupy a constant or prioritized part, and a flexible part if there are still spare bytes in the transport block (TB).
For example, as shown in Fig. 5, logical channel LC0 occupies constant parts and flexible parts in slot 0 and slot 1. The sizes of the constant parts in slot 0 and slot 1 are the same. The sizes of the flexible parts in slot 0 and slot 1 are different, and the size of a flexible part depends on the transport block size (TBS).
The model of the received data amount for respective logical channel may be defined as below:
R_LC(i) = k_LC(i) * R_total + b_LC(i)    (3)
where k_LC(i) represents the proportion of a specific logical channel’s data relative to all the data of its logical channel group, R_total represents the total received bytes for the LCG, and b_LC(i) represents a bias introduced for the guaranteed or prioritized bit rate of certain logical channels. The initial value for k_LC(i) may be
k_LC(i) = 1/N
and the initial value for b_LC(i) may be 0, wherein N represents the number of the LCHs in the LCG.
In some embodiments, the buffer status prediction model has a first coefficient and a second coefficient, wherein the first coefficient denotes a proportion of data of the LCH to data of the LCG, and the second coefficient denotes a bias for a guaranteed or prioritized bit rate of the LCH.
In some embodiments, the first coefficient and the second coefficient of the buffer status prediction model are determined by: obtaining the first coefficient and the second coefficient based on historical information of de-multiplexing results of packets received by the network device by using a machine learning algorithm.
In an example, with an ML algorithm, the history of the MAC entity’s de-multiplexing results of received packets may be used to estimate the two coefficients k_LC(i) and b_LC(i), wherein k_LC(i) may be the first coefficient and b_LC(i) may be the second coefficient.
For example, a cost function may be set up as below:
J(k_LC(i), b_LC(i)) = Σ_{n=N_0}^{N} [R_LC(i)(n) - (k_LC(i) * R_total(n) + b_LC(i))]^2    (4)
wherein n represents the sequence number of the BSR and also the n-th statistic window; i represents the sequence number of the LCH; N represents the N-th BSR report occasion and also the N-th statistic window; and N_0 represents the N_0-th BSR report occasion.
And the way to find k_LC(i) and b_LC(i) is to achieve the smallest possible value of the cost function as follows:
(k_LC(i)(N), b_LC(i)(N)) = argmin_{k_LC(i), b_LC(i)} J(k_LC(i), b_LC(i))    (5)
wherein i represents the sequence number of the LCH and N represents the N-th statistic window.
In an example, a linear regression method may be used, finding the coefficients that make the partial derivatives equal to zero as follows:
∂J/∂k_LC(i) = -2 Σ_{n=N_0}^{N} R_total(n) * [R_LC(i)(n) - k_LC(i) * R_total(n) - b_LC(i)] = 0    (6)
∂J/∂b_LC(i) = -2 Σ_{n=N_0}^{N} [R_LC(i)(n) - k_LC(i) * R_total(n) - b_LC(i)] = 0    (7)
wherein n represents the n-th statistic window; i represents the sequence number of the LCH; N represents the N-th statistic window; k_LC(i)(N) represents the proportion of a specific logical channel’s data relative to all the data of its logical channel group in the N-th statistic window; and b_LC(i)(N) represents the bias introduced for the guaranteed or prioritized bit rate of certain logical channels in the N-th statistic window.
Then, the estimation functions for k_LC(i) and b_LC(i) may be obtained:
k_LC(i)(N) = [M * Σ_{n=N_0}^{N} R_total(n) * R_LC(i)(n) - (Σ_{n=N_0}^{N} R_total(n)) * (Σ_{n=N_0}^{N} R_LC(i)(n))] / [M * Σ_{n=N_0}^{N} R_total(n)^2 - (Σ_{n=N_0}^{N} R_total(n))^2]    (8)
b_LC(i)(N) = [Σ_{n=N_0}^{N} R_LC(i)(n) - k_LC(i)(N) * Σ_{n=N_0}^{N} R_total(n)] / M    (9)
where M = N - N_0 + 1 is the number of statistic windows considered.
wherein n represents the n-th statistic window; i represents the sequence number of LCH, N represents the N-th statistic window.
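The closed-form least-squares update described above can be sketched as follows; the function name and the synthetic per-window history are illustrative assumptions, not part of the disclosure:

```python
def fit_lch_coefficients(r_total, r_lch):
    """Ordinary least-squares fit of R_LC(i) = k * R_total + b over the
    historical statistic windows (closed-form linear regression).
    r_total: received bytes per window for the whole LCG
    r_lch:   received bytes per window for one logical channel"""
    m = len(r_total)
    sum_x, sum_y = sum(r_total), sum(r_lch)
    sum_xy = sum(x * y for x, y in zip(r_total, r_lch))
    sum_xx = sum(x * x for x in r_total)
    denom = m * sum_xx - sum_x * sum_x
    if denom == 0:
        return None  # degenerate history: keep previous (or initial) coefficients
    k = (m * sum_xy - sum_x * sum_y) / denom
    b = (sum_y - k * sum_x) / m
    return k, b

# Synthetic history: the LCH carries half of the LCG's bytes plus a fixed
# 100-byte prioritized part in every window.
k, b = fit_lch_coefficients([1000, 2000, 3000], [600, 1100, 1600])
print(k, b)  # 0.5 100.0
```

The closed form avoids iterative training at the gNB: each new statistic window only updates a handful of running sums.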
Using the updated coefficients and the periodic buffer status reports (per LCG) from the UE, the network may estimate the amount of data of the respective logical channels from the granted transport block size and the buffer status.
Next, to describe examples of the buffer status estimation, reference is made to Fig. 6, which illustrates an example of BSR report occasions and UL scheduling slots that may be used in example embodiments of the present disclosure. As shown in Fig. 6, there are multiple UL scheduling slots and multiple BSR report occasions, and the BSR report occasions may be periodic. There are also a statistic window, an estimation window and a historical data sliding window. The historical data sliding window contains multiple statistic windows. The statistic window is used to collect statistics on received bytes, and the statistical received bytes may be used to estimate the received bytes during the estimation window.
The statistic window may be between 2 BSR events of the historical data sliding window.
The length of the statistic window for received bytes can be a fixed averaging window size or the time interval between 2 BSR events; a BSR may be triggered in the following situations:
(1) Arrival of data with higher priority than that currently in the transmission buffer, that is, data in a logical-channel group with higher priority than the one currently being transmitted, as this may impact the scheduling decision;
(2) Periodically as controlled by a timer;
(3) If the amount of padding required to match the scheduled transport block size is larger than a buffer-status report, a buffer-status report is inserted, as it is better to exploit the available payload for useful scheduling information instead of padding if possible.
The estimation function for the data size that will be received for a specific logical channel in the next granted slot(s) may be:
R_LC(i)_est = k_LC(i) * TBS_granted + b_LC(i)    (10)
where TBS_granted is the next granted transport block size, which may need to satisfy the following condition:
TBS_granted ≤ B(n) - R_total(n)    (11)
where B(n) is the total buffer size from the n-th BSR in the n-th statistic window, and R_total(n) is the total received bytes from the current UE since the n-th BSR report.
Assuming all data in the UE’s buffer can be granted, the data to be granted from the buffer reported by BSR (buffer_reported) for LCG i in the n-th statistic window, less the buffer that has already been scheduled (buffer_scheduled) in the n-th statistic window, may be calculated as below:
TBS_to_be_granted = Σ_{all LCGs} buffer_reported_LCG(i)(n) - buffer_scheduled(n)    (12)
So the buffer per LCH may be estimated as below:
buffer_estimate_LC(i) = k_LC(i) * TBS_to_be_granted + b_LC(i)    (13)
In this way, the network may estimate the buffer status for each respective logical channel.
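The per-LCH split of the reported LCG buffer can be sketched as follows; the function and variable names are illustrative assumptions, not from the disclosure:

```python
def estimate_lch_buffers(reported_lcg_buffers, buffer_scheduled, coeffs):
    """Estimate per-LCH buffers from LCG-level BSR data: subtract what has
    already been scheduled, then split the remainder across LCHs using the
    fitted (k, b) coefficients of the linear model.
    reported_lcg_buffers: dict LCG -> bytes reported in the BSR
    buffer_scheduled:     bytes already scheduled in this statistic window
    coeffs:               dict LCH -> (k, b)"""
    tbs_to_be_granted = sum(reported_lcg_buffers.values()) - buffer_scheduled
    return {lch: k * tbs_to_be_granted + b for lch, (k, b) in coeffs.items()}

est = estimate_lch_buffers({"lcg0": 5000}, 1000,
                           {"lch1": (0.5, 100.0), "lch2": (0.5, -100.0)})
print(est)  # {'lch1': 2100.0, 'lch2': 1900.0}
```

The two LCHs share the 4000 remaining bytes evenly, with the bias terms shifting 100 bytes toward the channel with a prioritized bit rate.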
Reference is made back to Fig. 4. At block 420, the network device 110 may perform resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs associated with respective slices.
In some embodiments, the performing resource allocation from respective slices may comprise: determining respective resource requirements on respective slices based on buffer statuses for the respective LCHs associated with respective slices and spectral efficiency, wherein the resource allocation is performed based on the determined required resources from respective slices.
In an example, as shown in Fig. 3, where different bearers from the same LCG have their quotas on different slices, the following logic may be used.
Per-LCH buffer statuses at the gNB are estimated by the ML model. How many resources are to be allocated from each slice (the aggregate of resources required by all bearers across different LCGs but belonging to that slice) may be decided based on the per-LCH buffer statuses. The overall buffer status from all LCGs may be taken into consideration and then the resources are allocated from the respective slice (limited by the slice quota).
In some embodiments, the amount of resources required (and hence to be allocated) from slice i is an aggregate over the estimated resources of each LCH whose corresponding bearer belongs to that slice:
resource_required_from_slice(i) = Σ_{b∈bearers(i)} buffer_estimate(b) / bearer_SE(b)    (14)
where bearers(i) represents the set of all bearers of slice i, bearer_SE(b) is the spectral efficiency of the UE of bearer b, and buffer_estimate(b) represents the estimated resources of bearer b (i.e., LCH b).
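The per-slice aggregation of equation (14) can be sketched as follows; the names and the numeric figures are illustrative assumptions:

```python
def resource_required_from_slice(buffer_estimates, bearer_se, bearers_of_slice):
    """Aggregate PRB demand of one slice: sum over the slice's bearers of
    estimated buffered bytes divided by the bearer's spectral efficiency
    (bytes carried per PRB), as in equation (14)."""
    return sum(buffer_estimates[b] / bearer_se[b] for b in bearers_of_slice)

demand = resource_required_from_slice(
    {"b1": 2100.0, "b2": 1900.0},  # estimated buffered bytes per bearer
    {"b1": 3.0, "b2": 2.0},        # bytes carried per PRB for each bearer's UE
    ["b1", "b2"])
print(demand)  # 1650.0 (= 700.0 + 950.0 PRBs)
```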
Allocated Resource Status Monitoring
In some embodiments, the allocated resource status may be monitored and the monitored information may be used for subsequent resource allocation per slice.
In some embodiments, the performing resource allocation from respective slices may comprise: determining required resources from respective slices based on a minimum value among the respective resource requirements on respective slices, the slice quotas for respective slices, and the available resources from respective slices limited by the overall reported buffer condition, modified by correlation weights for respective slices. In such a case, the resource allocation is performed based on the determined required resources from respective slices. The correlation weights may be determined, for example, based on information on the actual resources granted by the network device and the resources allocated by the terminal device for the LCHs respectively associated with respective slices.
In an example, taking into consideration other aspects like the slice quota percentage (of the max PRBs), the target share may be calculated. One such formulation is as below:
target_share(i) = corr_weight(i) * min( resource_required_from_slice(i), quota(i) * PRB_max, (resource_required_from_slice(i) / Σ_{j∈S} resource_required_from_slice(j)) * Σ_{g∈K} buffer_reported(g) / lcg_SE(g) )    (15)
where S is the set of all slices, K is the set of all LCGs across all slices, and lcg_SE(g) is the spectral efficiency of the UE of LCG g. The third term in the formulation is to limit the allocation as per the overall reported buffer. The corr_weight(i) is the correlation weight. The number of resources (PRBs) determined as above is allocated from slice i and provided as UL grants to the UE(s) or their DRBs.
In some embodiments, the correlation weights may be determined based on a credit balance or debit balance indicator related to respective slices. The credit balance may indicate the amount by which the resources already allocated to an LCH by the terminal device exceed the actual resources granted by the network device for the LCH. The debit balance may indicate the amount by which the resources already allocated to an LCH by the terminal device fall short of the actual resources granted by the network device for the LCH.
In some embodiments, when an LCH is allocated more PRBs than what is expected by the gNB according to the slice-aware scheduling, it is accounted as a “credit balance” under that LCH. The credit balance is equal to the difference between the allocated PRBs and the expected PRBs. In contrast, when an LCH is allocated fewer PRBs than what is expected by the gNB according to the slice-aware scheduling, it is accounted as a “debit balance” under that LCH. The debit balance is equal to the difference between the expected PRBs and the allocated PRBs.
In some embodiments, the credit balance or debit balance indicator for the respective LCHs associated with respective slices is determined based on the information on the actual resources granted by the network device and the resources allocated by the terminal device.
For example, in a slice i, if we add up (over a control period t) all such LCHs, then:
Σ_{b∈bearers(i)} C(b,t) and Σ_{b∈bearers(i)} D(b,t) represent the credit and debit balances of slice i, respectively. C(b,t) and D(b,t) are the credit and debit balances of bearer b over a control period t.
The correlation weight corr_weight (i) of a slice i may be determined by:
corr_weight(i) = 1 + [Σ_{b∈bearers(i)} C(b,t) - Σ_{b∈bearers(i)} D(b,t)] / allocated_PRBs(i,t)    (16)
where allocated_PRBs(i,t) is the number of PRBs allocated to slice i over the control period t.
For example, as shown in Table-1, the correlation weight of slices may be derived based on the above equation with Σ_{b∈bearers(i)} C(b,t) and Σ_{b∈bearers(i)} D(b,t).
Table-1: An example of correlation weight of slices
[Table-1 is provided as an image in the original publication and is not reproduced here.]
For example, the number of available PRBs in the t-th period (of 20 ms) is equal to 273 PRBs * 40 slots = 10920, and slice-1 has a 33% quota. So, the number of PRBs allocated to slice-1 is 0.33 * 10920 ≈ 3603.
The objective is that if a slice has more debits than credits, then the resources from that slice are being used by bearers belonging to another slice, so the allocation of resources from that slice needs to be reduced. Similarly, when the credit balance is higher, the allocation of resources from that slice needs to be increased, until the maximum quota of that slice in the cell is reached.
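One plausible reading of this credit/debit adaptation can be sketched as follows; the equation in the original is an image, so the linear form below is an assumption consistent only with the stated objective (weight above 1 when credits dominate, below 1 when debits dominate):

```python
def correlation_weight(credit, debit, allocated_prbs):
    """Adapt a slice's allocation from its credit/debit balance over a
    control period: more debits than credits shrinks the weight (reduce
    allocation), more credits grows it (increase allocation).
    Assumption: this linear form is illustrative, not the patent's
    exact equation (16)."""
    return 1.0 + (credit - debit) / allocated_prbs

# Slice-1 example from the text: 0.33 * (273 PRBs * 40 slots) = 3603 PRBs.
w = correlation_weight(credit=200, debit=500, allocated_prbs=3603)
print(w < 1.0)  # True: allocation from slice-1 would be reduced
```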
In some embodiments, the resource allocation may be performed further based on scheduling weights for the respective LCHs, and wherein the scheduling weights indicate scheduling priorities of respective LCHs associated with respective slices.
Slice Scheduling Based on Scheduling Weights
In some embodiments, the resource allocation to the respective LCHs may be further performed in proportion to the scheduling weights. In an example, scheduling_weight(b) of bearer b is determined by the layer-2 packet scheduler taking into consideration the 5G QoS Identifier (5QI) of the bearer and other aspects.
In some embodiments, the scheduling weights are modified based on slice weights for respective slices or respective logical channels associated with respective slices, wherein the slice weights are determined based on the determined resource requirements on respective slices and the current usage of resources from respective slices.
In an example, depending on the SLA definition, the slice-specific weight slice_weight(i) of a slice i is derived in such a way that it reflects the promised fractional resource share of the slice from the total radio air interface resources (PRBs) over a sliding monitoring time window. Thus, slice_weight(i) is used to create relative priorities among the slices via biasing the scheduler decision, and its value is adapted as per the actual resource consumption of the slice such that the agreed slice-specific quotas are maintained.
A simple example formulation for slice_weight of slice ‘i’ may be given as follows:

slice_weight (i) = target_share (i) / current_resource_usage (i)

wherein target_share (i) is the target share of slice i and current_resource_usage (i) reflects the current (non-zero) usage of resources from slice i. This formulation is given for illustrative purposes only, without suggesting any limitation to the protection scope of the present disclosure. In other embodiments, more sophisticated formulations for the “slice_weight” computation may be applied to achieve faster and more accurate convergence to the target share.
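A minimal sketch of this simple slice_weight formulation; the explicit guard against zero usage is an addition here, reflecting the text's requirement that the current usage be non-zero:

```python
def slice_weight(target_share: float, current_resource_usage: float) -> float:
    """Illustrative slice weight: promised share over actual (non-zero) usage.

    Values > 1 boost an under-served slice; values < 1 throttle an
    over-consuming one.
    """
    if current_resource_usage <= 0:
        raise ValueError("current resource usage must be non-zero")
    return target_share / current_resource_usage
```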
To enforce the slice-specific quota, slice_weight (i) may be applied on top of the actual scheduling weight of each bearer associated with the slice. The scheduler will use the modified scheduling weight (i.e., the slice-aware scheduling weight) to decide on the scheduling priorities between requested LCHs:
modified_scheduling_weight (b) = slice_weight (i) *scheduling_weight (b)        (18)
where b refers to a bearer and i = slice (b) refers to the slice i to which bearer b belongs.
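Equation (18) can be applied per bearer as follows (the bearer identifiers and weight values are hypothetical example inputs, not from the specification):

```python
def modified_scheduling_weight(scheduling_weight: float, slice_weight: float) -> float:
    """Equation (18): bias a bearer's scheduler weight by its slice's weight."""
    return slice_weight * scheduling_weight

# Hypothetical bearers with equal 5QI-derived weights but different slice weights.
b1 = modified_scheduling_weight(2.0, 1.5)  # bearer in an under-served slice
b2 = modified_scheduling_weight(2.0, 0.5)  # bearer in an over-consuming slice
# b1 now outranks b2 in the scheduler's priority decision.
```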
In some embodiments, the scheduling weight for respective LCHs associated with a slice is further increased when the resource consumption of the slice is below the promised target share; the scheduling weight for respective LCHs associated with a slice is further reduced when the resource consumption of the slice is above the promised target share; or the scheduling weight for respective LCHs associated with a slice is further increased to a higher weight, to speed up convergence to the target share, if the difference between the determined required resources and the current usage of resources from the respective slice is larger than a certain threshold.
In some embodiments, the generic slice-specific weight definition and adaptation method follows the rules below:
(1) Following a resource look-up table at the scheduler, as long as a slice's resource consumption is below the promised target share, services/LCHs under the slice may be allowed to be scheduled, but as per their modified scheduling weights to enforce priority.
(2) The gNB allocates UL resources (the expected PRBs) to each bearer b in proportion to modified_scheduling_weight (b) .
(3) To speed up convergence to the target share of a slice, the slice weight may be set to a higher value when the difference between the target share and the average resource consumption is larger.
(4) When the scheduler detects that the resource consumption of the slice exceeds the target share, the slice weight is lowered and can eventually even be set to 0 when the resource consumption reaches the predefined maximum share (i.e., no scheduling of services from that slice until the average resource consumption drops below the maximum share) .
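Rules (1)-(4) above amount to a clamped weight adaptation. One possible sketch is given below; the boost gap, boost factor, and the epsilon guard on the denominator are assumptions for illustration, not values from the specification:

```python
def adapt_slice_weight(target_share: float, avg_usage: float, max_share: float,
                       boost_gap: float = 0.2, boost_factor: float = 2.0) -> float:
    """Adapt a slice weight per rules (1)-(4).

    - average usage at/above max_share -> 0 (rule 4: stop scheduling the slice)
    - usage far below the target share -> extra boost (rule 3: faster convergence)
    - otherwise                        -> simple target/usage ratio
    """
    if avg_usage >= max_share:
        return 0.0
    weight = target_share / max(avg_usage, 1e-9)  # guard against zero usage
    if target_share - avg_usage > boost_gap:
        weight *= boost_factor
    return weight
```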
In other words, if the service quality/QoS of some channels is suffering because their slice quotas are being used by other channels that have used up their own quotas, then their scheduling weights shall be scaled up so that they use their quotas and meet the service quality. At the same time, the L2 packet scheduler (L2-PS) shall skip scheduling of LCHs that have a large credit balance.
- Additionally, notifications may be sent to the tenant portal system that usage is exceeding the quota, so that the tenant can either restrict usage or purchase more resources.
During the resource allocation, the gNB makes a best effort to ensure that the slice resources at the gNB are allocated from the respective slice quota when requested by LCHs. However, once a resource grant is sent to the UE, the UE may use its own discretion (as per the logical channel prioritization procedure) to assign those grants to any of its active logical channels across different LCGs. The solution proposed herein may help in the proper allocation of UL grants from respective slices and in the continuous monitoring of allocated grants. Based on the resource consumption, the gNB will modify the scheduling weights to enable the UEs (DRBs) to make use of their slice quota on priority.
Slice Related Configurations
Information on slice related configurations may be obtained in various different ways. In some embodiments, an operation and maintenance (OAM) system may download the slice related configurations and SLAs from the “tenant slicing portal” and configure them on the gNB. The slice related configurations comprise: the slice quota (a certain percentage of the cell’s total PRBs); SLAs (the mode of sharing resources across slices: dedicated, shared, etc.); slice priorities; and any special considerations.
For illustration purposes, Fig. 7 illustrates an example overall architecture showing the network management system/tenant slicing portal system, the RAN intelligent controller (RIC), the gNB and the UE. As shown in Fig. 7, there is a standardized E2 interface between the gNB and the operation and maintenance (OAM) system/RIC, and an A1 reference point between the OAM/RIC and the operator’s network management system/tenant slicing portal. The UE may request uplink grants at LCG level and does not consider any slice quotas while assigning grants. At the gNB side, the packet scheduler is unable to send grants from respective slices when the UE sends resource requests for multiple channels in the same LCG. As a result, an efficient method of “slice-volume control” may comprise estimating buffer statuses of one or more LCHs.
Example Look-up Table maintained at the Network Side
In some embodiments, the network device 110 may build a look-up table and update it regularly to enable “slice-volume control” (i.e., the slice scheduling) in uplink transmission. For illustrative purposes, Table 2 illustrates an example look-up table.
Table-2: An example of Look-Up table on gNB
Slice quota (min, max) | LCH ID | Buffer status per-LCH at UE | UL resources granted by gNB | Granted resources used in UL | Credit/debit balance per LCH
As shown in Table-2, the table may have one or more of the following entries:
(1) Slice quota (min, max) as per SLAs and received over E2 interface;
- each slice gets a certain % of the cell’s total PRBs (RAN resources) ;
(2) Logical channel identifier (LCH ID) ;
(3) Buffer status per-LCH at UE;
- data at UE waiting for scheduling, as estimated by ML techniques;
(4) Uplink resources granted by gNB;
(5) Granted resources that are used up to send data in UL;
- updated after data is received in UL;
(6) Credit or debit balance for each LCH;
- resources granted & the actual usage per-LCH.
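The per-LCH entries (1)-(6) above might be held in a structure like the following sketch; the field names are illustrative assumptions, not taken from the specification:

```python
from dataclasses import dataclass

@dataclass
class LchEntry:
    """One row of the gNB look-up table (Table-2), per logical channel."""
    lch_id: int
    slice_quota_min: float   # min share of cell PRBs, per SLA received over E2
    slice_quota_max: float   # max share of cell PRBs
    buffer_status: int = 0   # estimated data waiting at the UE for this LCH
    prbs_granted: int = 0    # UL resources granted by the gNB
    prbs_used: int = 0       # granted resources actually used to send UL data

    @property
    def balance(self) -> int:
        """Credit (> 0) or debit (< 0): actual usage versus resources granted."""
        return self.prbs_used - self.prbs_granted

entry = LchEntry(lch_id=4, slice_quota_min=0.10, slice_quota_max=0.33,
                 prbs_granted=100, prbs_used=120)
# entry.balance > 0: the UE spent more on this LCH than the gNB granted for it.
```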
The gNB may monitor and keep track of the uplink grant requests made by LCHs/LCGs, the resources granted by the gNB, and the data received on those channels. The amount of resources (PRBs) used by the UE to send data in the uplink may be determined based on the achieved spectral efficiency (SE). While granting resources, the gNB knows from which slice (s) the resources were allocated, and on receiving data, the gNB knows on which channels the UE has sent data. If there is no correlation between the grants provided (from a specific slice to a particular LCH) and the uplink data received, then the UE has assigned the grants to other LCHs from the same LCG or to a different LCH in a different LCG. Accordingly, the ‘Credits & Debits’ balance metrics are updated in Table-2. UEs continue to report buffer status at LCG level in BSRs as is done today. The total UL grants allocated are still limited by the buffer status reported by the UEs.
Fig. 8 illustrates another example use-case which may be used in example embodiments of the present disclosure. As illustrated in Fig. 8, there are only “service specific” slices. There is just one instance of a slice for a specific service type (say, uRLLC, eMBB, mMTC, etc.), and all logical channels making use of a particular service get resources from the same slice. All logical channels multiplexed in a LCG have their resource quotas from the same slice. In this case, controlling the allocation of resources is simple, but modifying the scheduling weights needs to be handled as described in the present disclosure.
In some embodiments, the network device 110 may obtain buffer statuses reported by terminal devices for respective logical channel groups, LCGs, wherein all logical channels, LCHs, in a LCG are associated with a single slice, and a LCG is associated with a slice. Then the network device 110 may perform resource allocation from respective slices to the terminal devices, based on the buffer statuses for respective LCGs.
In some embodiments, the performing resource allocation from respective slices comprises: determining respective resource requirements on respective slices based on the buffer statuses for respective LCGs associated with respective slices and slice quotas for respective slices, wherein the resource allocation is performed based on the determined required resources for respective slices.
In some embodiments, where all bearers in a LCG have their respective quota from the same slice (say, service-specific slices like uRLLC, mMTC, etc.), the required number of resources (limited by the slice quota) is allocated from the corresponding slice i. The resource allocation to LCG g is:
resource_allocation_to_lcg (g) = buffer_status_lcg (g) / (Σ_{k∈S} buffer_status_lcg (k) ) * slice_quota (i)               (19)
where S is the set of all LCGs having quota in slice i. Basically, the quota of slice i is apportioned among all LCGs having quota in that slice, in proportion to their buffer statuses.
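Equation (19) can be sketched directly; the LCG identifiers, buffer-status values, and quota below are hypothetical example inputs:

```python
def lcg_allocations(buffer_status: dict[str, int], slice_quota: float) -> dict[str, float]:
    """Equation (19): split slice i's quota among its LCGs by buffer status."""
    total = sum(buffer_status.values())
    if total == 0:
        return {g: 0.0 for g in buffer_status}
    return {g: bs / total * slice_quota for g, bs in buffer_status.items()}

# Two hypothetical LCGs sharing one service-specific slice with a 3600-PRB quota.
alloc = lcg_allocations({"g1": 3000, "g2": 1000}, 3600)
```

Note that the allocations always sum to the slice quota (when any buffer is non-empty), so the quota is fully apportioned.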
In some embodiments, the resource allocation is performed further based on scheduling weights for respective LCGs associated with respective slices, wherein the scheduling weights indicate scheduling priorities of respective LCGs associated with respective slices.
In some embodiments, the resource allocation to respective LCHs is further  performed in proportion to the scheduling weights.
In some embodiments, the scheduling weights are modified based on slice weights for respective slices, and wherein the slice weights are determined based on the determined resource requirements of respective slices and the current usage of resources from respective slices.
In some embodiments, the scheduling weight for respective LCGs associated with a slice is further increased when the resource consumption of the slice is below the promised target share; the scheduling weight for respective LCGs associated with a slice is further reduced when the resource consumption of the slice is above the promised target share; or the scheduling weight for respective LCGs associated with a slice is further increased if the difference between the determined required resources for respective slices and the current usage of resources from respective slices is larger than a predetermined threshold.
Fig. 9 illustrates an example flowchart of a method 900 implemented at a terminal device 120 according to some embodiments of the present disclosure. For the purpose of discussion, the method 900 will be described from the perspective of the terminal device 120 with reference to Fig. 1. It is to be understood that method 900 may further include additional blocks not shown and/or omit some shown blocks, and the scope of the present disclosure is not limited in this regard.
At block 910, the terminal device 120 may report, to a network device, buffer statuses for respective logical channel groups, LCGs.
At block 920, the terminal device 120 may receive, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs.
In some embodiments, LCHs in a LCG are associated with two or more slices, and wherein the resource allocation from respective slices is determined based on buffer statuses for respective LCHs associated with respective slices obtained based on the buffer statuses reported for LCGs.
Fig. 10 is a simplified block diagram of a device 1000 that is suitable for implementing embodiments of the present disclosure. The device 1000 may be provided to implement a communication device, for example, the network device 110 or the terminal device 120 as shown in Fig. 1. As shown, the device 1000 includes one or more processors 1010, one or more memories 1020 coupled to the processor 1010, and one or more communication modules 1040 coupled to the processor 1010.
The communication module 1040 is for bidirectional communications. The communication module 1040 has at least one antenna to facilitate communication. The communication interface may represent any interface that is necessary for communication with other network elements.
The processor 1010 may be of any type suitable to the local technical network and may include one or more of the following: general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on multicore processor architecture, as non-limiting examples. The device 1000 may have multiple processors, such as an application specific integrated circuit chip that is slaved in time to a clock which synchronizes the main processor.
The memory 1020 may include one or more non-volatile memories and one or more volatile memories. Examples of the non-volatile memories include, but are not limited to, a Read Only Memory (ROM) 1024, an electrically programmable read only memory (EPROM) , a flash memory, a hard disk, a compact disc (CD) , a digital video disk (DVD) , and other magnetic storage and/or optical storage. Examples of the volatile memories include, but are not limited to, a random access memory (RAM) 1022 and other volatile memories that do not retain data during power-down.
A computer program 1030 includes computer executable instructions that are executed by the associated processor 1010. The program 1030 may be stored in the ROM 1024. The processor 1010 may perform any suitable actions and processing by loading the program 1030 into the RAM 1022.
The embodiments of the present disclosure may be implemented by means of the program 1030 so that the device 1000 may perform any process of the disclosure as discussed with reference to Figs. 2 to 9. The embodiments of the present disclosure may also be implemented by hardware or by a combination of software and hardware.
In some embodiments, the program 1030 may be tangibly contained in a computer readable medium which may be included in the device 1000 (such as in the memory 1020) or other storage devices that are accessible by the device 1000. The device 1000 may load the program 1030 from the computer readable medium to the RAM 1022 for execution. The computer readable medium may include any types of tangible non-volatile storage, such as ROM, EPROM, a flash memory, a hard disk, CD, DVD, and the like. Fig. 11 shows an  example of the computer readable medium 1100 in form of CD or DVD. The computer readable medium has the program 1030 stored thereon.
Generally, various embodiments of the present disclosure may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of embodiments of the present disclosure are illustrated and described as block diagrams, flowcharts, or using some other pictorial representations, it is to be understood that the block, apparatus, system, technique or method described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
The present disclosure also provides at least one computer program product tangibly stored on a non-transitory computer readable storage medium. The computer program product includes computer-executable instructions, such as those included in program modules, being executed in a device on a target real or virtual processor, to carry out the methods 400 and 900 as described above with reference to Figs. 2-9. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, or the like that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Machine-executable instructions for program modules may be executed within a local or distributed device. In a distributed device, program modules may be located in both local and remote storage media.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present disclosure, the computer program codes or related  data may be carried by any suitable carrier to enable the device, apparatus or processor to perform various processes and operations as described above. Examples of the carrier include a signal, computer readable medium, and the like.
The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the computer readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM) , a read-only memory (ROM) , an erasable programmable read-only memory (EPROM or Flash memory) , an optical fiber, a portable compact disc read-only memory (CD-ROM) , an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are contained in the above discussions, these should not be construed as limitations on the scope of the present disclosure, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the present disclosure has been described in languages specific to structural features and/or methodological acts, it is to be understood that the present disclosure defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (19)

  1. A network device, comprising:
    at least one processor; and
    at least one memory storing instructions that, when executed by the at least one processor, cause the network device at least to:
    determine buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs; and
    perform resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs and quotas associated with respective slices.
  2. The network device of claim 1, wherein LCHs in a LCG are associated with two or more slices, and wherein the determining buffer statuses for the respective LCHs associated with respective slices comprises:
    obtaining the buffer statuses for the respective LCHs associated with respective slices using a buffer status prediction model based on buffer statuses reported by terminal devices for LCGs.
  3. The network device of claim 2, wherein the obtaining the buffer statuses for the respective LCHs using a buffer status prediction model comprises:
    estimating buffer statuses for the respective LCHs within an estimation window based on historical buffer status reports from terminal devices within a statistics window before the estimation window using the buffer status prediction model.
  4. The network device of claim 2 or 3, wherein the buffer status prediction model has a first coefficient and a second coefficient, and wherein the first coefficient denotes a proportion of data of the LCH to data of the LCG, and the second coefficient denotes a bias for a guaranteed or prioritized bit rate of the LCH.
  5. The network device of claim 4, wherein the first coefficient and the second coefficient of the buffer status prediction model are determined by:
    obtaining the first coefficient and the second coefficient based on historical  information of de-multiplexing results of packets received by the network device by using a machine learning algorithm.
  6. The network device of any of claims 1-5, wherein the performing resource allocation from respective slices comprises:
    determining respective resource requirements on respective slices based on buffer statuses for the respective LCHs associated with respective slices and spectral efficiency, and wherein the resource allocation is performed based on the determined required resources from respective slices.
  7. The network device of any of claims 1-6, wherein performing resource allocation from respective slices comprises:
    determining required resources from respective slices based on a minimum value among respective resource requirements on respective slices, slice quotas for respective slices, and available resources from respective slices limited to the overall reported buffer condition and modified by correlation weights for respective slices;
    wherein the resource allocation is performed based on the determined required resources from respective slices; and
    wherein the correlation weights are determined based on information on actual resources granted by the network device and resources allocated by the terminal device for the LCHs respectively associated with respective slices.
  8. The network device of claim 7, wherein a credit balance or debit balance indicator for respective LCHs associated with respective slices is determined for the LCHs respectively associated with respective slices based on the information on actual resources granted by the network device and resources allocated by the terminal device,
    wherein the credit balance indicates an amount by which resources already allocated to an LCH by the terminal device exceed the actual resources granted by the network device for the LCH;
    wherein the debit balance indicates an amount by which resources already allocated to an LCH by the terminal device fall short of the actual resources granted by the network device for the LCH; and
    wherein the correlation weights are determined based on the credit balance or debit balance indicator related to respective slices.
  9. The network device of any of claims 1 to 8, wherein the resource allocation is performed further based on scheduling weights for the respective LCHs, and wherein the scheduling weights indicate scheduling priorities of respective LCHs associated with respective slices.
  10. The network device of claim 9, wherein the resource allocation to the respective LCHs is further performed in proportion to the scheduling weights.
  11. The network device of claim 9 or 10, wherein the scheduling weights are modified based on slice weights for respective slices or respective logical channels associated with respective slices, and wherein the slice weights are determined based on the determined resource requirements of respective slices and current usage of resources from respective slices.
  12. The network device of any of claims 9 to 11, wherein at least one of:
    the scheduling weight for respective LCHs associated with a slice is further increased when resource consumption of the slice is below promised target share;
    the scheduling weight for respective LCHs associated with a slice is further reduced when resource consumption of the slice is above promised target share; or
    the scheduling weight for respective LCHs associated with a slice is further increased to a higher weight to speed-up convergence to the target share if the difference between the determined required resources and current usage of resources from the respective slice is larger than a certain threshold.
  13. A terminal device, comprising:
    at least one processor; and
    at least one memory storing instructions that, when executed by the at least one processor, cause the terminal device at least to:
    report, to a network device, buffer statuses for respective logical channel groups, LCGs; and
    receive, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs.
  14. The terminal device of claim 13, wherein LCHs in a LCG are associated with two or more slices, and wherein the resource allocation from respective slices is determined based on buffer statuses for respective LCHs associated with respective slices obtained based on the buffer statuses reported for LCGs.
  15. A method, comprising:
    determining, at a network device, buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs; and
    performing resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs and quotas associated with respective slices.
  16. A method, comprising:
    reporting, at a terminal device to a network device, buffer statuses for respective logical channel groups, LCGs; and
    receiving, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs and quotas associated with respective slices.
  17. An apparatus, comprising:
    means for determining, at a network device, buffer statuses for respective logical channels, LCHs, associated with respective slices based on buffer statuses reported by terminal devices for logical channel groups, LCGs; and
    means for performing resource allocation from respective slices to the terminal devices, based on the buffer statuses for the respective LCHs and quotas associated with respective slices.
  18. An apparatus, comprising:
    means for reporting, at a terminal device to a network device, buffer statuses for respective logical channel groups, LCGs; and
    means for receiving, from the network device, resource allocation from respective slices, wherein the resource allocation from respective slices is determined based on the buffer statuses reported for LCGs and quotas associated with respective slices.
  19. A non-transitory computer readable medium comprising program instructions that, when executed by an apparatus, cause the apparatus to perform at least the method of claim 15 or 16.
PCT/CN2022/128343 2022-08-15 2022-10-28 Method and apparatus for slice scheduling WO2024036753A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/112620 WO2024036460A1 (en) 2022-08-15 2022-08-15 Methods and apparatuses for slice scheduling
CNPCT/CN2022/112620 2022-08-15

Publications (1)

Publication Number Publication Date
WO2024036753A1 true WO2024036753A1 (en) 2024-02-22

Family

ID=89940349

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2022/112620 WO2024036460A1 (en) 2022-08-15 2022-08-15 Methods and apparatuses for slice scheduling
PCT/CN2022/128343 WO2024036753A1 (en) 2022-08-15 2022-10-28 Method and apparatus for slice scheduling

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/112620 WO2024036460A1 (en) 2022-08-15 2022-08-15 Methods and apparatuses for slice scheduling

Country Status (2)

Country Link
CN (1) CN117897934A (en)
WO (2) WO2024036460A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170164229A1 (en) * 2015-12-08 2017-06-08 Huawei Technologies Co., Ltd. Method and apparatus for remote buffer status maintenance
WO2018059317A1 (en) * 2016-09-30 2018-04-05 中兴通讯股份有限公司 Method and apparatus for managing network slice and computer storage medium
WO2021002784A1 (en) * 2019-07-01 2021-01-07 Telefonaktiebolaget Lm Ericsson (Publ) Uplink scheduling

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2574247A (en) * 2018-05-31 2019-12-04 Nec Corp Communication system
US11910243B2 (en) * 2018-11-02 2024-02-20 Nokia Solutions And Networks Oy Methods and apparatuses for network slice minimum and maximum resource quotas
CN114651482A (en) * 2019-11-06 2022-06-21 三星电子株式会社 Method and apparatus for controlling network slicing in wireless communication system
WO2021187829A1 (en) * 2020-03-17 2021-09-23 엘지전자 주식회사 Communication related to network slice
US11659512B2 (en) * 2020-03-17 2023-05-23 Apple Inc. Knowledge of slice quota availability for a UE
CN112543508A (en) * 2020-12-17 2021-03-23 国网安徽省电力有限公司信息通信分公司 Wireless resource allocation method and network architecture for 5G network slice

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUAWEI, HISILICON: "Discussion on restricting the rate per UE per network slice", 3GPP DRAFT; R2-2010183, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. RAN WG2, no. electronic; 20201102 - 20201113, 27 October 2020 (2020-10-27), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France , XP051947945 *
NEC: "KI#4 New Sol#X: Network slice quota event notification", 3GPP DRAFT; S2-2003628, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. SA WG2, no. Electronic, Elbonia; 20200601 - 20200612, 22 May 2020 (2020-05-22), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France, XP052460434 *

Also Published As

Publication number Publication date
WO2024036460A1 (en) 2024-02-22
CN117897934A (en) 2024-04-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22955525

Country of ref document: EP

Kind code of ref document: A1